Training Data Poisoning
Created: 2024-03-12 12:02
#quicknote
One of the Vulnerabilities in LLM-base applications.
Training data poisoning involves the intentional manipulation of data used for either initial model training, fine-tuning, or embedding processes. Attackers aim to introduce vulnerabilities, backdoors, or biases into the model, potentially leading to:
- Security Compromises:Â Poisoning can enable unauthorized actions or the leakage of sensitive information.
- Performance Issues:Â Deliberate degradation of the model's accuracy or reliability.
- Ethical Concerns:Â Introduction of harmful biases that reflect in the model's output.
- Downstream Risks:Â Exploitation of vulnerabilities in software dependent on the poisoned model.
- Reputational Damage:Â Loss of trust due to security breaches or biased outputs.
Resources
Tags
#aisecurity #llm #cybersecurity