Context poisoning occurs when misleading or false information is injected into training data, skewing the AI's learning process. Imagine a teacher being taught from false answer keys; the teacher's credibility and ability to educate would suffer. In AI, the analogous risk is unreliable output: a chatbot trained on deliberately biased data will reproduce those biases in its interactions, harming users. Understanding context poisoning helps you build robust AI systems that are less vulnerable to manipulation.
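To make the mechanism concrete, here is a minimal, hypothetical sketch (toy data and a deliberately simple keyword classifier, not a production model) showing how a handful of mislabeled examples slipped into a training set can flip a model's learned behavior:

```python
from collections import Counter, defaultdict

def train(dataset):
    # Count label frequencies per token, then remember the
    # majority label seen for each token during training.
    counts = defaultdict(Counter)
    for text, label in dataset:
        for tok in text.lower().split():
            counts[tok][label] += 1
    return {tok: c.most_common(1)[0][0] for tok, c in counts.items()}

def predict(model, text):
    # Each known token votes for its majority label.
    votes = Counter(model[tok] for tok in text.lower().split() if tok in model)
    return votes.most_common(1)[0][0] if votes else None

clean = [("great service", "pos"), ("great product", "pos"),
         ("terrible service", "neg"), ("terrible delay", "neg")]
# An attacker injects a few mislabeled copies of the word "great".
poisoned = clean + [("great", "neg")] * 5

clean_model = train(clean)
poisoned_model = train(poisoned)
print(predict(clean_model, "great"))     # pos
print(predict(poisoned_model, "great"))  # neg -- the poisoned labels win
```

Five injected examples are enough to outvote the two genuine ones, so the poisoned model now treats "great" as negative; real attacks follow the same principle at much larger scale.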
**Key takeaway:** Poisoned training data quietly corrupts what a model learns, so curating and validating data sources is a core defense against manipulation.