AI doesn't 'know' facts; it predicts likely text. Sometimes it generates plausible-sounding but entirely fabricated information: made-up research papers, non-existent legal cases, wrong dates and statistics. This is called hallucination. The more obscure the topic, the higher the hallucination risk, so never trust AI output for critical facts without verification.
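A toy sketch can make the "predicts likely text, doesn't know facts" point concrete. The bigram model below is vastly simpler than a real LLM, and the corpus, function names, and seed are all invented for this example, but it shows the core failure mode: the model recombines word patterns it has seen, so it can emit fluent sentences stating facts that were never in its training data.

```python
import random

# Toy "language model": a bigram table built from a tiny made-up corpus.
# It tracks only which word tends to follow which -- no notion of truth.
corpus = (
    "the study was published in nature in 2019 . "
    "the study was published in science in 2021 . "
    "the paper was cited in court in 2020 ."
).split()

# Record every observed next-word for each word.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, n=8, seed=0):
    """Sample a continuation by repeatedly picking a likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every individual word pair in the output was seen in training, yet the model can happily splice them into a sentence like "the study was published in court in 2019", a statement no source ever made. That is hallucination in miniature: locally plausible, globally unverified.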
**Key takeaway:** AI can confidently lie. Always verify important facts.