Recent incidents of ChatGPT generating demonic rituals and Google's AI promoting dubious medical procedures expose a fundamental flaw in large language models: their inability to preserve context. Stripped of their cultural and historical framing, AI outputs that would otherwise read as harmless references become dangerous misinformation. This context collapse threatens to undermine trust in AI systems and raises critical questions about training data transparency.