After disturbing mental health incidents involving AI chatbots, state attorneys general sent a letter to major AI companies, warning them to fix "delusional outputs" or risk legal action, TechCrunch reported Wednesday.
The letter, signed by 42 attorneys general from U.S. states and territories, asked Microsoft, OpenAI, Google, Anthropic, and others to implement new safeguards to protect users.
It called for safeguards including new incident reporting procedures to notify users when chatbots produce harmful outputs, as well as transparent third-party audits of large language models for signs of delusional or sycophantic ideations.
Those third parties could include academics and civil society groups, and should be allowed to "evaluate systems pre-release without retaliation and to publish their findings without prior approval from the company," the letter said.