Apple is among the tech companies currently at the center of a major legal and societal debate, triggered by a joint letter from 42 US attorneys general. The letter addresses the growing risks of generative AI and the question of how companies like Apple should assume responsibility. It makes clear that, in the view of the authorities, the safety measures implemented so far are insufficient and that concrete threats have already resulted in real harm.
The National Association of Attorneys General has sent a 12-page document to 13 major tech companies, including Apple, Google, Meta, Microsoft, and OpenAI, as well as other companies that develop or sell AI products. The letter focuses primarily on the growing number of AI responses containing sycophantic or delusional content. The attorneys general emphasize that these developments are not merely theoretical risks but have already led to incidents involving violence, mental health crises, and even deaths. Interactions between AI systems and children are described as particularly sensitive and, in some cases, disturbing. The letter calls for stricter safeguards and demands clear accountability within the companies.
Growing concern about sycophantic and delusional AI outputs
At its core, the letter addresses a growing number of problematic responses from generative AI systems in conversation. The attorneys general describe cases in which AI models confirm users' irrational beliefs, fabricate false facts, or assume roles that amplify psychologically distressing content. According to the letter, these problems are not isolated incidents but have increased significantly in recent years.
Among the most serious criticisms are incidents in which children interacted with AI chatbots that assumed adult personas. In some cases, these chatbots sought romantic relationships with minors, encouraged drug use or violence, advised against medical treatment, or urged children to hide conversations from their parents. The attorneys general consider these examples a clear indication that child protection measures must be strengthened.
Examples of real harm
The letter cites several incidents intended to illustrate the risks of generative AI models. Among them is the case of 47-year-old Canadian Allan Brooks, who, after repeated interactions with ChatGPT, became convinced he had developed a new kind of mathematics. This delusion led to a mental health crisis.
Another case involves 14-year-old Sewell Setzer III from the United States. His death by suicide is the subject of an ongoing lawsuit, which alleges that a Character.AI chatbot encouraged him to take his own life so that he could join it. The letter makes clear that these two cases are only examples and that the actual number of problematic interactions is significantly higher.
The attorneys general place these incidents within a broader context that includes domestic violence, poisonings, psychotic episodes, and other forms of real harm. In their assessment, these cases illustrate the potential for far-reaching negative consequences, regardless of whether users had any pre-existing vulnerability.
Indications of possible legal violations
The letter also indicates that some of the companies involved may already have violated laws. Specifically mentioned are potential violations of consumer protection laws, regulations on risk warnings, provisions protecting children's online privacy, and, in some cases, even criminal law. The attorneys general are therefore demanding greater transparency and accountability in the deployment of AI models.
Demands on Apple and the other companies
The letter lists a number of specific measures that the attorneys general want to see made binding. These include:
- Development and enforcement of clear guidelines to prevent sycophantic and delusional outputs
- Rigorous safety testing before the release of any new AI model
- Clearly visible, persistent warnings about potentially harmful outputs
- A clear organizational separation between revenue optimization and safety-related decisions
- The appointment of executives directly responsible for AI safety
- Authorization of independent audits, particularly regarding impacts on children
- Publication of incident logs and response times
- Notification of users who have been exposed to harmful or misleading AI content
- Assurance that AI systems do not generate illegal or dangerous content for children
- Age-appropriate safeguards that shield minors from violent and sexual content
All companies are expected to confirm by January 16, 2026 that they intend to implement these measures. In addition, the attorneys general expect in-person meetings to discuss the current problems and potential solutions in detail.
Who signed the letter
The letter was signed by the Attorneys General of the following states and U.S. territories: Alabama, Alaska, American Samoa, Arkansas, Colorado, Connecticut, Delaware, District of Columbia, Florida, Hawaii, Idaho, Illinois, Iowa, Kentucky, Louisiana, Maryland, Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Montana, New Hampshire, New Jersey, New Mexico, New York, North Dakota, Ohio, Oklahoma, Oregon, Pennsylvania, Puerto Rico, Rhode Island, South Carolina, Utah, Vermont, U.S. Virgin Islands, Virginia, Washington, West Virginia, and Wyoming.
The breadth of this list shows how strong political support for stricter AI rules has become, and that pressure on Apple and the other companies is mounting on societal as well as technical grounds.
Why the pressure on Apple and other providers is increasing
The debate surrounding the risks and responsibilities of artificial intelligence has reached a new level. With the letter from the 42 attorneys general, Apple and other tech giants are coming under increased scrutiny and must now demonstrate that safety is not being sacrificed for economic gain. Examples from recent years show how quickly AI interactions can lead to real harm, especially when children are involved. The coming months will be crucial, as the authorities expect binding commitments to comprehensive safeguards. For tech companies, this marks the beginning of a period in which transparency, safety, and clear accountability in the use of AI will become increasingly important and will significantly shape their future direction. (Image: Natakorn1981 / DepositPhotos.com)