At a time when complaints are piling up against ChatGPT, OpenAI is backing a controversial legislative effort to shield itself from the most serious consequences of artificial intelligence use. The bill, introduced in Illinois, would exempt AI companies from civil liability even in the event of tragedies such as mass deaths. What lies behind this strategic move by OpenAI at a moment of tightening international regulation?
Key Points
- OpenAI supports an Illinois bill that would exempt AI companies from civil liability in the event of disasters.
- Bill SB 3444 covers extreme scenarios, such as the use of AI to create chemical or nuclear weapons.
- The AI Act and related European regulations impose heightened responsibilities on AI providers, contrasting sharply with the American approach.
OpenAI and the Illinois Bill
OpenAI, a leader in artificial intelligence, is at the center of a controversy over an Illinois bill, SB 3444. The bill aims to protect AI companies from civil liability for disasters caused by their technologies. The initiative comes as the company faces several lawsuits, notably over incidents involving the use of ChatGPT.
Jamie Radice, a spokesperson for OpenAI, said the legislation is intended to reduce the risks associated with the most advanced AI systems. According to her, it would also prevent legislative fragmentation across U.S. states.
Reactions and Opposition
The bill is far from winning unanimous support. Critics point out that a majority of Americans oppose any reduction in AI companies' liability. Scott Wisor, director of Secure AI, recently reiterated this opposition in a public statement.
Meanwhile, the U.S. federal government continues to let states legislate individually on AI issues. The Illinois bill, however, could serve as a model for future federal legislation, which worries some industry experts.
Regulatory Obligations in Europe
In Europe, the situation is markedly different. The obligations imposed by the AI Act, which take effect in August 2026, require increased transparency and human oversight of AI systems. These measures aim to strengthen the accountability of providers of advanced technologies.
The revised Product Liability Directive, adopted in 2024, extends strict liability to software, including AI-based software. As a result, when an AI system causes harm, the manufacturer can be held liable even without proven fault. This approach contrasts sharply with the one envisaged by the Illinois bill.
Future Perspectives for AI Regulation
In light of these developments, the question of AI regulation remains central. As AI technologies continue to develop at a rapid pace in 2026, the need for harmonized and effective regulation becomes increasingly evident. Governments and international bodies are working to establish frameworks that safeguard both innovation and public safety.
The debate on the liability of AI companies in disaster cases highlights the complex challenges faced by regulators. It also underscores the importance of ongoing dialogue between the industry, legislators, and the public to find a balance between technological innovation and citizen protection.