General Discussion
OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters - Wired
OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.
The effort seems to mark a shift in OpenAI's legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology's harms. Several AI policy experts tell WIRED that SB 3444, which could set a new standard for the industry, is a more extreme measure than bills OpenAI has supported in the past.
The bill would shield frontier AI developers from liability for critical harms caused by their frontier models as long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, which likely could apply to America's largest AI labs, like OpenAI, Google, xAI, Anthropic, and Meta.
"We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses, small and big, of Illinois," said OpenAI spokesperson Jamie Radice in an emailed statement. "They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards."
Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model may not be held liable, so long as it wasn't intentional and they published their reports.
https://www.wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits/
2 replies
OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters - Wired (Original Post)
justaprogressive
Friday
OP
Kid Berwyn (24,571 posts)
1. AI thinks of everything!

It's for border security.
https://www.eldiario24.com/en/china-deploys-advanced-border-robots/26270/
cbabe (6,677 posts)
2. Previous post with comments
https://www.democraticunderground.com/?com=view_post&forum=1014&pid=3647283
Google and chatbot startup Character.AI are settling lawsuits over teen suicides
Yesterday
https://www.businessinsider.com/google-character-ai-settling-lawsuits-teen-suicides-new-york-texas-2026-1?op=1
Google and chatbot startup Character.AI are settling lawsuits over teen suicides
Shubhangi Goel
Senior Reporter, Tech
Jan 7, 2026, 9:48 PM PT
Google and Character.AI have agreed to settle lawsuits over chatbot-linked teen suicides. The cases allege that AI chatbots contributed to mental health crises among teenagers.
OpenAI and Meta have been involved in similar chatbot safety lawsuits and probes.
Google and chatbot-building startup Character.AI have agreed to settle multiple lawsuits from families whose teenagers died by suicide or hurt themselves after interacting with Character.AI's chatbots.
more