New York AI Safety Law (RAISE Act)


Artificial Intelligence (AI) is advancing at an unprecedented pace, transforming industries from healthcare and finance to education and entertainment. However, this rapid growth has brought increasing concerns about AI safety, accountability, and transparency. To address these challenges, New York has taken a major step forward by enacting a comprehensive AI safety law that could influence AI regulation across the United States.

On December 19, 2025, New York Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act into law, marking one of the most significant state-level efforts to regulate advanced AI systems.

What Is the RAISE Act?

The RAISE Act focuses on regulating “frontier AI models” – highly advanced AI systems developed by large technology companies. These models often have the potential to impact millions of users and, if misused or poorly governed, could cause widespread harm.

Under the new law, companies developing or deploying such AI systems must:

  • Publish clear AI safety and risk-mitigation plans
  • Conduct ongoing risk assessments
  • Report serious AI-related safety incidents to state authorities within 72 hours

The goal is not to restrict innovation, but to ensure that powerful AI systems are developed responsibly, with public safety in mind.

Why New York Introduced AI Regulation Now

The absence of comprehensive federal AI regulation in the United States has left states to take the lead. New York’s move follows similar efforts in California and reflects a growing recognition that voluntary self-regulation by AI companies is no longer sufficient.

AI systems are increasingly used in decision-making processes that affect employment, credit access, healthcare outcomes, and public services. Without oversight, these systems can reinforce bias, compromise privacy, or be exploited for harmful purposes. The RAISE Act aims to close this governance gap by introducing legal accountability for the most powerful AI developers.

Balancing AI Innovation and Public Safety

One of the defining features of the RAISE Act is its targeted approach. The law primarily applies to large-scale AI developers, rather than startups or smaller businesses. This helps reduce the regulatory burden on early-stage innovation while ensuring that organizations with the greatest influence are held to higher standards.

Supporters argue that the law promotes responsible AI development without stifling technological progress. Critics, however, believe the final version of the bill could have gone further in defining strict enforcement mechanisms. Despite these debates, the Act is widely seen as a practical first step toward meaningful AI governance.

How the RAISE Act Could Shape National AI Policy

New York’s AI safety law may act as a blueprint for other states – and potentially for future federal legislation. Historically, state-level regulations in areas such as data privacy and financial services have influenced nationwide standards. The RAISE Act could follow a similar path, especially as public concern around AI misuse continues to grow.

For AI companies operating in or doing business with New York, compliance with this law may soon become a baseline expectation rather than an exception.

Conclusion: A Turning Point for AI Governance

The RAISE Act signals a shift from reactive to proactive AI regulation. By prioritizing transparency, risk management, and accountability, New York is positioning itself as a leader in ethical AI governance. As AI becomes more deeply embedded in everyday life, laws like this will play a crucial role in ensuring that innovation benefits society without compromising safety, trust, or human values.

Sources

https://www.wsj.com/articles/new-york-signs-ai-safety-bill-into-law-ignoring-trump-executive-order-f1ece21d

https://www.politico.com/news/2025/12/19/kathy-hochul-signs-new-yorks-ai-safety-law-aimed-at-tech-industry-heavyweights-00700473