States Take the Lead: “Pose, Don’t Preach” in AI Oversight

"Imposing a broad moratorium on all state action … deprives consumers of reasonable protections," argued a coalition of 40 attorneys general in urging Congress to maintain state oversight while federal rules remain absent.

In the absence of comprehensive federal AI rules, several U.S. state attorneys general are stepping into the breach to protect consumers. These officials are using existing laws on privacy, consumer protection, and discrimination to address the growing risks of artificial intelligence, focusing on practical enforcement rather than waiting for national legislation.

A proposed federal moratorium that would block states from enacting AI regulations for a decade has drawn sharp criticism. Forty state attorneys general, Republicans and Democrats alike, warned that such a measure would "deprive consumers of reasonable protections." They argue that, until Congress acts, states must continue using their existing legal authority to regulate AI.

While only a few states, such as California, Colorado, and Utah, have passed AI-specific laws, many others are proactively applying existing statutes. Massachusetts Attorney General Andrea Campbell cautioned that AI systems must comply with the state's privacy and consumer protection laws, emphasizing that misleading claims about AI or misuse of personal data could violate state law. Both Oregon and Massachusetts have issued explicit guidance warning against AI-related fraud, including deepfake scams and deceptive marketing practices. New Jersey created a "Civil Rights and Technology Initiative" to monitor and challenge algorithmic bias that threatens protected rights.

Supporters of state-level action argue that rapid technological change demands quick, localized responses. They warn that a federal moratorium enacted without robust federal regulations would leave consumers exposed. Attorneys general can initiate investigations and bring legal cases under existing state statutes, providing immediate enforcement. State actions influence corporate behavior and can pave the way for later federal legislation, shaping how AI is used responsibly. Critics, including some federal legislators and tech companies, favor a single national framework to avoid a patchwork of inconsistent state rules. With the AI and energy summit in Pennsylvania approaching, more opinions from CEOs in the AI space are sure to follow.

In the evolving landscape of AI policy, state attorneys general are adopting a “pose, don’t preach” philosophy—focusing on enforcement rather than philosophical debates. Until comprehensive federal AI regulation catches up, states are using traditional legal tools to fill the gap and protect citizens from harm. For companies and consumers alike, this approach highlights the importance of vigilance, compliance, and transparency in the deployment of AI across America.

Jessie Marie

With a distinguished background in military leadership, Jessie honed her discipline, precision, and strategic decision-making skills while serving in the United States Marine Corps, earning an honorable discharge in 2012. Transitioning her expertise into the world of technology, she pursued an Associate of Science degree from Moreno Valley College, where she excelled academically, receiving recognition in Computer Science and participating in the prestigious DNA Barcoding Challenge in collaboration with the University of California, Riverside. Now, as an AGL author, Jessie brings her analytical mindset and technical acumen to the forefront of discussions on Artificial Intelligence and the Internet of Things (IoT), exploring their transformative impact on connectivity, automation, and the future of digital ecosystems.
