Taming The A.I. Wild West: How U.S. Legislators Are Grappling With Artificial Intelligence

Recently, multi-modal models such as Google's Gemini have gained the ability to produce realistic video complete with lifelike voices and music. Others, such as Stable Diffusion and Midjourney, have for a few years now been able to produce still images that resemble landscape photography, lifelike portraits, or even art hanging in museums, and they are only improving as time goes on. Large language models, most famously ChatGPT, can generate enormous volumes of text in mere minutes, giving rise to news outlets and social media accounts that consist entirely of AI-generated content. Mind you, that is only what these companies are willing to admit to.

Most of the world appears confident that these models will continue to improve and that, in turn, their use will become ever more ubiquitous across every facet of daily life. Investors are incredibly bullish on AI: according to the Stanford Institute for Human-Centered Artificial Intelligence's 2025 AI Index Report, U.S. private AI investment reached $109.1 billion in 2024 alone, compared with $9.3 billion in China, while generative AI attracted another $33.9 billion in private investment worldwide. Those investment figures are up 19% from the previous year.

The rapid advancement and adoption of multi-modal artificial intelligence models have outpaced the ability of U.S. regulatory bodies to respond. In addition, the potential economic windfall associated with AI has set off a technological gold rush and intense competition between tech companies. This has made some legislators wary of regulating AI, fearing it will pave the way for other geopolitical rivals, namely China, to gain dominance in the AI sector. Others have voiced concern about the enormous potential downsides AI introduces, such as deepfakes, large-scale misinformation campaigns, worsening mental health crises, scams and fraud, adversarial use by foreign entities, and disruption of the job market and broader economy, and they emphasize the need for a regulatory framework to address these risks.

In the U.S., the first inklings of a governmental response to AI came in the form of 2023's Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which attempted to outline responsible AI development that was “safe and secure.” That order was repealed by the “Removing Barriers to American Leadership in Artificial Intelligence” executive order in January 2025.

More recently, the “TAKE IT DOWN” Act was signed into law, criminalizing the non-consensual publication of intimate images, including AI-generated deepfakes, and marking the United States' first legislative response to artificial intelligence.

Plans for the federal government’s “AI-first government strategy” were accidentally left public on the code-sharing platform GitHub. These plans outlined AI tools to be used across the various departments and agencies of the federal government, along with a centralized system that could simultaneously monitor and improve upon those tools.

Many bills seeking to regulate AI have been introduced in Congress, including the “REAL Political Advertisements Act” (prohibiting deceptive AI use in political ads), the “Stop Spying Bosses Act” (regulating employers’ ability to monitor staff with AI), the “NO FAKES Act” (prohibiting the unauthorized use of a person’s AI-generated likeness), and the “AI Research Innovation and Accountability Act” (requiring safe and transparent AI research protocols). All of these bills are in the early stages of the legislative process, and there is no indication whether they will be passed into law.

The federal government’s slow approach to AI regulation has been a striking example of American federalism at work, as states have been much quicker to adopt laws governing AI usage. According to the NCSL, 28 states and the Virgin Islands have adopted over 75 AI-related laws in 2025 alone. These cover a wide variety of topics, from ownership of AI-generated content, to prohibitions on using AI to stalk an individual, to Montana’s “Right to Compute” law, which limits the government’s power to restrict the use of computational resources for lawful purposes. The Colorado AI Act, a broad, already-enacted law that applies to all “consequential” AI usage, requires developers to take reasonable care to ensure the safety and equity of their AI systems. Other states, including Connecticut, Massachusetts, New Mexico, New York, and Virginia, have introduced bills that closely follow the Colorado AI Act.

All 50 states have introduced multiple pieces of legislation seeking to regulate AI, most of which aim to ensure safe and responsible usage or to criminalize certain uses of AI. Oklahoma currently has a dozen or so AI-related bills pending, including one (H.B. 1916) that would create an AI Responsibility Council to monitor AI usage within the state and draft AI regulation.

However, Congress is currently debating H.R. 1, the One Big Beautiful Bill Act, which, at the time of writing, allocates billions of taxpayer dollars to build AI-enabled systems across the federal government and would prohibit any state from enforcing any regulation of AI models for 10 years. If that moratorium becomes law, the efforts of all 50 states to regulate AI would be delayed by a decade. Given how quickly the technology is advancing, it is clear that some federal and state legislators alike see an immediate need to create a regulatory framework for artificial intelligence to protect their states and their citizens.

The exact language of the moratorium does leave a narrow path for states seeking to regulate AI: criminal penalties could still be assessed, and generally applicable laws could still be enforced against uses of AI systems. Beyond that, however, the only state AI legislation that would be permitted would be laws that remove legal impediments to AI or streamline processes that facilitate the adoption of AI models. In addition, the bill prohibits enforcement of any covered state-level law, meaning that previously enacted laws would be rendered ineffective.
