For businesses in the United States, AI governance will soon become a compliance imperative, not just a best practice.
When the European Union adopted its Artificial Intelligence Act in 2024, it set a global benchmark for risk-based AI regulation. The law's phased implementation, beginning with bans on unacceptable-risk practices and extending to comprehensive governance obligations by 2026, has influenced legislative agendas around the world, and it signals to U.S. companies that similar obligations are on the way.
Landmark Colorado law will have ripple effects
In May 2024, Colorado became the first state to enact a comprehensive AI law, the Colorado Artificial Intelligence Act. Modeled in part on the EU framework, the law imposes obligations on developers and deployers of high-risk AI systems, such as those used to make employment, housing, health care and lending decisions. These obligations include impact assessments, risk management programs, transparency and human oversight.
Originally scheduled to take effect in February 2026, enforcement of Colorado's law has been delayed until June 2026 amid industry pressure and legislative changes. Nonetheless, Colorado's landmark law has inspired similar measures in other states, such as California and Illinois, and could still influence New Hampshire's current legislative session.
Momentum builds in state legislatures
While Colorado leads with a comprehensive AI law, other state legislatures are advancing both broad and targeted measures. California has passed several such laws, including the Transparency in Frontier Artificial Intelligence Act, which mandates disclosures and security protocols for developers of advanced AI models. California has also enacted laws addressing chatbot safety and consumer protection that take effect in 2026.
Illinois and New York City have focused on AI in employment. Their laws require notice to, or consent from, applicants before AI tools are used in hiring, and they restrict or require bias audits of automated employment decisions. General privacy laws, including New Hampshire's, also impose restrictions on automated decision-making that extend to employment and other contexts.
New Hampshire has yet to pass a general AI law, opting instead for narrower measures addressing specific risks. For example, current New Hampshire law prohibits state agencies from using AI for real-time biometric surveillance and discriminatory profiling without a warrant, and it restricts certain uses of generative AI, such as deepfakes and AI communications with minors.
Federal legislation and executive orders
At the federal level, comprehensive AI legislation remains elusive. Instead, the policy landscape is shaped by executive action. In early 2025, President Trump signed Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” which revoked the prior administration's safety-focused mandates and prioritized innovation. Most recently, a draft executive order leaked in November 2025 signaled an intent to preempt state AI laws, citing concerns that a “patchwork” of regulations could stifle competitiveness. The draft proposed creating a federal AI task force and conditioning federal funding on states' compliance with national rules. Although the order has not been finalized or released, it highlights the tension between federal uniformity and states' rights, a debate that will shape AI governance in 2026 and beyond.
Businesses should start preparing for compliance now
Whether state or federal regulations emerge during this legislative session or in the near future, businesses should start preparing to comply now. Here are three steps to get started.
Conduct an AI assessment. Inventory all AI tools the company already uses and identify AI technologies that could benefit the organization.
Establish an AI governance framework. Create a cross-functional AI governance team that includes leaders from across the company, as well as technology and legal advisors with AI expertise. Develop written policies that align with existing regulations and emerging standards, such as the EU AI Act and the AI Risk Management Framework promulgated by the National Institute of Standards and Technology.
Integrate AI into operations. Operationalize AI through testing, prototyping and deployment in production environments. Conduct appropriate due diligence on vendors and ensure contracts address their use of AI.
AI is not a distant concept; it is a current business reality. With hundreds of AI-related bills introduced across the United States, and global frameworks such as the EU AI Act and the NIST AI Risk Management Framework setting the bar for compliance, businesses must act now to maintain their competitive advantage. Don't wait for a law to force you to comply. Lead the way.
Cam Shilling founded and chairs McLane Middleton’s Cybersecurity and Privacy Group. The group of six lawyers and a paralegal helps businesses and private clients improve their AI security, privacy and compliance, and address any incidents or breaches that occur.
