neural-bridge.dev / Compliance & Risk · Working Paper · v0.1 · 7 min read

AI Security Regulation in 2026: What Practitioners Need to Know

By Andy Herman

If you build, deploy, or use AI in production right now, you’re being regulated by people you’ve never met. Some of those regulations took effect this year. More are coming. The landscape is fragmented across jurisdictions, fast-moving across calendar quarters, and sometimes contradictory across regulators. It’s also the most consequential thing happening in cybersecurity policy in a generation.

This piece is the lay of the land as of mid-2026. It is not legal advice. It is a working practitioner’s map of what exists, who it applies to, and what it actually changes about how you build.

The four jurisdictions that matter

For most practitioners working with AI in 2026, four jurisdictions matter:

  1. The European Union — by far the most prescriptive, anchored by the AI Act
  2. The US federal government — voluntary frameworks (NIST AI RMF) and sector-specific rules
  3. US states — fragmented, growing fast, Colorado is the test case
  4. The UK and Commonwealth — a third path between the EU and US approaches

I’ll walk through each, then knit them together.

The EU AI Act

The EU AI Act is the world’s first comprehensive AI regulation. It entered into force in August 2024 and becomes fully applicable in August 2026. Some categories of high-risk AI got a transition extension to December 2027 in the recent Digital Omnibus amendment.

The Act’s core mechanism is risk tiering:

  • Prohibited. AI uses banned outright. Social scoring by governments, certain biometric identification, manipulative AI exploiting vulnerable groups. These provisions have been in effect since February 2025.
  • High-risk. AI in safety-critical or rights-affecting domains. This is where most of the compliance work lives. Examples: hiring software, credit scoring, medical AI, education assessment, infrastructure safety. Subject to extensive documentation, risk management, human oversight, and conformity-assessment requirements. CE marking required.
  • Limited risk. Things like chatbots and emotion-recognition systems. Mostly transparency obligations.
  • Minimal risk. Most AI applications. Encouraged but not required to follow voluntary codes of conduct.

For practitioners, the questions are concrete (a rough triage sketch follows the list):

  1. Are you a “provider” or a “deployer”? Different obligations.
  2. Is your AI on the high-risk list (Annex III)? If yes, the heavy compliance kicks in.
  3. Are you using a General-Purpose AI model (a foundation model)? Specific obligations have applied to those since August 2025.
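
One way to make that triage concrete is to encode it as a checklist you run per AI system. The sketch below is illustrative only; the Annex III subset, the role split, and the obligation strings are my own shorthand, not language from the Act.

```python
# Illustrative triage sketch. The Annex III subset, role split, and obligation
# strings are my own shorthand (assumptions), not language from the Act.

ANNEX_III_EXAMPLES = {   # abbreviated, paraphrased subset of Annex III areas
    "hiring", "credit_scoring", "medical", "education_assessment",
    "critical_infrastructure",
}

def eu_ai_act_triage(role: str, use_case: str, gpai_provider: bool) -> list[str]:
    """Return rough obligation buckets for one AI system."""
    obligations = []
    if use_case in ANNEX_III_EXAMPLES:
        if role == "provider":
            obligations.append("high-risk provider: risk management, technical "
                               "documentation, conformity assessment, CE marking")
        else:
            obligations.append("high-risk deployer: use per instructions, human "
                               "oversight, monitoring and log retention")
    else:
        obligations.append("limited/minimal risk: transparency duties at most")
    if gpai_provider:
        obligations.append("GPAI provider duties (since Aug 2025): technical docs, "
                           "training-data summary, copyright policy")
    return obligations

print(eu_ai_act_triage("provider", "hiring", gpai_provider=False))
```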

The penalty structure is stiff. Up to €35M or 7% of global annual turnover, whichever is higher, for prohibited-use violations. Up to €15M or 3% for other violations. The pattern (and the percentages) will sound familiar to anyone who lived through GDPR. That is not an accident.
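
To see what the percentage tiers mean in practice: the ceiling scales with turnover, and the cap is whichever of the two numbers is larger. A toy calculation, with an illustrative €2B-turnover company:

```python
# Toy fine-ceiling calculation: the cap is the higher of a fixed amount and a
# percentage of global annual turnover. The turnover figure is illustrative.

def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    return max(fixed_cap_eur, pct * turnover_eur)

# Prohibited-use tier (EUR 35M or 7%) for a company with EUR 2B turnover:
print(fine_ceiling(2_000_000_000, 35_000_000, 0.07))  # 140000000.0
```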

If you operate in or sell into the EU and your AI does anything on the high-risk list, you should already be deep in this. If your AI is generic and doesn’t touch a high-risk domain, you have lighter obligations but real ones, particularly around transparency.

The US federal landscape: NIST AI RMF

The US doesn’t have a federal equivalent of the EU AI Act. What it does have is the NIST AI Risk Management Framework (AI RMF), originally released January 2023 and substantially expanded with the Generative AI Profile (NIST AI 600-1) in July 2024.

Key points:

  • It’s voluntary. No fines for not adopting it. But…
  • It’s the de facto US standard. Federal agencies, federal contractors, and most large enterprises use it as their reference framework.
  • It’s structured around four functions: Govern, Map, Measure, Manage. You build a program that does all four; the details are organization-specific (a minimal skeleton is sketched after this list).
  • The Generative AI Profile adds 200+ specific suggested actions for managing GenAI-specific risks: confabulation, harmful content, privacy leaks, environmental impact, misuse.
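
One way to bootstrap is to treat the four functions as the skeleton of the program and hang concrete activities off each one. A minimal sketch; the function names come from the framework, but the activities under each are examples I chose, not RMF text.

```python
# Minimal skeleton of an AI RMF-shaped program. The four function names come from
# the framework; the activities listed under each are my own examples (assumptions).

AI_RMF_PROGRAM = {
    "Govern":  ["AI policy approved", "roles and accountability assigned",
                "inventory of AI systems maintained"],
    "Map":     ["intended use and context documented per system",
                "GenAI risks screened (confabulation, leakage, misuse)"],
    "Measure": ["evals and red-team findings tracked", "drift and bias metrics defined"],
    "Manage":  ["risk treatment decisions recorded", "incident response path defined"],
}

def open_items(done: dict[str, set[str]]) -> dict[str, list[str]]:
    """Activities not yet marked done, grouped by function."""
    return {fn: [a for a in acts if a not in done.get(fn, set())]
            for fn, acts in AI_RMF_PROGRAM.items()}

print(open_items({"Govern": {"AI policy approved"}})["Govern"])
```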

Why does voluntary matter? Because state-level regulation is increasingly using NIST AI RMF as the baseline for what “reasonable” risk management means. Colorado’s AI Act explicitly accepts NIST conformance as an affirmative defense. Other state bills follow the same pattern.

If you operate in the US and want a single framework to orient around, this is it.

US state-level regulation

The US is regulating AI state by state. The pattern is fragmented and accelerating.

The most consequential is the Colorado AI Act (SB 24-205), which takes effect June 30, 2026 (delayed from February 2026). It applies to high-risk AI in employment, housing, education, healthcare, insurance, legal, and financial services. Key requirements:

  • Risk Management Program aligned with NIST AI RMF, ISO/IEC 42001, or another recognized framework
  • Impact assessments within 90 days of deployment, repeated annually and after major changes (a cadence sketch follows this list)
  • Notice obligations to affected consumers
  • Right to appeal consequential decisions
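
The cadence requirement is the one that gets lost once the number of deployed systems grows. A minimal tracker, assuming the “90 days / annual / after major changes” reading described above; the function and its parameters are my own illustration, not statutory language.

```python
# Minimal impact-assessment cadence tracker, assuming the reading above:
# within 90 days of deployment, then annually, and within 90 days of a major change.
from datetime import date, timedelta

def next_assessment_due(deployed: date,
                        last_assessed: date | None = None,
                        major_change: date | None = None) -> date:
    """Earliest date by which the next impact assessment should be complete."""
    due = []
    if last_assessed is None:
        due.append(deployed + timedelta(days=90))           # initial assessment
    else:
        due.append(last_assessed + timedelta(days=365))     # annual refresh
    if major_change and (last_assessed is None or major_change > last_assessed):
        due.append(major_change + timedelta(days=90))       # post-change reassessment
    return min(due)

print(next_assessment_due(date(2026, 7, 1)))  # 2026-09-29
```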

Colorado is being watched as the test case. Other states with active bills or laws as of 2026 include New York (where New York City’s Local Law 144 already requires bias audits for AI hiring tools), Illinois, California (multiple proposals), and Texas. The contours differ; the direction of travel is one-way.

For practitioners: if you sell software that makes consequential decisions about people in any of these states, you have a compliance question even if you’re not based there.

ISO/IEC 42001

ISO/IEC 42001:2023 is the first international standard for AI Management Systems. It’s the AI equivalent of ISO 27001 (information security) and ISO 9001 (quality management).

Why it matters: it is certifiable. Organizations can hire an accredited auditor, get certified to ISO 42001, and use that certification as evidence of responsible AI governance. Several regulations explicitly accept ISO 42001 conformance as compliance evidence, including the Colorado AI Act’s affirmative-defense clause.

For most companies: if you’re going to formalize an AI program, ISO 42001 is the closest thing to a recognized destination. Whether you certify or just align is a budget conversation.

Sector-specific overlays

Most practitioners face regulations beyond the AI-specific ones above. A few overlays that matter:

  • NIS2 (Network and Information Security Directive 2). EU cybersecurity directive that increasingly catches AI systems handling critical-infrastructure or essential-services data. AI security falls within its operational-security obligations. Transposition deadline was October 2024; member-state laws are landing in 2025 and 2026.
  • DORA (Digital Operational Resilience Act). EU financial-services regulation requiring ICT risk management, including AI used in financial decisions. Took effect January 2025.
  • HIPAA. US healthcare privacy. AI processing PHI is in scope. The HIPAA Security Rule maps reasonably onto AI risk management with some new wrinkles around training data.
  • FedRAMP. US government cloud authorization. Generative AI services for government use have additional requirements through the FedRAMP Emerging Technology Prioritization Framework.
  • GDPR. Predates AI but applies to AI processing personal data. Article 22 (automated decision-making) is suddenly very relevant.

For most teams, your AI compliance is the intersection of AI-specific regulations and your sector-specific regulations. The practical work is the joint mapping.

How they intersect

Real compliance for a US-based SaaS product with EU customers and an AI feature, in 2026, looks roughly like this (a toy mapping of the outcome follows the list):

  1. Adopt NIST AI RMF as your internal framework.
  2. Map your AI features against the EU AI Act risk tiers. If anything is high-risk, plan for the August 2026 (or December 2027) deadline.
  3. Maintain impact assessments ready to show Colorado regulators if asked.
  4. Watch other states. Bills move fast.
  5. Layer sector overlays (HIPAA, DORA, etc.) where they apply.
  6. Consider ISO 42001 certification if you sell to enterprises that ask for it.
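
In practice those six steps tend to collapse into one living artifact: a map from each AI feature to the regimes it touches and the evidence each one expects. A toy version; the framework names are real, but the feature names and evidence strings are illustrative.

```python
# Toy compliance map: per AI feature, which regimes apply and what evidence each
# expects. Framework names are real; features and evidence strings are illustrative.

COMPLIANCE_MAP = {
    "resume-screening model": {
        "EU AI Act": "Annex III high-risk: technical documentation, conformity assessment",
        "Colorado AI Act": "risk management program plus annual impact assessment",
        "NIST AI RMF": "covered by the internal Govern/Map/Measure/Manage program",
        "NYC Local Law 144": "independent bias audit before use in hiring",
    },
    "support chatbot": {
        "EU AI Act": "limited risk: disclose that users are interacting with AI",
        "NIST AI RMF": "GenAI profile actions for harmful content and data leakage",
    },
}

def regimes_for(feature: str) -> list[str]:
    return sorted(COMPLIANCE_MAP.get(feature, {}))

print(regimes_for("resume-screening model"))
```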

This is a lot. The good news is it’s tractable: most of these frameworks are designed to be compatible, not contradictory. A well-built NIST AI RMF program covers maybe 70% of what ISO 42001 wants and 60% of what the EU AI Act wants for high-risk systems.

The bad news: that last 30-40% is non-trivial, and it’s the part most companies don’t realize they’re missing until an audit.

How to keep up

The landscape moves quarterly, so build tracking it into your routine.

For a working professional, 30 minutes a week is enough to stay roughly current. The big-block reads (the AI Act itself, NIST AI RMF documents, ISO 42001) are 4-8 hours each.

What this means for personal projects like Neural Bridge

Personal projects are mostly out of the regulatory net. Building a personal AI substrate for your own use doesn’t trigger the EU AI Act or Colorado AI Act, because those target consequential decisions affecting other people. But two things to track:

  1. If Neural Bridge ever gets a public chat interface where users other than me input data, the EU AI Act’s limited-risk transparency obligations would apply.
  2. If I ever monetize a service built on Neural Bridge, the rules tighten fast.

For now, Neural Bridge is in scope for OWASP’s technical guidance, out of scope for most AI regulation. That’s a comfortable place to build from, but I’m noting the threshold.

See also

  • Security Architecture — Neural Bridge’s threat model and how it sits in this landscape
  • 01 - OWASP for AI — companion paper on technical-control frameworks
  • 01 - Memory Poisoning in Personal Agentic AI Substrates — concrete LLM01 / LLM04 / LLM08 deep dive