The EU AI Act Is Now Enforced: What Every SaaS Founder Needs to Do in 2026
The EU AI Act's key provisions are now in force. If your SaaS product touches European users, or serves enterprise customers who do, you need to understand which obligations apply to you and act now rather than scrambling later.
The compliance window is closed
The EU AI Act entered into force in August 2024. The GPAI (general-purpose AI) model provisions took effect in August 2025. Most high-risk system obligations became enforceable in August 2026, with some categories of high-risk AI embedded in already-regulated products extending to 2027. The core framework is now live.
If you're a SaaS founder who thought "I'll deal with it later," later is now. This post covers what the Act actually requires, which obligations apply based on your product's risk tier, and what practical steps to take over the next 90 days.
I'm writing this from Canada, where we're watching the EU framework closely. Canada's own AI regulatory direction is likely to converge with the EU approach: Bill C-27's AIDA provisions died on the Order Paper, but successor legislation is expected in some form. Canadian founders building for global markets need to treat EU compliance not as someone else's domestic obligation but as the emerging de facto global standard for enterprise AI products.
What the EU AI Act actually regulates
The Act takes a risk-based approach: different obligations apply based on how risky the AI application is assessed to be.
Unacceptable risk — prohibited
Certain AI applications are simply banned in the EU:
- Social scoring systems (by public or private actors)
- Biometric categorization systems that infer sensitive characteristics (race, political opinions, religious beliefs, sexual orientation)
- Emotion recognition in workplaces and educational institutions
- Untargeted scraping of facial images from the internet to create recognition databases
- AI systems that subliminally manipulate behavior in ways that harm users
- Predictive policing based solely on profiling
If your product does any of these: stop. Immediately. These are not grey areas.
High risk — heavy obligations
High-risk AI systems must comply with a comprehensive set of requirements before being placed on the EU market. The categories include:
- **Biometric identification and categorization systems**
- **Critical infrastructure management** (energy, water, transport)
- **Education and vocational training** (assessment, admissions, scoring)
- **Employment and worker management** (recruitment, performance monitoring, task allocation)
- **Access to essential services** (credit scoring, insurance underwriting, social benefits)
- **Law enforcement** (risk assessment, evidence evaluation, profiling)
- **Migration and border control** (risk assessment, document verification)
- **Administration of justice and democratic processes**
If you're building in insurance, HR tech, credit, real estate financing, or public sector tools — pay attention. These categories directly cover common SaaS market segments.
Requirements for high-risk systems include:
- Risk management system documentation
- Data governance and data management practices documentation
- Technical documentation sufficient for conformity assessment
- Automatic logging of events (full audit trail)
- Transparency to deployers and users
- Human oversight capabilities built into the system
- Accuracy, robustness, and cybersecurity standards
Limited risk — transparency obligations
AI systems with limited risk must meet transparency requirements:
- Chatbots and conversational AI must disclose that they are AI
- Deepfake content must be labeled
- AI-generated content must be disclosed when relevant
This catches most SaaS products with AI assistants or generative features. If your product has a chatbot or generates content that users might believe is human-created, you need a clear disclosure.
Minimal risk — no specific obligations
Most AI applications fall here: spam filters, recommendation systems, AI-enabled features in games, and general-purpose tools with no significant risk of harm. Voluntary codes of conduct are encouraged but not mandatory.
General-purpose AI (GPAI) model obligations
If you're building or deploying a GPAI model — a foundation model used for a wide range of downstream tasks — additional obligations have applied since August 2025:
- Technical documentation
- Copyright compliance policy
- A published summary of the content used for training
- For systemic risk models (above 10^25 FLOPs training compute): adversarial testing, incident reporting, cybersecurity measures
This primarily affects labs like OpenAI, Anthropic, and Google — but if you're fine-tuning and redistributing foundation models at any significant scale, review whether GPAI obligations apply.
Practical compliance steps for SaaS founders
Step 1: Classify your product
Run every AI-enabled feature of your product against the risk categories above. Be honest — regulators will not respect "we didn't think that counts." Document your classification rationale.
Questions to ask (a classification sketch follows this list):
- Does this feature make or strongly influence consequential decisions about people? (employment, credit, housing, education, health)
- Does it use biometric data?
- Is it deployed in critical infrastructure?
- Does it generate content a user might believe is human-created?
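To make Step 1 concrete, here is a minimal sketch of how you might record a classification decision per feature. The type names and the `suggestTier` heuristic are illustrative assumptions, not anything prescribed by the Act; the point is to force an explicit, documented answer for every AI-enabled feature, with the final call made by legal review.

```typescript
// Hypothetical classification record -- names and shape are illustrative,
// not an official schema from the EU AI Act.
type RiskTier = "prohibited" | "high" | "limited" | "minimal";

interface FeatureClassification {
  feature: string;                           // e.g. "resume-screening-assistant"
  influencesConsequentialDecisions: boolean; // employment, credit, housing, education, health
  usesBiometricData: boolean;
  criticalInfrastructure: boolean;
  generatesHumanlikeContent: boolean;        // chatbots, generated text/images
  tier: RiskTier;
  rationale: string;                         // document WHY -- regulators will ask
  classifiedBy: string;
  classifiedAt: string;                      // ISO 8601 date
}

// A naive first-pass heuristic; it cannot substitute for legal review.
function suggestTier(f: Omit<FeatureClassification, "tier">): RiskTier {
  if (f.influencesConsequentialDecisions || f.usesBiometricData || f.criticalInfrastructure) {
    return "high";
  }
  if (f.generatesHumanlikeContent) return "limited";
  return "minimal";
}
```

Keeping these records in version control gives you the documented classification rationale regulators expect, for free.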
Step 2: For high-risk systems — build the compliance infrastructure
If you're in a high-risk category, you need:
**Technical documentation package** (a manifest sketch follows this list):
- System description and intended purpose
- System components and their interactions
- Data requirements and data management practices
- Performance metrics and accuracy benchmarks
- Risk management assessment and mitigation measures
- Human oversight mechanisms
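One way to keep this package from going stale is to version a machine-readable index of it alongside your code. The shape below is a hypothetical manifest of my own devising; the Act prescribes documentation content, not a file format, and conformity assessment will still expect full prose documentation.

```typescript
// Hypothetical machine-readable index of the documentation package.
// Field names are illustrative assumptions, not a prescribed format.
interface TechnicalDocumentation {
  systemDescription: string;
  intendedPurpose: string;
  components: { name: string; role: string; interactsWith: string[] }[];
  dataGovernance: { sources: string[]; managementPractices: string };
  performance: { metric: string; value: number; benchmark: string }[];
  riskAssessment: { risk: string; mitigation: string }[];
  humanOversight: string[];   // e.g. "manual review queue", "kill switch"
  version: string;            // tie each product release to its docs
}
```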
**Logging and audit trail:** Build automatic, tamper-resistant logging of all AI-driven decisions. Include input data, output data, confidence scores, model version, timestamp, and any human review actions. Retain logs for at least six months, longer where other applicable law requires it.
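As a starting point, here is a minimal sketch of the kind of decision record this requirement implies. The field set mirrors the list above; the hash-chaining scheme is one common way to make logs tamper-evident, not a method mandated by the Act.

```typescript
import { createHash } from "node:crypto";

// One audit record per AI-driven decision. The shape is an assumption,
// not an official schema; fields mirror the requirements above.
interface DecisionLogEntry {
  timestamp: string;            // ISO 8601
  modelVersion: string;
  input: unknown;               // what the model saw
  output: unknown;              // what it produced
  confidence?: number;          // if your model exposes one
  humanReview?: { reviewer: string; action: "approved" | "overridden"; at: string };
  prevHash: string;             // chaining makes retroactive edits detectable
  hash?: string;
}

// Chain each entry to the previous one so any tampering breaks the chain.
function appendEntry(
  log: DecisionLogEntry[],
  entry: Omit<DecisionLogEntry, "prevHash" | "hash">
): DecisionLogEntry {
  const prevHash = log.length ? log[log.length - 1].hash ?? "" : "genesis";
  const chained: DecisionLogEntry = { ...entry, prevHash };
  chained.hash = createHash("sha256")
    .update(prevHash + JSON.stringify({ ...chained, hash: undefined }))
    .digest("hex");
  log.push(chained);
  return chained;
}
```

In production you would write to append-only storage rather than an in-memory array; the hash chain only makes edits detectable, it doesn't prevent them.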
**Transparency documentation for deployers:** If you sell your AI system to another company (deployer) rather than direct to end consumers, you must provide them with instructions for use that include the system's capabilities and limitations, accuracy metrics, and what kind of human oversight is required.
**Human override mechanisms:** High-risk systems must allow deployers to override, correct, or stop the system's operation. Build a manual review queue, correction interface, and kill switch.
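Here is a sketch of what that override surface might look like in code, assuming a simple review-queue model. The class name, threshold, and `killSwitch` method are hypothetical; the point is that "human oversight" has to be an actual code path, not just a policy document.

```typescript
// Hypothetical oversight layer: decisions above a risk threshold are held
// for human review instead of being applied automatically.
interface PendingDecision {
  id: string;
  proposedAction: string;
  riskScore: number;
}

class ReviewQueue {
  private pending = new Map<string, PendingDecision>();
  private halted = false;       // the "kill switch" state

  submit(d: PendingDecision): "applied" | "queued" | "halted" {
    if (this.halted) return "halted";   // system-wide hard stop
    if (d.riskScore >= 0.8) {           // threshold is illustrative
      this.pending.set(d.id, d);
      return "queued";                  // wait for a human decision
    }
    return "applied";
  }

  resolve(id: string, verdict: "approve" | "override", correction?: string): void {
    const d = this.pending.get(id);
    if (!d) throw new Error(`no pending decision ${id}`);
    // In real code: persist { id, verdict, correction } to the audit log
    // (see the logging sketch above) before clearing the queue entry.
    this.pending.delete(id);
  }

  killSwitch(): void { this.halted = true; }  // deployer-facing hard stop
}
```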
Step 3: For limited risk — add disclosures
Add clear disclosures to any conversational AI or generative content feature (a minimal labeling sketch follows this list):
- Label chatbots as AI (not just in T&Cs buried on page 47)
- Label AI-generated content contextually, at the point of generation
- Document synthetic media policies
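For the chatbot case, the disclosure can be as simple as a flag on every AI-authored message, rendered as a visible label. A minimal sketch, assuming a generic message model of my own invention; nothing here is a format prescribed by the Act:

```typescript
// Hypothetical message model: every AI-authored message carries an explicit
// flag so the UI can render a visible label, not just a clause in the T&Cs.
interface ChatMessage {
  author: "user" | "ai";
  text: string;
}

function renderMessage(m: ChatMessage): string {
  // Contextual disclosure at the point of generation.
  const label = m.author === "ai" ? "[AI assistant] " : "";
  return `${label}${m.text}`;
}
```

For generated media, embedding provenance metadata alongside the visible label (for example, C2PA content credentials) is worth evaluating as part of a synthetic media policy.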
Step 4: Appoint an EU representative
If you're not established in the EU but your product is used by EU users, you need either an EU establishment or a formally appointed authorized representative in the EU for regulatory purposes. This is a legal/entity question — talk to your lawyer.
Step 5: Register in the EU AI Act database
High-risk systems must be registered in the EU's public database before deployment. This is a public registry of high-risk AI systems — your system's documentation summary will be publicly accessible.
The Canadian angle
Canada's AI regulatory framework has been slower to develop. Bill C-27, which carried the AIDA provisions, died on the Order Paper, and a successor framework was still pending as of early 2026, but the political direction is toward a risk-based approach similar to the EU model.
For Canadian founders building for global enterprise markets:
1. **EU compliance creates a marketable advantage today.** Enterprise buyers in finance, insurance, and regulated industries are asking vendors specifically about EU AI Act compliance status. Being compliant is a sales argument, not just a legal obligation.
2. **The GDPR playbook applies.** Canadian SaaS companies that went through GDPR compliance for their EU customers in 2018-2020 know the pattern: build the technical infrastructure, write the policies, train the teams, and use it as a trust signal with customers everywhere.
3. **Don't wait for Canadian regulation.** When Canada's AI regulation does come, it will be easier to comply if you've already built EU-standard governance. Build to the highest applicable standard and you're covered globally.
The penalties are real now
Non-compliance fines under the EU AI Act:
- Prohibited applications: up to €35 million or 7% of global annual turnover, whichever is higher
- High-risk system violations: up to €15 million or 3% of global annual turnover, whichever is higher
- Supplying incorrect information to authorities: up to €7.5 million or 1.5% of global annual turnover, whichever is higher
For an early-stage SaaS company, these numbers are existential. More immediately threatening: enterprise customers in the EU are now contractually embedding AI Act compliance requirements into vendor agreements. Failing to comply can cost you individual enterprise deals worth multiples of any fine.
The bottom line
The EU AI Act is not a future problem. It's a current one. If your SaaS product has any AI features touching EU users, start with classification today. If you're in insurance, HR, credit, or real estate — the high-risk provisions likely apply and you need a compliance project running now.
The good news: the compliance infrastructure required — documentation, logging, transparency, human oversight — is also just good software engineering. Teams that build EU-compliant AI products are building more trustworthy products for every market.