Today is April 24, 2026. The EU AI Act’s enforcement deadline for high-risk AI systems is August 2, 2026. That’s exactly 100 days.
In most engineering organizations, the legal team is tracking this. The compliance team is tracking this. The engineering team has not been formally told what they need to ship by August.
By July, that gap will not be recoverable. Not because the work is impossible. Because the work requires sprint capacity that wasn’t planned for.
## Who this actually applies to
Before anything else, kill the myth that this is “a European company problem.”
The EU AI Act applies extraterritorially. If your AI system is placed on the EU market, or its output is used by people in the EU, you are in scope regardless of where your company is headquartered. US-based SaaS with EU customers? In scope. Israeli startup selling to a German bank? In scope. An AI feature in a product that's accessible from Europe at all? In scope. Your B2B API is called by someone else's product that serves EU users? Still in scope. Downstream distribution doesn't insulate upstream providers.
There’s no “I didn’t know” exemption. Fines go up to €35 million or 7% of global annual revenue, whichever is higher.
If you have any customer traffic from the EU, even indirect traffic through a partner, this is your problem.
## What the law actually requires (in engineering language)
Here's each critical article, translated into changes in your repo.
### Article 50: Transparency
Law: Users must be told when they’re interacting with an AI. AI-generated content needs machine-readable markers and metadata.
Engineering translation:
- Add a visible UI disclosure anywhere users interact with an AI-driven feature. Not buried in the terms of service. In the flow.
- Attach machine-readable metadata (HTTP headers, EXIF-equivalent content tags) to any AI-generated content your system produces or distributes.
- For chat interfaces, a persistent “AI assistant” label near the input field is the minimum.
What this means for your sprint: audit every product surface where a model output reaches a user. Every single one. Add disclosure if missing. Add metadata tagging if content leaves your system.
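The metadata-tagging half of this can start as something very small. Here's a minimal sketch of attaching machine-readable AI markers to outgoing HTTP responses. The header names (`X-AI-Generated`, `X-AI-Model`) are my own illustrative assumptions, not names mandated by the Act; the Act requires machine-readable marking but does not prescribe a specific mechanism, so align the concrete format with your legal team.

```python
# Sketch: machine-readable markers for AI-generated content.
# Header names are illustrative assumptions, not a mandated standard.
from __future__ import annotations


def ai_content_headers(model_id: str) -> dict[str, str]:
    """Headers marking a response body as AI-generated."""
    return {
        "X-AI-Generated": "true",   # hypothetical marker header
        "X-AI-Model": model_id,     # which model produced the content
    }


def attach_ai_markers(response_headers: dict[str, str], model_id: str) -> dict[str, str]:
    """Merge AI markers into an existing header map without mutating it."""
    merged = dict(response_headers)
    merged.update(ai_content_headers(model_id))
    return merged
```

The point of the non-destructive merge is that you can drop this into whatever framework middleware you already have without touching handler code.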
### Article 12: Record-keeping
Law: Every interaction with a high-risk AI system must be logged in a structured, auditable format that a regulator can query.
Engineering translation:
- Structured event logging on every model inference. Inputs, outputs, model version, timestamp, user or tenant identifier, confidence scores if available.
- The log must be queryable. A 12-month pile of unstructured stdout does not count.
- Retention needs to match the regulatory requirement. The Act sets a floor of at least six months for logs; sectoral law often requires substantially longer, and the related technical documentation must be kept for ten years.
What this means: if your current AI feature logs to stdout or to a generic app log, that’s not compliant. You need a dedicated audit trail with a proper schema, proper indexing, and retention guarantees.
What this costs: this is the one that eats the most sprint time. Log schema design, storage tier pricing, indexing for query performance, access controls on the audit store. If you’re starting in April for an August deadline, you’re already tight.
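As a starting point for the schema-design work, here's a minimal sketch of a structured audit event, one JSON line per inference, ready for an append-only store. The field names are my assumptions about what a regulator query would need (they track the bullets above: inputs, outputs, model version, timestamp, tenant, confidence); treat them as a draft to review with compliance, not a compliant schema.

```python
# Sketch of a structured audit event for Article 12-style record-keeping.
# Field names are assumptions -- validate them against what a regulator
# can actually ask your org to produce.
from __future__ import annotations

import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class InferenceAuditEvent:
    tenant_id: str
    model_version: str
    input_ref: str            # pointer to the stored input payload
    output_ref: str           # pointer to the stored output payload
    confidence: float | None = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json_line(self) -> str:
        """Serialize to one JSON line for an append-only audit store."""
        return json.dumps(asdict(self), sort_keys=True)
```

Storing references to payloads instead of the payloads themselves keeps the audit index small and lets you put the heavy data on a cheaper storage tier, which matters once retention is measured in years.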
### Article 14: Human oversight
Law: Sensitive AI decisions need a defined path for human review before taking effect.
Engineering translation:
- Identify the decision points where AI output influences high-risk outcomes (hiring, credit, healthcare, legal, education, critical infrastructure).
- At each of those points, there must be a deterministic path that routes the decision to a human before the outcome is final.
- The human must have the actual ability to override the AI’s suggestion, not just acknowledge it. “Click to confirm” with no real friction doesn’t count.
What this means: your AI features that auto-approve, auto-reject, or auto-route need a human gate if the outcome is classified high-risk. The gate has to be real, with a real UI, real authority, and real training for the humans using it.
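The gate itself is deterministic routing, and that part is simple to sketch. The code below is an illustrative skeleton, assuming a hypothetical `Decision` record and an in-memory review queue; the real version needs a persistent queue, a review UI, and authorization, but the invariant is the one shown here: a high-risk decision has no final outcome until a human sets one.

```python
# Sketch of a deterministic human gate for high-risk AI decisions.
# The Decision shape and the domain list are illustrative assumptions.
from __future__ import annotations

from dataclasses import dataclass

HIGH_RISK_DOMAINS = {"hiring", "credit", "healthcare", "legal", "education"}


@dataclass
class Decision:
    domain: str
    ai_suggestion: str              # e.g. "approve" / "reject"
    final_outcome: str | None = None
    pending_review: bool = False


def route_decision(decision: Decision, review_queue: list[Decision]) -> Decision:
    """Auto-finalize only non-high-risk decisions; everything else waits."""
    if decision.domain in HIGH_RISK_DOMAINS:
        decision.pending_review = True
        review_queue.append(decision)   # a human must set the outcome
    else:
        decision.final_outcome = decision.ai_suggestion
    return decision


def human_override(decision: Decision, reviewer_outcome: str) -> Decision:
    """The reviewer has real authority: the outcome can differ from the AI's.

    The original ai_suggestion is kept on the record for the audit trail.
    """
    decision.final_outcome = reviewer_outcome
    decision.pending_review = False
    return decision
```

Note that the AI's suggestion stays on the record even when overridden; that's the evidence trail showing the human gate is real and not a rubber stamp.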
### Article 5: Prohibited practices
Law: Some AI uses are outright banned. Social scoring of individuals by public authorities, exploitative manipulation of vulnerabilities, certain biometric categorization, real-time remote biometric ID in public spaces.
Engineering translation:
- Content policy filters on inputs before they reach your models.
- A classification layer that recognizes and blocks prohibited use patterns.
- Documentation showing how you prevent your system from being used for prohibited purposes.
What this means: for most engineering teams, this is the smallest implementation lift, unless you’re in a directly affected industry (HR tech, surveillance, credit scoring, biometrics). The documentation burden is still real. Auditors will ask for your prohibited-use risk assessment even when your answer is “we don’t do any of this.” “We don’t do that” is an answer that requires evidence, not a shrug.
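For illustration only, here's the shape of a screening layer: a request is checked against prohibited-use categories before it reaches a model, and every block is a loggable event. The keyword patterns are a deliberately naive stand-in; a production version would be a trained classifier plus policy review, but the control flow and the evidence trail are the same.

```python
# Sketch of a prohibited-use screening layer. Keyword matching is a
# placeholder for a real classifier; the categories track Article 5.
from __future__ import annotations

PROHIBITED_PATTERNS: dict[str, tuple[str, ...]] = {
    "social_scoring": ("social score", "citizen score"),
    "realtime_biometric_id": ("live face match", "real-time face identification"),
}


def screen_request(prompt: str) -> tuple[bool, str | None]:
    """Return (allowed, matched_category).

    Blocks should be logged to the audit trail -- they are the evidence
    behind "we don't do that" when an auditor asks.
    """
    lowered = prompt.lower()
    for category, phrases in PROHIBITED_PATTERNS.items():
        if any(phrase in lowered for phrase in phrases):
            return False, category
    return True, None
```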
## Why the legal team isn’t the bottleneck
The legal teams have been on this for a year. The compliance frameworks exist. The consultants are getting 20 to 30% of the budget pie for certification-related work. Vendors are already passing costs through with visible markups.
None of that ships code.
The bottleneck is engineering sprint capacity that was never allocated. Specifically:
- Audit log infrastructure (Article 12) is an engineering-heavy build
- Human oversight UIs (Article 14) need product and front-end work
- AI feature disclosure (Article 50) needs coordinated UX across every surface
- API inventory and risk classification (prerequisite for all of it) requires engineering time to map
In organizations doing this well, someone senior on the engineering side already took the brief from legal and translated it into specific issues in the backlog before the end of Q1 2026. If that hasn’t happened in your org yet, somebody needs to do it this week.
## The 100-day plan
Here’s the realistic minimum. Compress if you have less time. Don’t expand if you have more, because you don’t.
### Days 1 to 15 (now through May 9): Inventory and triage
- Complete API inventory of every AI-involved endpoint your systems call, produce, or expose.
- Classify each endpoint by risk level under the Act (minimal, limited, high-risk, prohibited).
- Name an engineering owner for each high-risk surface. Not the CTO. An actual engineer who’s going to do the work.
If you do nothing else in the next two weeks, do this. Everything else depends on it.
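The inventory is most useful as data, not a slide. A sketch, with assumed field names: one record per AI-involved endpoint, a risk tier following the Act's categories, and a named owner, plus the one query that matters during triage, which high-risk surfaces still have nobody assigned.

```python
# Sketch of the inventory-and-triage artifact as data. Tier names follow
# the Act's risk categories; the record fields are assumptions.
from __future__ import annotations

from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high-risk"
    PROHIBITED = "prohibited"


@dataclass(frozen=True)
class AiEndpoint:
    path: str
    description: str
    tier: RiskTier
    owner: str | None = None   # an actual engineer, required for HIGH


def unowned_high_risk(inventory: list[AiEndpoint]) -> list[AiEndpoint]:
    """The triage gap: high-risk surfaces nobody has been assigned to."""
    return [e for e in inventory if e.tier is RiskTier.HIGH and not e.owner]
```

If `unowned_high_risk` returns anything at the end of the first two weeks, that list is your escalation to the CTO, verbatim.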
### Days 16 to 50 (May 10 through June 13): Build the audit layer
- Design and ship a structured event logging system for high-risk AI interactions.
- Retention policy, schema, indexing, access controls. All of it.
- Backfill where you have data. Don’t backfill where you don’t, but document the gap.
This is where your engineering budget goes. If you're going to outsource anything, outsource the other work so engineering can focus here.
### Days 51 to 80 (June 14 through July 13): Disclosure and oversight
- Add AI disclosures across every relevant product surface.
- Add machine-readable metadata to AI-generated content.
- Ship the human oversight UIs for high-risk decision points.
This is where product and design need to stop saying “it doesn’t affect this quarter’s roadmap.” It does now.
### Days 81 to 100 (July 14 through August 2): Documentation and dry-runs
- Complete the technical documentation required for your risk classification.
- Run internal dry-runs of a regulator query. Can you actually produce the audit trail for a specific user’s specific interaction from four months ago? If not, fix it now.
- Train the humans doing the oversight role. They need to understand what they’re reviewing.
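The dry-run question above reduces to one query, so make it executable early. A sketch, assuming the audit store is a line-per-event JSON log with `tenant_id` and an ISO-8601 `timestamp` field (both assumed names): pull one tenant's interactions inside a date window. If this takes a script someone writes under pressure in July, the audit layer isn't done.

```python
# Sketch of the regulator dry-run query over a line-per-event audit log.
# Field names ("tenant_id", "timestamp") are assumptions about the schema.
from __future__ import annotations

import json
from datetime import datetime


def query_audit_trail(lines: list[str], tenant_id: str,
                      start: datetime, end: datetime) -> list[dict]:
    """Return parsed events for one tenant inside [start, end)."""
    hits = []
    for line in lines:
        event = json.loads(line)
        if event["tenant_id"] != tenant_id:
            continue
        ts = datetime.fromisoformat(event["timestamp"])
        if start <= ts < end:
            hits.append(event)
    return hits
```

In production this is an indexed query against the audit store, not a scan; the dry-run's job is proving the store can answer it at all.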
## The one thing that blows up the plan
If you’re an engineering leader reading this in April, you have time. If you’re reading this in July, you don’t. The honest answer at that point is either to pull high-risk AI features from your EU-facing product or to accept that your first enforcement cycle will go badly. Better to say that out loud now.
## What to do this week
Three things, in order.
Monday morning: one-hour sync between your most senior engineer and your most senior compliance person. Leave with a shared doc listing every AI-involved product surface. Share with the CTO or VP Eng by end of day.
By Thursday: classify every surface (minimal, limited, high-risk, prohibited). For high-risk ones, name an engineering owner.
By Friday: the audit-log infrastructure team exists and knows what they’re building. Even if it’s two people. Even if one of them is borrowed from a platform team. The work starts now or it doesn’t finish.
The EU AI Act isn’t a future problem anymore. It’s a planning problem you have this week. It’s also where the longstanding gap between how fast organizations produce AI code and how slowly they govern it finally gets priced: in fines, in front of regulators. Most orgs won’t realize that until it’s too late. The ones that act now get to ship on time.
If you’re already working on this, I’d love to hear what’s surprised you. If you haven’t started, forward this to whoever decides sprint priorities. Find me on X, Telegram, or LinkedIn.
Disclaimer: This article references the EU AI Act and related compliance materials for illustrative and educational purposes. It is not legal advice. You should consult a qualified legal team for compliance specifics in your jurisdiction and industry. Articles, deadlines, and classifications referenced are based on publicly available sources at the time of writing and may change. The opinions expressed are my own. I have no financial interest, business relationship, or affiliation with any specific compliance vendor mentioned. This is commentary, not legal, investment, or business advice.


