EU AI Act High-Risk Deadline: Swedish Prep List for August 2026
On 2 August 2026, the bulk of the EU AI Act's high-risk obligations under Annex III enter into force. For any Swedish B2B company whose systems touch hiring, credit scoring, critical infrastructure, education, employment management, essential services, law enforcement, migration, or justice, the deadline is four months out. This is a practical preparation list sized to a four-month runway, with monthly milestones and the artifacts auditors will look for.
The Foundation: Is Your System High-Risk?
Annex III defines the high-risk categories by use case, not by technology. A hiring recommendation system is high-risk regardless of whether it uses a small classifier or a frontier LLM. A chatbot that answers customer product questions is not high-risk, even if it is powered by the same frontier LLM. The classification must be done per-system, not per-vendor.
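The per-use-case rule can be made concrete in a small classification record. A sketch, with illustrative names and category labels that paraphrase Annex III rather than quoting the legal text:

```typescript
// Category labels are a paraphrase of Annex III, not the legal text.
type AnnexIIICategory =
  | 'biometrics'
  | 'critical-infrastructure'
  | 'education'
  | 'employment'
  | 'essential-services'
  | 'law-enforcement'
  | 'migration'
  | 'justice'
  | 'none';

interface SystemClassification {
  systemName: string;
  intendedPurpose: string; // the use case drives classification, not the model
  category: AnnexIIICategory;
  rationale: string;       // why this category applies (or does not)
  reviewedByBoard: boolean;
}

// High-risk status follows from the Annex III category alone.
function isHighRisk(c: SystemClassification): boolean {
  return c.category !== 'none';
}
```

Note that the model never appears in the decision: a hiring recommender and a product-FAQ chatbot built on the same LLM get different classifications because their intended purposes differ.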
The Artifacts Due by 2 August
- System classification. A documented assessment of which systems are high-risk under Annex III and why. Board-reviewed.
- Technical documentation (Annex IV). A comprehensive dossier for each high-risk system: description, intended purpose, design choices, data sources, training methodology, performance metrics, risk management, human oversight, and post-market monitoring plan.
- Risk management system (Article 9). Ongoing process, not a static document. Identifies, estimates, evaluates, and mitigates risks over the system's lifecycle.
- Data and data governance documentation (Article 10). Training, validation, and test data descriptions; relevance, representativeness, bias considerations, and any gaps.
- Logging (Article 12). Automatic recording of events for the lifetime of the system. Scope, retention, and access documented.
- Transparency and information to users (Article 13). Instructions for use that enable deployers to interpret outputs and use the system appropriately.
- Human oversight (Article 14). Measures enabling natural persons to understand, monitor, and intervene in the system's operation.
- Accuracy, robustness, cybersecurity (Article 15). Documented performance, resilience, and security measures.
- Conformity assessment. Either internal control or notified body, depending on the system.
- CE marking and EU database registration. For providers; deployers have related but distinct obligations.
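The artifact list above lends itself to a per-system readiness tracker. A minimal sketch, with illustrative identifiers that paraphrase the Articles rather than name them legally:

```typescript
// Artifact identifiers paraphrase the obligations above; not legal terms.
const REQUIRED_ARTIFACTS = [
  'classification',
  'technical-documentation', // Annex IV
  'risk-management',         // Article 9
  'data-governance',         // Article 10
  'logging',                 // Article 12
  'transparency',            // Article 13
  'human-oversight',         // Article 14
  'accuracy-robustness',     // Article 15
  'conformity-assessment',
  'ce-marking-registration',
] as const;

type Artifact = (typeof REQUIRED_ARTIFACTS)[number];

// Returns the artifacts still outstanding for one high-risk system.
function missingArtifacts(done: Set<Artifact>): Artifact[] {
  return REQUIRED_ARTIFACTS.filter((a) => !done.has(a));
}
```

Running this per system each week gives a burn-down the board can read at a glance.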
A Four-Month Runway
Month 1 (April). Classification and scoping
- Inventory every AI system in the organisation.
- Classify each against Annex III and the prohibitions in Article 5.
- Confirm provider vs deployer role for each system.
- Decide which systems continue, which are modified, and which are paused.
Month 2 (May). Documentation and logging scaffolding
- Annex IV documentation started in parallel for all high-risk systems.
- Article 12 logging schema defined; deployed to production with a retention policy.
- Human-oversight design documented; UI elements for intervention specified.
- Risk management system template operationalised.
Month 3 (June). Data governance and conformity
- Data-governance evidence completed: training-data representativeness and bias assessment documented.
- Conformity assessment process started; if a notified body is needed, engage now.
- Accuracy and robustness test results produced.
- Transparency instructions for use published.
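The Act does not mandate a specific bias metric, so the choice of test is yours to document. One common heuristic for selection-rate disparity is the "four-fifths rule", sketched here purely as an illustration of Article 10 evidence:

```typescript
// Illustrative only: the AI Act does not prescribe this metric.
interface GroupOutcomes {
  group: string;
  selected: number;
  total: number;
}

// Flags groups whose selection rate falls below 80% of the best group's rate.
function fourFifthsCheck(groups: GroupOutcomes[]): string[] {
  const rates = groups.map((g) => ({ group: g.group, rate: g.selected / g.total }));
  const best = Math.max(...rates.map((r) => r.rate));
  return rates.filter((r) => r.rate < 0.8 * best).map((r) => r.group);
}
```

Whatever metric you pick, the auditable artifact is the documented rationale for picking it, plus the results per protected group.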
Month 4 (July). Finalise, register, prepare to operate
- Conformity assessment concluded; CE marking applied.
- EU database registration completed.
- Post-market monitoring plan active, with a 15-day incident-reporting runbook.
- Board sign-off documented.
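The incident-reporting runbook mostly needs unambiguous deadline arithmetic. A minimal sketch of the 15-day general case under Article 73, assuming awareness timestamps in UTC (shorter deadlines apply to the most serious incident types, so the runbook should branch on severity):

```typescript
// General case under Article 73: report within 15 days of awareness.
// Severity-dependent shorter deadlines are out of scope of this sketch.
function reportDeadline(awareness: Date, days = 15): Date {
  const d = new Date(awareness);
  d.setUTCDate(d.getUTCDate() + days);
  return d;
}
```

The point of encoding this is that "15 days" is a calendar-day clock from awareness, not from root-cause analysis; the runbook should start the clock the moment an incident is logged.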
The Logging Schema That Satisfies Article 12
// Annex IV-aligned event log for a high-risk system
interface AIActLogEvent {
  eventId: string;       // ULID or UUID
  systemVersion: string; // semver or build hash
  timestamp: string;     // ISO 8601 UTC
  inputRef: string;      // hash reference; no PII in the log
  outputRef: string;     // hash reference
  decision: string;      // high-level outcome
  confidence?: number;
  humanInLoopStage?: 'none' | 'advised' | 'approved' | 'overridden';
  humanOverride?: {
    reason: string;
    userId: string;
    timestamp: string;
  };
  riskFlags?: string[];
  retentionClass: 'short' | 'medium' | 'long';
}
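A usage sketch of the schema, assuming a Node.js runtime (the `hashRef` helper is ours, not part of the Act or any library): the raw payload is hashed so the log carries a reference, never the input itself.

```typescript
import { createHash, randomUUID } from 'node:crypto';

// Hypothetical helper: store a hash reference, never the raw payload,
// so no PII ends up in the log itself.
function hashRef(payload: string): string {
  return createHash('sha256').update(payload).digest('hex');
}

// Matches the AIActLogEvent interface above.
const event = {
  eventId: randomUUID(),
  systemVersion: '1.4.2',
  timestamp: new Date().toISOString(),
  inputRef: hashRef('<candidate CV text>'),
  outputRef: hashRef('<ranking output JSON>'),
  decision: 'shortlisted',
  confidence: 0.87,
  humanInLoopStage: 'advised' as const,
  retentionClass: 'long' as const,
};
```

The hash references let auditors correlate log entries with the payload store (which has its own access controls and retention) without the log duplicating personal data.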
The Swedish Angle
- Competent authority. Integritetsskyddsmyndigheten (IMY) leads AI Act enforcement in Sweden, with sector authorities (Finansinspektionen, Läkemedelsverket, etc.) contributing within their respective domains.
- AI Sandbox. The Swedish regulatory sandbox is the right venue for high-risk systems still in design. Engage early if you need regulator-side input.
- Multi-lingual documentation. Instructions for use should be available in Swedish where deployers are Swedish. Not always a legal requirement, but a de facto procurement expectation.
The Honest Risk
Teams that started classification in 2025 are on track. Teams starting in April 2026 are already late: not fatally, but late enough to require focused effort and realistic scope cuts. Penalties run up to €35M or 7% of global turnover for prohibited practices, and €15M or 3% for most high-risk obligations; either way, the economic case for staffing this work properly makes itself.