A GDNG compliance checklist for health AI infrastructure — 37 requirements across 6 domains. Track your readiness. Identify gaps. See how sovereign infrastructure resolves each one.

Where your data lives determines whether you’re compliant. GDNG mandates that health data processing occurs within sovereign Trust Centres — contractual assurances alone are insufficient.
All health data processing occurs within German or EU-based Trust Centres — no cross-border transfers to non-adequate countries
Federated learning architecture ensures raw data never leaves the originating Trust Centre. Models travel to data — data never travels.
Raw patient data never leaves the originating data controller’s secure perimeter during AI model training
Only encrypted model gradients aggregate centrally. Differential privacy prevents patient re-identification.
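The aggregation step described above can be sketched in a few lines. This is a minimal illustration of DP-style federated averaging, not Loretta's actual mechanism: the clipping norm, noise multiplier, and update values are all illustrative assumptions.

```python
import math
import random

def dp_aggregate(site_updates, clip_norm=1.0, noise_multiplier=1.1, seed=0):
    """Average per-site model updates with norm clipping and Gaussian noise.

    Sketch of differentially private aggregation; parameters are
    illustrative, not production values.
    """
    rng = random.Random(seed)
    dim = len(site_updates[0])
    total = [0.0] * dim
    for update in site_updates:
        norm = math.sqrt(sum(x * x for x in update))
        # Clip so that no single Trust Centre dominates the aggregate.
        scale = min(1.0, clip_norm / max(norm, 1e-12))
        for i, x in enumerate(update):
            total[i] += x * scale
    # Calibrated Gaussian noise masks any individual contribution.
    sigma = noise_multiplier * clip_norm
    return [(t + rng.gauss(0.0, sigma)) / len(site_updates) for t in total]

# Three Trust Centres contribute local updates; only the noisy average
# ever leaves the aggregation step.
updates = [[0.2, -0.1], [0.3, 0.05], [0.1, 0.0]]
noisy_avg = dp_aggregate(updates)
```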
Data Processing Agreements (Auftragsverarbeitungsverträge) in place with all infrastructure providers
Standard DPA templates pre-configured for GKV insurer engagements with Art. 28 compliance.
Data residency demonstrated through technical architecture — not merely contractual assurances
Architecture-level guarantee: federated nodes are physically deployed within customer infrastructure or certified German data centres.
Encryption at rest (AES-256) and in transit (TLS 1.3) for all health data stores and API endpoints
AES-256 at rest, TLS 1.3 in transit enforced across all API endpoints and Trust Centre nodes.
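Enforcing a TLS 1.3 floor in transit can be expressed directly in code. A minimal sketch using Python's standard library `ssl` module; it illustrates the policy, not the product's deployment configuration.

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    """Build a client-side TLS context that refuses anything below TLS 1.3."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    # Reject TLS 1.2 and earlier outright.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    # Certificate validation and hostname checks stay on (the defaults).
    assert ctx.check_hostname and ctx.verify_mode == ssl.CERT_REQUIRED
    return ctx

ctx = strict_client_context()
```

Any endpoint that cannot negotiate TLS 1.3 fails the handshake instead of silently falling back.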
No dependency on US-headquartered cloud providers for primary health data processing (CLOUD Act risk mitigation)
Infrastructure-independent: runs on sovereign European cloud, AWS EU, Azure EU, or on-premises. No US vendor lock-in.
Differential privacy guarantees applied to any aggregated model parameters leaving local Trust Centres
Differential privacy and gradient compression applied to all federated aggregation steps.

Every AI processing operation requires a documented lawful basis, purpose limitation, and data minimisation framework. This is where most in-house builds fail compliance review.
Lawful basis for health data processing identified — Art. 9(2)(h) preventive medicine and provision of care, Art. 9(2)(i) public interest in public health, or Art. 9(2)(j) scientific research
Configurable legal basis mapping per data type and processing operation, pre-mapped to SGB V §284ff and §6 GDNG provisions.
Data Protection Impact Assessment (DPIA) completed for all AI processing activities involving health data
DPIA documentation template and risk assessment framework provided as part of deployment package.
Purpose limitation documented — AI models trained only for specified, explicit, and legitimate prevention or care purposes
Per-endpoint purpose scoping with immutable audit trails documenting the declared purpose of every API call.
Data minimisation enforced — only necessary data fields ingested for each AI model’s specified purpose
Schema-level field filtering ensures only declared-necessary fields are processed per model configuration.
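Schema-level minimisation amounts to an allow-list per model. The sketch below shows the idea; the model names and field names are illustrative assumptions, not Loretta's actual schemas.

```python
# Declared-necessary fields per model (illustrative, not real schemas).
ALLOWED_FIELDS = {
    "readmission_risk": {"age_band", "diagnosis_codes", "prior_admissions"},
    "medication_adherence": {"age_band", "prescription_history"},
}

def minimise(record: dict, model: str) -> dict:
    """Drop every field not declared necessary for the given model."""
    allowed = ALLOWED_FIELDS[model]
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "age_band": "60-69",
    "diagnosis_codes": ["I10"],
    "prior_admissions": 2,
    "name": "should never reach the model",
}
filtered = minimise(record, "readmission_risk")
```

Undeclared fields are dropped before processing, so data minimisation holds by construction rather than by policy.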
Pseudonymisation protocols applied before analytical processing, with re-identification keys stored separately
Built-in pseudonymisation layer with k-anonymity (k ≥ 5) enforced at query time. Keys managed by data controller.
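The split between pseudonyms and re-identification keys can be sketched with keyed hashing. This is an illustration of the principle only: the key-management protocol shown (a single controller-held key) is an assumption, not the product's actual design.

```python
import hmac
import hashlib

def pseudonymise(patient_id: str, controller_key: bytes) -> str:
    """Derive a stable pseudonym via keyed hashing (HMAC-SHA256).

    The key stays with the data controller, so the processor can link
    records longitudinally but cannot reverse the mapping.
    """
    return hmac.new(controller_key, patient_id.encode(), hashlib.sha256).hexdigest()

key = b"held-by-the-data-controller-only"  # illustrative key material
p1 = pseudonymise("patient-123", key)
p2 = pseudonymise("patient-123", key)
# Same input + same key -> same pseudonym, so longitudinal analysis works.
```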
Technical and Organisational Measures (TOMs) documented per GDPR Art. 32, including access controls, audit logging, and incident response
Pre-documented TOMs aligned with BSI IT-Grundschutz, including role-based access control and per-operation audit logging.
Consent management or statutory basis documented for each data source feeding AI models
Consent and legal basis mapping per data type with configurable workflows for GKV statutory basis provisions.

Full lineage tracking from training data through model deployment to clinical recommendation. Every decision must be traceable, every model version reproducible.
Full lineage tracking for all AI model training runs — which data sources, Trust Centres, and model versions contributed to each output
Immutable audit log recording every training run: data sources, Trust Centre contributions, hyperparameters, and output model hash.
Model versioning with immutable audit logs — every deployed model traceable to its training data and parameters
Git-like model versioning with tamper-proof hash chain. Full rollback capability to any previous model state.
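A tamper-proof hash chain works by having each log entry commit to its predecessor. The sketch below shows the mechanism; the entry fields and Trust Centre names are illustrative, not the actual log format.

```python
import hashlib
import json

def chain_entry(prev_hash: str, record: dict) -> dict:
    """Append-only audit entry committing to its predecessor's hash."""
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"prev": prev_hash, "record": record, "hash": digest}

def verify(chain: list) -> bool:
    """Recompute every link; any edited entry breaks the chain."""
    prev = "genesis"
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log, prev = [], "genesis"
for run in ({"model": "v1", "data": "tc-berlin"}, {"model": "v2", "data": "tc-koeln"}):
    entry = chain_entry(prev, run)
    log.append(entry)
    prev = entry["hash"]
```

Rewriting any historical entry invalidates every later hash, which is what makes the log tamper-evident.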
Explainability documentation for all AI-generated risk scores or intervention recommendations
SHAP values plus causal effect estimates per recommendation. Interpretable by clinical staff without data science background.
Patient rights infrastructure — data subjects can request access, rectification, and erasure of data used in AI processing
Data subject access API enabling automated responses to Art. 15–17 requests across federated infrastructure.
Incident response plan for AI model failures, including patient notification and regulatory reporting procedures
Integrated incident detection with automated alerting. Template-based regulatory notification workflows.
Regular audit schedule (minimum annually) with independent review of AI processing compliance
Continuous compliance monitoring dashboard with exportable audit reports for independent review.

Health risk prediction is classified high-risk under EU AI Act Annex III. This triggers conformity assessments, bias audits, and post-market monitoring obligations from August 2026.
High-risk AI classification assessed — health risk prediction and clinical decision support classified high-risk under EU AI Act Annex III
Architecture designed from day one for high-risk classification requirements. Medical device certification pathway under evaluation.
Bias audit framework implemented — mathematical fairness metrics enforced at training time, not post-hoc
Equalized odds optimisation across SES quintiles. Intersectionality modelling (SES × gender × age). Bias correction validated across demographic subgroups.
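An equalized-odds check compares true-positive and false-positive rates across subgroups. The sketch below computes the worst pairwise gap; the subgroup labels and toy predictions are illustrative, not audited results.

```python
def rates(y_true, y_pred):
    """True-positive and false-positive rates for one subgroup."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    pos = sum(y_true)
    neg = len(y_true) - pos
    return tp / pos, fp / neg

def equalized_odds_gap(groups):
    """Worst TPR/FPR gap across subgroups; flag if above a chosen threshold."""
    tprs, fprs = zip(*(rates(y, p) for y, p in groups.values()))
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

groups = {
    "ses_q1": ([1, 1, 0, 0], [1, 0, 0, 0]),  # TPR 0.5, FPR 0.0
    "ses_q5": ([1, 1, 0, 0], [1, 1, 1, 0]),  # TPR 1.0, FPR 0.5
}
gap = equalized_odds_gap(groups)
```

Enforcing this at training time means the optimiser is penalised for the gap, rather than the gap being measured after deployment.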
Training data governance documented — data quality, representativeness, and potential bias sources assessed
Training data governance framework with representativeness analysis, bias source mapping, and data quality scoring per dataset.
Human oversight mechanisms — AI recommendations do not auto-execute clinical interventions without qualified human review
Human-in-the-loop architecture: all intervention recommendations require clinician approval before execution. No autonomous clinical actions.
Robustness testing completed — model performance validated across demographic subgroups (age, gender, SES)
Continuous fairness audits in production with automated drift detection across demographic subgroups.
AI system registered in the EU AI database as required for high-risk systems
Pre-formatted registration documentation aligned with EU AI database requirements.
Post-market monitoring plan for deployed models, including performance drift detection and retraining triggers
Continuous model performance monitoring with configurable drift thresholds and automated retraining triggers.
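A drift trigger at its simplest compares current prediction statistics against a baseline window. This is a deliberately reduced sketch; production monitoring would use distributional tests (e.g. PSI or Kolmogorov–Smirnov), and the threshold and scores here are illustrative.

```python
import statistics

def drift_detected(baseline, current, threshold=0.1):
    """Flag drift when mean predicted risk shifts beyond a threshold."""
    shift = abs(statistics.fmean(current) - statistics.fmean(baseline))
    return shift > threshold

baseline_scores = [0.12, 0.15, 0.11, 0.14, 0.13]
stable_scores   = [0.13, 0.14, 0.12, 0.15, 0.12]
drifted_scores  = [0.31, 0.28, 0.35, 0.30, 0.33]
```

When the check fires, the monitoring layer can raise an alert or queue the model for retraining.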

GDNG doesn’t exist in isolation. EHDS secondary use, Medical Device Regulation, and DiGA requirements create overlapping obligations that must be addressed in a single architecture.
EHDS secondary use readiness — pseudonymisation protocols align with EHDS Art. 33 requirements and data permit process understood
Pseudonymisation protocols pre-aligned with EHDS Art. 33. Data permit application workflows documented.
FHIR R4 interoperability — AI system ingests and outputs data in HL7 FHIR R4 format per ePA infrastructure requirements
FHIR-native data layer with schema validation on every API call. Standardised export formats for EHDS compliance.
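Per-call schema validation can be approximated with a required-field check on each incoming resource. The field lists below are a drastic simplification of the FHIR R4 structure definitions, included only to illustrate the validation gate.

```python
# Required-field lists are illustrative, not the full FHIR R4 spec.
REQUIRED = {
    "Patient": {"resourceType", "id"},
    "Observation": {"resourceType", "status", "code"},
}

def validate_fhir(resource: dict) -> list:
    """Return a list of validation errors (empty means accepted)."""
    rtype = resource.get("resourceType")
    if rtype not in REQUIRED:
        return [f"unsupported resourceType: {rtype!r}"]
    return [f"missing field: {f}" for f in sorted(REQUIRED[rtype] - resource.keys())]

obs = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org", "code": "29463-7"}]},
}
errors = validate_fhir(obs)
```

Rejecting malformed resources at the API boundary keeps non-conformant data out of the analytical layer entirely.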
MDR classification assessed — determine if AI system qualifies as medical device under MDR Annex VIII
Designed for Medical Device Class IIa certification under MDR. Clinical evaluation report framework in place.
DiGA Fast-Track alignment — if applicable, AI components meet BfArM digital health application requirements
White-label SDK enables embedding Loretta capabilities within existing DiGA applications. DiGA-compatible API format.
SOC 2 Type II or equivalent security attestation planned for enterprise deployments
SOC 2 Type II attestation planned; BSI IT-Grundschutz alignment already in place.

Compliance isn’t just technical architecture — it requires organisational governance, staff training, vendor management, and operational resilience.
Data Protection Officer (DPO) appointed and involved in all AI deployment decisions involving health data
DPO consultation workflows built into deployment process. Compliance sign-off gates at each deployment stage.
Staff training programme — all personnel handling health data AI systems trained on GDNG and GDPR requirements
Training materials and compliance onboarding documentation provided as part of enterprise deployment package.
Vendor assessment completed for all third-party AI components — sub-processor compliance verified
Full sub-processor transparency. Supply chain documented with compliance attestations for all components.
Business continuity plan for AI infrastructure — failover procedures ensuring care not disrupted by system outages
SLA-backed infrastructure with automated failover. Graceful degradation ensures care continuity during outages.
Integration testing with existing IT stack (Epic, SAP IS-H, Cerner, gematik TI) validated
API-first architecture integrates with existing Epic/SAP/Cerner stacks. No data migration required.