Scientific Foundations

The Science Behind Loretta

Most health AI systems in Europe face a fundamental contradiction: the data needed to train effective models is siloed across insurers, hospitals, and research institutions, and existing regulations make centralising that data either illegal or prohibitively complex. Loretta resolves this tension by combining four scientific capabilities into a single infrastructure layer.

01
Causal AI
From correlation to intervention

Nearly all health AI systems today are predictive. They produce risk scores, such as “This person has a 62% probability of developing Type 2 Diabetes,” but they cannot answer the question that actually matters: what should we do about it? Predictive models detect correlations, not causes. They might observe that frequent GP visits correlate with higher diabetes rates, but the visits are not causing diabetes. Acting on correlations alone can be misleading or harmful.

Loretta uses causal inference, a branch of statistical science that estimates the actual effect of an intervention on an outcome, distinguishing genuine causes from confounders. Rather than estimating average effects across populations, Loretta calculates the personalised benefit of a specific intervention for each individual, producing recommendations that are robust and clinically meaningful.

This transforms Loretta from a passive risk-scoring tool into an intervention recommendation engine. Instead of simply flagging “high risk,” Loretta can recommend a specific programme estimated to reduce an individual’s 5-year diabetes risk by a measurable amount. These recommendations are grounded in causal evidence, not correlation.
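To make the idea concrete, here is a minimal sketch of individualised effect estimation in the style of a T-learner: fit one outcome model per treatment arm, then contrast the two predictions for each person. The data, covariate, and effect size below are synthetic assumptions for illustration, not Loretta's actual engine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: one covariate (BMI), a binary intervention, and an outcome
# standing in for 5-year diabetes risk. By construction, the intervention
# lowers risk by 0.10 for everyone (our assumption for this toy example).
n = 5000
bmi = rng.normal(28, 4, n)
treated = rng.integers(0, 2, n).astype(bool)
risk = 0.02 * bmi - 0.10 * treated + rng.normal(0, 0.05, n)

def fit_linear(x, y):
    """Ordinary least squares with an intercept term."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# T-learner: one model per arm.
c1 = fit_linear(bmi[treated], risk[treated])
c0 = fit_linear(bmi[~treated], risk[~treated])

def ite(x):
    """Estimated individual effect of the intervention at covariate value x."""
    return (c1[0] + c1[1] * x) - (c0[0] + c0[1] * x)

# Average estimated effect should land close to the true -0.10.
print(round(float(np.mean(ite(bmi))), 3))
```

In this randomised toy setting the contrast recovers the causal effect directly; with observational data, confounders must additionally be adjusted for, which is where the heavier machinery of causal inference comes in.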

02
Federated Learning
Intelligence travels to data, not the other way around

Health data in Germany is among the most heavily regulated in the world. Statutory health insurance data must remain within certified Trust Centres, and GDPR classifies health data as a special category with strict processing requirements. Effective AI needs large, diverse datasets, but the legal framework prohibits centralising them. Every approach that relies on collecting data into a single location hits this wall in the European market.

Federated learning inverts the traditional approach. Instead of moving data to a central model, Loretta moves the model to the data. The model trains locally within each Trust Centre, and patient records never leave the secure perimeter. Only encrypted, privacy-protected model updates are shared. The result is a continuously improving model trained across diverse populations, with full audit trails and no centralised data exposure.

Each new insurer or hospital that connects to the network improves the model for every other participant, without any of them sharing raw data. The more organisations join, the better the intelligence becomes for all. This is the network effect of data without the liability of data centralisation.
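The core mechanic can be sketched in a few lines of federated averaging (FedAvg): each site runs local training on data that never leaves it, and only model weights are pooled, weighted by each site's record count. The linear model, site sizes, and hyperparameters below are illustrative assumptions, not Loretta's production setup.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=50):
    """Local gradient descent on one site's data; records stay on site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, sites):
    """One communication round: aggregate local weights by record count."""
    updates = [(local_update(global_w, X, y), len(y)) for X, y in sites]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

rng = np.random.default_rng(1)
true_w = np.array([0.5, -0.3])

# Three "Trust Centres" of different sizes, each with private data.
sites = []
for n in (800, 1200, 400):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(0, 0.01, n)
    sites.append((X, y))

w = np.zeros(2)
for _ in range(10):          # ten communication rounds
    w = federated_round(w, sites)
```

After a few rounds the shared weights converge towards the signal present across all three sites, even though no site ever saw another's records; production systems add secure aggregation and differential privacy on top of this basic loop.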

03
API-First Architecture
Infrastructure that integrates in weeks, not years

Many health AI companies build impressive technology but struggle to deploy it inside large, conservative institutions. Their solutions require deep integration work, custom deployments, and months of IT project management. For German statutory health insurers running complex legacy environments, this approach is a non-starter because their IT teams are already stretched thin.

Loretta is designed as API-first infrastructure. Every capability, including risk scoring, intervention recommendations, and fairness audits, is available as a secure, documented endpoint. There is no platform to install. Insurers connect through a standardised gateway and integrate through simple API calls, enabling a typical pilot deployment in four to six weeks rather than twelve to eighteen months.

Beyond compressing deployment timelines, the API-first model converts large upfront capital expenditure into predictable operating costs, with transparent, usage-based pricing so insurers pay only for what they use.
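An integration of this kind reduces to ordinary authenticated HTTP calls. The sketch below shows what such a call could look like; the gateway URL, endpoint path, field names, and bearer-token scheme are hypothetical assumptions for illustration, not Loretta's real API.

```python
import json
from urllib import request

# Hypothetical gateway address for illustration only.
BASE_URL = "https://api.example-gateway.de/v1"

def build_request(member_id, intervention, token):
    """Assemble a recommendation query; no patient record leaves the insurer."""
    payload = json.dumps({
        "member_id": member_id,        # pseudonymised identifier
        "intervention": intervention,  # e.g. a prevention programme code
    }).encode("utf-8")
    return request.Request(
        f"{BASE_URL}/recommendations",
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("M-1042", "dpp-lifestyle", "…token…")
# In a live integration, urllib.request.urlopen(req) would submit the call
# and the response body would carry the recommendation and its explanation.
```

Because every capability is exposed this way, the insurer's IT team integrates against documented endpoints rather than installing and operating a platform.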

04
GDNG-Native Compliance
Built for the regulatory landscape, not retrofitted

Germany’s Health Data Use Act (GDNG) creates a new legal framework for health data in research and AI. The EU AI Act classifies health AI as “high-risk,” triggering mandatory requirements for transparency, bias auditing, and human oversight. Most health AI vendors build first and address compliance later. This retrofit approach is slow, fragile, and expensive.

Loretta is GDNG-native. The architecture was designed from day one around regulatory requirements. Loretta runs natively within Trust Centre infrastructure, ensuring no patient-level data ever leaves the secure environment. Complete audit trails record every model decision and data access event. All outputs include robust explanations that satisfy EU AI Act requirements for high-risk AI systems.

Instead of assembling specialist teams and building sovereign AI infrastructure from scratch, insurers access pre-certified endpoints that handle data sovereignty, causality, equity, and auditability by design. For insurers facing regulatory deadlines, Loretta makes compliance possible at scale.
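One building block of auditability-by-design is a tamper-evident log. The sketch below shows the general technique, a hash-chained audit trail in which altering any past entry invalidates every later hash; the field names and chaining scheme are our illustrative assumptions, not Loretta's actual implementation.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only log where each entry commits to the previous one."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis hash

    def record(self, event: dict) -> dict:
        entry = {"ts": time.time(), "event": event, "prev": self._prev}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._prev = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any tampered entry breaks verification."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            body = {"ts": e["ts"], "event": e["event"], "prev": e["prev"]}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.record({"type": "model_decision", "model": "risk-v1"})
trail.record({"type": "data_access", "scope": "cohort-stats"})
```

Here `trail.verify()` passes as long as the log is untouched, while editing any recorded event or reordering entries causes it to fail, which is the property regulators look for in a complete audit trail.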

How the Four Pillars Work Together

Each pillar addresses a distinct challenge, but their power is in the combination. Causal AI without data sovereignty is illegal in Europe. Data sovereignty without causal AI produces risk scores no one can act on. Both without rapid deployment remain in the lab. And all three without regulatory-native design face months of compliance rework. Loretta integrates these four pillars into a single coherent system so insurers get actionable, compliant, and deployable intelligence from day one.

Clinical Validation

Evidence-based validation from first principles

We are conducting a funded research study in collaboration with the Berliner Institut für Innovationsforschung (BIFI) to establish the psychological and behavioural foundations of AI-driven causal prevention. The BIFI study systematically examines how end users perceive, trust, and engage with causal health recommendations. It addresses acceptance, explainability thresholds, perceived fairness, and cognitive load. These findings directly inform Loretta’s design principles and reduce implementation risk before clinical deployment.

Building on these foundational insights, we are preparing a 200-patient randomised controlled trial designed to clinically validate our causal inference engine against standard diabetes management protocols.

Evidence Base

Selected Research

Identifying top ten predictors of type 2 diabetes through machine learning analysis of UK Biobank data
Author(s): Lugner, M., Rawshani, A., Helleryd, E. et al.
Journal: Scientific Reports 14, 2102 (2024)
Effective questionnaire-based prediction models for type 2 diabetes across several ethnicities: a model development and validation study
Author(s): Kokkorakis, Michail et al.
Journal: eClinicalMedicine (The Lancet)
Learning from the machine: is diabetes in adults predicted by lifestyle variables? A retrospective predictive modelling study of NHANES 2007–2018
Author(s): Riveros Perez E, Avella-Molano B
Journal: BMJ Open 2025;15:e096595
Predicting the Development of Type 2 Diabetes in a Large Australian Cohort Using Machine-Learning Techniques: Longitudinal Survey Study
Author(s): Zhang L, Shang X, Sreedharan S, Yan X, Liu J, Keel S, Wu J, Peng W, He M
Journal: JMIR
Development and validation of the type 2 diabetes mellitus 10-year risk score prediction models from survey data
Author(s): Stiglic, Gregor et al.
Journal: Primary Care Diabetes, Volume 15, Issue 4, 699–705
Development and validation of QDiabetes-2018 risk prediction algorithm to estimate future risk of type 2 diabetes: cohort study
Author(s): Hippisley-Cox J, Coupland C
Journal: BMJ 2017;359:j5019
Burden and attributable risk factors of non-communicable diseases and subtypes in 204 countries and territories, 1990–2021: a systematic analysis for the Global Burden of Disease Study 2021
Author(s): Li, J., Pandian, V., Davidson, P. M., Song, Y., Chen, N., & Fong, D. Y. T.
Journal: International Journal of Surgery, 111(3), 2385–2397 (2025)
Prognosis — A Wearable Health-Monitoring System for People at Risk: Methodology and Modeling
Author(s): Pantelopoulos, A., Bourbakis, N. G.
Journal: IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 3, pp. 613–621 (May 2010)