What Is Prevention Intelligence as a Service?

The scientific foundation, the engineering challenge, and why we believe this will reshape how Europe prevents chronic disease.

I want to be precise about what we are building and why, because the gap between health technology marketing and scientific reality has become wide enough to be dangerous.

Most companies in digital health describe what they do in terms designed to sound impressive to people who do not know the field well. We are going to do the opposite. I am going to explain the specific scientific problem we are solving, the specific methods we are using, and the specific limitations we have not yet overcome. If you are a researcher, a clinician, an insurer, or an investor, you deserve that precision. If you are a patient, you deserve it even more.

The scientific problem

Chronic disease prevention has an evidence problem that operates at a level most people do not recognise.

We know, from decades of epidemiological research, that behavioural factors — sleep, physical activity, diet, stress — are implicated in the development of Type 2 diabetes, cardiovascular disease, and metabolic syndrome. The observational evidence is extensive. But “implicated in” is not the same as “causally responsible for,” and the distinction matters enormously when you are trying to tell a specific person what to change.

Consider sleep and diabetes. Observational studies consistently show that short sleep duration correlates with higher diabetes incidence. But when researchers at the University of Bristol applied Mendelian randomisation — a method that uses genetic variants as instrumental variables to test causal direction — they found that overall sleep duration does not have a robust causal effect on glycated haemoglobin or diabetes risk. What does appear to be causal is insomnia symptoms: frequent insomnia was associated with higher HbA1c levels with effect estimates of 0.05 standard deviation units in multivariable regression and 0.52 in one-sample Mendelian randomisation. The distinction between “sleep less” and “sleep poorly” is the difference between a correlational finding and a causal one. Getting this wrong does not just waste a patient’s time. It erodes the credibility of every digital health system that claims to offer personalised prevention.
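
To make the instrumental-variable logic concrete: many summary-statistic Mendelian randomisation analyses reduce to an inverse-variance-weighted average of per-variant Wald ratios. The sketch below is illustrative only; the function is ours, real analyses combine many genetic variants, and instrument validity must be tested separately.

```python
import numpy as np

def ivw_estimate(beta_exp, beta_out, se_out):
    """Fixed-effect inverse-variance-weighted MR estimate.

    beta_exp: each genetic instrument's effect on the exposure (e.g. insomnia)
    beta_out: the same instruments' effects on the outcome (e.g. HbA1c)
    se_out:   standard errors of the outcome effects
    """
    ratios = beta_out / beta_exp        # per-variant Wald ratios
    weights = (beta_exp / se_out) ** 2  # precision weight of each ratio
    return np.sum(weights * ratios) / np.sum(weights)
```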

This is the problem we are solving. Not “how do we build a health app,” but “how do we build infrastructure that makes causal claims about behavioural prevention that are epistemologically defensible.”

The method

Our approach is built on targeted maximum likelihood estimation, introduced by Mark van der Laan and colleagues at UC Berkeley in 2006. TMLE is a semiparametric estimation framework that achieves double robustness: it combines an outcome model and a treatment model such that the estimate remains consistent if either model is correctly specified. This is not a minor technical advantage. In observational health data, where model misspecification is the norm rather than the exception, double robustness is a foundational requirement.
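
For readers who want the mechanics, here is a minimal TMLE sketch for the average treatment effect of a binary exposure on a binary outcome. It is a toy, not our production pipeline: both nuisance models are plain logistic regressions where real TMLE would use cross-fitted machine learning ensembles, and influence-curve inference is omitted. But it shows where double robustness enters: the treatment model through the so-called clever covariate, and a fluctuation step that corrects the initial outcome model.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

def tmle_ate(W, A, Y):
    """Targeted estimate of E[Y(1)] - E[Y(0)] for binary exposure A, outcome Y."""
    logit = lambda p: np.log(p / (1 - p))
    expit = lambda x: 1 / (1 + np.exp(-x))
    clip = lambda p: np.clip(p, 1e-6, 1 - 1e-6)

    # Step 1: initial outcome model Q(A, W) and its counterfactual predictions.
    X = np.column_stack([A, W])
    q = LogisticRegression(max_iter=1000).fit(X, Y)
    Q_A = q.predict_proba(X)[:, 1]
    Q_1 = q.predict_proba(np.column_stack([np.ones_like(A), W]))[:, 1]
    Q_0 = q.predict_proba(np.column_stack([np.zeros_like(A), W]))[:, 1]

    # Step 2: treatment model g(W) = P(A = 1 | W), bounded away from 0 and 1
    # so the inverse-probability terms stay stable.
    g = np.clip(LogisticRegression(max_iter=1000).fit(W, A)
                .predict_proba(W)[:, 1], 0.025, 0.975)

    # Step 3: the "clever covariate" carrying the treatment-model information.
    H_A = A / g - (1 - A) / (1 - g)

    # Step 4: fluctuation -- a one-parameter logistic regression of Y on H_A
    # with the initial outcome predictions held fixed as an offset.
    eps = sm.GLM(Y, H_A.reshape(-1, 1), family=sm.families.Binomial(),
                 offset=logit(clip(Q_A))).fit().params[0]

    # Step 5: targeted update of both counterfactual predictions.
    Q_1_star = expit(logit(clip(Q_1)) + eps / g)
    Q_0_star = expit(logit(clip(Q_0)) - eps / (1 - g))

    # Step 6: plug-in average treatment effect.
    return float(np.mean(Q_1_star - Q_0_star))
```

The double robustness is visible in the structure: if the outcome model is right, the fluctuation barely moves it; if the outcome model is wrong but the treatment model is right, the clever covariate carries the correction.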

A systematic review published in the Annals of Epidemiology found that by 2022, TMLE had been applied across seven distinct epidemiological disciplines, with 59 percent of publications originating outside the United States. The method has been validated in pharmacoepidemiology with hundreds of potential confounders — for instance, in studies of post-myocardial infarction statin use and one-year mortality. What has not been done, until our work, is applying TMLE to continuously updating behavioural data within a federated architecture for the purpose of chronic disease prevention.

We completed a systematic review of 68 peer-reviewed studies in March 2026. We identified nineteen static risk prediction models in the prevention space. Zero included causal robustness testing. Zero performed behavioural intervention attribution. Zero operated on continuously updating data. The intersection we occupy — continuous behavioural monitoring, causal inference via TMLE, and dynamic risk updating — is not contested. It is unoccupied.

We specified four directed acyclic graphs formalising the causal structures for our initial prevention models: sleep behaviour and metabolic risk, physical activity and glucose regulation, dietary patterns and inflammatory markers, and equity analysis across socioeconomic groups using the German Index of Socioeconomic Deprivation. Each DAG required resolving the post-treatment variable bias problem — a methodological challenge that arises when covariates are themselves affected by the exposure, which is inherent in continuous behavioural data. The resolution involves strict temporal specification of confounder measurement relative to formally defined intervention windows.
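
To illustrate that temporal discipline, the hypothetical sketch below builds a time-indexed DAG for an insomnia-to-HbA1c pathway (node names and edges are invented for this example) and applies the mechanical screen it enables: any candidate covariate that is a descendant of the exposure is excluded from adjustment. That screen is necessary, not sufficient; full adjustment-set selection still requires a backdoor analysis.

```python
import networkx as nx

# Time indices: _t0 is measured before the intervention window opens,
# _t1 is the exposure window, _t2 the outcome window.
dag = nx.DiGraph([
    ("age_t0", "insomnia_t1"),          ("age_t0", "hba1c_t2"),
    ("deprivation_t0", "insomnia_t1"),  ("deprivation_t0", "hba1c_t2"),
    ("baseline_bmi_t0", "insomnia_t1"), ("baseline_bmi_t0", "hba1c_t2"),
    ("insomnia_t1", "daytime_activity_t1"),   # affected by the exposure
    ("daytime_activity_t1", "hba1c_t2"),
    ("insomnia_t1", "hba1c_t2"),
])
assert nx.is_directed_acyclic_graph(dag)

exposure, outcome = "insomnia_t1", "hba1c_t2"
post_treatment = nx.descendants(dag, exposure)
candidates = sorted(set(dag.nodes) - post_treatment - {exposure, outcome})
# daytime_activity_t1 is excluded: conditioning on a post-treatment variable
# would block part of the causal effect and bias the estimate.
print(candidates)  # ['age_t0', 'baseline_bmi_t0', 'deprivation_t0']
```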

I want to be explicit about what we do not yet have. We have causal plausibility estimates. We do not yet have causal validation. Our four-stage validation cascade is designed to close that gap: Stage A is retrospective TMLE on NHANES cohort data, checking whether our average treatment effect estimates fall within published randomised trial effect size ranges. Stage B aligns our confounder framework with the NAKO cohort, Germany’s largest population study. Stage C is a controlled user study on explanation design. Stage D is the SOVEREIGN-DM randomised controlled trial. Until Stage D reports, our standard is plausibility, not proof. We say this publicly because we think transparency about the state of evidence is more valuable than premature confidence.

The architecture

The engineering challenge is as demanding as the scientific one, because the causal methods described above must operate within a sovereignty-compliant federated architecture.

Federated learning means models travel to data, not data to models. Each participating institution — an insurer, an employer, a care facility — trains the model locally, within its own data environment. No patient-level records cross institutional boundaries. The aggregated model parameters are returned to the platform, but the raw data stays sovereign.
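
In sketch form, one aggregation round of this pattern looks like the following. The institutions list and its local_update() method are hypothetical stand-ins for the institution-side runtime; the point is what crosses the boundary: parameters and a sample count, never records.

```python
def federated_round(global_weights, institutions):
    """One FedAvg-style round: the model travels to data, parameters return."""
    updates, counts = [], []
    for inst in institutions:
        # Training runs inside the institution's own data environment.
        # local_update() is assumed to return (weights_array, n_local_samples).
        local_weights, n = inst.local_update(global_weights)
        updates.append(local_weights)
        counts.append(n)
    total = float(sum(counts))
    # Sample-size-weighted average of parameters; no raw data is aggregated.
    return sum(w * (n / total) for w, n in zip(updates, counts))
```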

This is not a philosophical preference. It is a regulatory requirement. The EU AI Act, whose obligations have been phasing in since 2025, mandates transparency and auditability for AI systems in healthcare. The European Health Data Space regulation establishes the framework for secondary use of health data, with sovereignty provisions taking effect through 2029. Germany’s Gesundheitsdatennutzungsgesetz adds domestic governance requirements. A prevention intelligence platform that centralises health data is not merely risky in this regulatory environment. It is architecturally incompatible with the direction in which European health governance is moving.

The privacy-preserving mechanisms in federated health AI have matured significantly. Recent implementations have achieved privacy budgets of epsilon approximately 0.69, aligning with NIST SP 800-226 guidelines published in March 2025, which indicate that a conservative setting of epsilon less than or equal to 1 provides strong real-world privacy in most cases. Practical deployments have demonstrated that federated architectures can maintain clinically acceptable performance under moderate privacy budgets, though strict privacy settings can lead to accuracy degradation — a trade-off we monitor continuously and report transparently.
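
Mechanically, enforcing such a budget starts on the institution side: before an update leaves the local environment, its influence is bounded by norm clipping and masked with calibrated Gaussian noise. A minimal sketch follows, with the caveat that the fixed sigma below is a placeholder; in practice the noise scale is derived from a target (epsilon, delta) budget by a privacy accountant, which is how figures like epsilon of approximately 0.69 are arrived at.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, sigma=2.0, rng=None):
    """Clip an update's L2 norm and add Gaussian noise before aggregation."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))  # bound influence
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)
```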

There is an additional engineering dimension that most federated learning companies do not address: the institution-side governance layer. It is not sufficient for data to stay within the institution. The institution must be able to audit every computation, manage consent, trace data lineage, and verify that model updates conform to their governance policies. We are building an institution-side governance SDK for exactly this purpose. Without it, data sovereignty is a claim. With it, data sovereignty is a verifiable property.
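
To make “verifiable property” concrete, one hypothetical shape for an institution-side audit record is sketched below. The field names are illustrative assumptions, not the SDK’s actual schema; the principle is that every computation leaves a tamper-evident trail the institution can inspect.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib

@dataclass(frozen=True)
class ComputationAuditRecord:
    computation_id: str
    consent_scope: str               # consent basis that authorised this run
    input_lineage: tuple[str, ...]   # identifiers of local datasets touched
    update_sha256: str               # fingerprint of the outbound parameters
    policy_version: str              # governance policy the run was checked against
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def hash_update(update_bytes: bytes) -> str:
    """Hash of the only artefact that ever leaves the institution."""
    return hashlib.sha256(update_bytes).hexdigest()
```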

What PIaaS actually means

Prevention Intelligence as a Service is the category name for infrastructure that integrates these capabilities: continuous behavioural monitoring, causal attribution via TMLE on directed acyclic graphs, dynamic risk updating, sovereignty-compliant federated architecture, and an explanation layer designed for institutional and individual trust.

Each word in that category name does specific work. Prevention distinguishes this from diagnosis, treatment, and drug discovery — the domains where most health AI investment is concentrated. Intelligence distinguishes this from measurement. A fitness tracker measures. A prevention intelligence platform determines what would happen if behaviour changed — a counterfactual claim that requires causal methods. As a Service indicates that this is infrastructure, not a product. It is designed to be embedded within the existing institutional architecture of European healthcare — insurers, employers, care facilities — not to replace it.

This is not a health app. The relationship between a health app and a prevention intelligence platform is analogous to the relationship between a spreadsheet and a financial risk modelling system. They handle similar data types at entirely different levels of analytical sophistication, institutional integration, and regulatory seriousness.

Who this serves and why

We operate across three markets through common infrastructure.

WorkHealth Intelligence serves the occupational health market. Under Germany’s Arbeitssicherheitsgesetz, employers are legally obliged to provide occupational health services. The current model — reactive, clinic-based, episodic screening by Betriebsärzte — was not designed for continuous prevention. Our platform provides employers and their occupational health physicians with real-time, causal workforce health analytics: not which employees are at risk, but which specific interventions would reduce risk, by how much, and for which employee cohorts. This layer is deployed with institutional clients and is generating revenue.

GKV Sovereign Prevention Analytics serves Germany’s statutory health insurers. GKV schemes have legal prevention mandates and growing budgets under the social code, but almost no infrastructure to connect spending to outcomes. Section 140a selective contracts provide the procurement mechanism. Our platform gives GKV innovation directors what they currently lack: causal evidence about which behavioural interventions would reduce chronic disease incidence in their specific member populations, deployed within their own data environments, with the analytical rigour to withstand actuarial scrutiny.

PflegeAssistent serves long-term care facilities. Germany has 16,100 residential care facilities with increasingly complex resident populations — multimorbidity, polypharmacy, cognitive decline. Polypharmacy affects 26 to 39 percent of adults over 65 across the EU. Adverse drug reactions in this population cause preventable hospitalisations at rates that represent a systemic failure of medication safety infrastructure. Our platform provides predictive early warning — fall risk, medication interaction risk, rapid decline indicators — using the facility’s own data, within its own governance.

These three layers share common federated infrastructure. Each one generates independent revenue while feeding data quality and institutional credibility into the others. The federated network effect — where every institution that joins improves the model for all participants, without any raw data crossing boundaries — creates the compounding dynamics that distinguish infrastructure economics from application economics.

What the evidence shows about trust

A user acceptability study conducted by the Berlin Institute for Innovation in early 2026 tested how people respond to AI-supported prevention systems. The findings have direct implications for system design.

Users do not object to AI-generated health guidance. They object to opacity. When recommendations included causal explanations — this is why we think your risk changed, and this is the behavioural pathway — trust increased substantially. When recommendations arrived as algorithmic outputs without reasoning, trust collapsed. The difference was not one of degree. It was categorical.

Three user segments emerged. Detail-seeking users engage with layered cause-and-effect explanations. Cognitive-load-sensitive users prefer concise guidance with optional depth. Reactance-prone users respond only to framing that emphasises their autonomy. A prevention system that does not adapt to these preferences will fail — not because the science is wrong, but because the communication is.
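
A sketch of what that adaptation could look like in code. The three segment labels follow the study’s groups; the rendering policies themselves are our illustrative assumptions, not the study’s prescriptions.

```python
from enum import Enum

class Segment(Enum):
    DETAIL_SEEKING = "detail_seeking"
    LOAD_SENSITIVE = "load_sensitive"
    REACTANCE_PRONE = "reactance_prone"

def render(segment, headline, causal_pathway, options):
    """Adapt one recommendation to a user's explanation preferences."""
    if segment is Segment.DETAIL_SEEKING:
        # Layered cause-and-effect: surface the pathway up front.
        return f"{headline}\nWhy: {causal_pathway}"
    if segment is Segment.LOAD_SENSITIVE:
        # Concise guidance with optional depth.
        return f"{headline} (details available on request)"
    # Reactance-prone: autonomy-emphasising framing with explicit choices.
    return f"{headline}\nYour choice. Options: " + ", ".join(options)
```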

The EU AI Act’s transparency requirements are not a compliance overhead in this context. They are a design specification for the only kind of prevention system that will achieve the institutional and individual trust required for sustained engagement.

What we know and what we do not

We know that the causal inference methodology is sound. TMLE is well-established, doubly robust, and increasingly deployed across epidemiological disciplines worldwide. We know that federated architectures can preserve privacy at levels consistent with the strongest international guidelines while maintaining analytical performance. We know that no existing platform integrates continuous behavioural monitoring, causal attribution, and dynamic risk updating for chronic disease prevention. We know that the European regulatory trajectory favours sovereignty-compliant infrastructure. And we know, from the Berlin Institute study, that users trust causal explanations more than opaque algorithmic outputs.

What we do not yet know is whether our specific causal estimates, applied to our specific behavioural data streams, will produce effect sizes that withstand validation through the full cascade from retrospective analysis to randomised controlled trial. That is an empirical question. It is the question our validation programme is designed to answer. We will publish the results regardless of what they show.

This is not the kind of statement you typically find in a company blog post. But we think the standard in health technology communication should be higher than it currently is. If we are going to ask European health institutions to trust our infrastructure with their members’ prevention, the least we can do is be honest about what we have established and what remains to be proven.

We are building Prevention Intelligence as a Service in Berlin, within the European regulatory environment, with European institutional partners, using methods drawn from the frontier of causal inference. The science is specified. The architecture is operational. The validation is underway. And the need — 700 billion euros in annual chronic disease costs, 4 million European deaths per year, an ageing continent with a narrowing window — is not waiting.

Daniel Townsend, PhD is CEO of Loretta Health. Learn more at loretta.care or reach us at hello@loretta.care.
