The Future of Early Cancer Detection is AI-Native

The future of preventative medicine is already being deployed at scale, and some of it originated right here at the Merantix AI Campus. Vara has achieved what many in the sector thought impossible: moving beyond the lab to become a market leader, with nearly 50% of all mammograms in Germany now running through their AI system. This isn't just a technical milestone; it is a fundamental shift in how we approach high-stakes oncology.
By proving that AI can act as an independent reader in the world's largest prospective study, Vara is setting a new global benchmark for clinical confidence. To understand how they navigated the jump from a research-backed startup to a structural pillar of the healthcare system, our Head of Community and Growth, Sif, sat down with Jonas Muff, CEO and founder of Vara. They discussed the transition from "decision support" to "autonomous AI," the critical role of physician-led transparency, and how the next generation of medical infrastructure will be defined by measurable efficiency.
In 2025, Vara became the first AI system verified as an independent second reader in breast cancer screening. What does this milestone really change in clinical practice?
We have built an AI system that can independently find cancer on a mammogram, which means we can potentially replace one of the two radiologists currently involved in a double-reading breast cancer screening setting. Over the last couple of years, we ran the largest prospective trial in the world, with almost half a million women in Germany. We were able to show that we find significantly more cancers (the detection rate increases) while, at the same time, the recall rate decreases.
This all proves that it is actually possible to bring something into massive adoption, even here in Germany. Currently, 50% of all clinics in the country have adopted our AI. However, they are still using it as a decision support system, because the regulatory framework doesn't yet allow the AI to make decisions on its own. The technology is there and the evidence is there; now we are pushing to change the guidelines and the public insurance systems to pay for the impact we've proven.
Where do you see the biggest gaps today between how medical AI is built and how it is evaluated or regulated?
The main problem is that we lack an incentive system and a standardized reimbursement pathway for medical devices that mirrors what exists for pharmaceuticals. In Germany, the AMNOG system for pharma allows for fast market access where you price the product and then back it up with data. Nothing like that exists for medical devices.
We are a high-risk AI "Software as a Medical Device" product, and there is no standardized pathway where people look at outcomes. Even with our study of half a million women (the largest in the world in our field), we are looking at a process of one or two years just to evaluate whether an additional study is required. That additional study could take another four or five years, even though the evidence is there today. We need a legal pathway for an accelerated, outcome-oriented reimbursement scheme.
What have you learned about the reality of clinical AI adoption versus the perceived value of "quality improvements"?
This is going to sound a bit bad, but I learned the hard way that quality improvements don't matter nearly as much as people think. If we help radiologists find more cancers, they do care (it would be unfair to say they don't), but when it comes to the invoice, they don't care as much as you'd expect.
The reason is simple: they aren’t paid more by the healthcare system for finding more cancers or producing fewer false positives. In fact, they are sued if they miss a cancer, so they are incentivized to find more even at the expense of efficiency. Early adopters care about quality, but for the mass market, the value proposition must be efficiency. The system is screaming for efficiency. To drive adoption, you have to show them the real impact on their time, which is why we’ve considered things like embedded time-tracking in the product.
How does the role of the radiologist evolve as AI moves from a "tool" to an "independent reader"?
At some point, radiology centers will need to disclose whether they use AI or not, and patients will choose their physician based on that. We already see this in private settings, where radiologists buy AI as a competitive advantage to demonstrate their quality to patients and referring gynecologists.
Regarding transparency, the radiologist needs to be in a position to explain the impact of the AI to the patient. Our responsibility as manufacturers is to provide "real-world monitoring"—dashboards where they can see exactly how the AI has changed their specific results. The doctor remains in charge, but the AI becomes a data source, much like the images themselves. They aggregate the AI's findings into a final decision and must be able to explain why they agreed or disagreed with the system. That is the level of transparency that is actually practical and scalable.
Thanks to Jonas for chatting with Sif, and congrats on all the great work Vara is doing!
Vara AI brings radiologists clinically proven, CE-marked intelligence, so cancers are detected earlier and every read is more confident.
Become a part of the AI Campus.
There are many ways to join our community. Sign up to our newsletter below, or select one of the other two options and get in touch with us: