AI Risk Depends on the System You’re Actually Building

A Conversation with Franziska Weindauer, CEO of TÜV AI.Lab
AI adoption is accelerating in high-impact domains (healthcare, manufacturing, mobility) while clear, shared standards and processes for how to evaluate these systems are still evolving. The real question is no longer whether AI works, but how we define when it is safe, robust, and trustworthy.
To get some more concrete answers to these questions, Sif, our Head of Community and Growth, spoke with Franziska Weindauer, CEO of TÜV AI.Lab. She shared detailed insights into the future (and current state of affairs) of AI regulation, and what trusted AI looks like in practice.
From your point of view, what is the most misunderstood aspect of AI safety or evaluation today?
That the requirements for AI safety and trustworthiness are always the same. If we take a closer look at the AI Act, there are low-risk areas that come with almost no regulation. That is appropriate, because there the interest in innovation clearly prevails.
And then there are other areas where AI comes with certain risks. Where life and limb or fundamental rights are at stake, the requirements for AI safety are naturally higher. We need to find a balance here: one that allows innovations to reach the market quickly while keeping critical risks in check.
That's why we always have to look at what kind of AI application we're dealing with. To determine the appropriate risk class according to the AI Act, we've developed a free tool called the TÜV AI Act Risk Navigator. It allows you to quickly find out which risk class a system or model falls into.
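For illustration only, here is a drastically simplified sketch of the tiered logic behind this kind of classification. The tier names follow the AI Act (prohibited, high-risk, limited, minimal), but the checks below are toy assumptions, not the Risk Navigator's actual methodology.

```python
# A toy sketch of the AI Act's tiered logic. The checks are drastically
# simplified assumptions for illustration; real classification (and the
# TÜV AI Act Risk Navigator) weighs far more criteria.

from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk (e.g. Annex III use cases, regulated products)"
    LIMITED = "limited risk (transparency duties)"
    MINIMAL = "minimal risk (almost no regulation)"

def classify(use_case: dict) -> RiskTier:
    """Map a coarse description of an AI use case to a risk tier."""
    if use_case.get("prohibited_practice"):      # e.g. social scoring
        return RiskTier.PROHIBITED
    if use_case.get("safety_component") or use_case.get("annex_iii_domain"):
        return RiskTier.HIGH                     # e.g. medical devices, hiring
    if use_case.get("interacts_with_humans"):    # e.g. chatbots
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify({"annex_iii_domain": True}).value)       # high-risk ...
print(classify({"interacts_with_humans": True}).value)  # limited risk ...
```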
Do you think Europe can compete globally in the field of AI with its approach to trustworthy AI?
Yes, for sure! In the public debate, we often reduce the term AI to chatbots and frontier models; that's become the shorthand. But we have 70–80 years of AI research and applications that this debate tends to ignore, especially machine learning systems embedded in real-world processes.
In Europe especially, we should think much more about how we create value with AI beyond chatbots. It’s not that chatbots aren’t a thing. But the real value comes from integrating AI in medicine, production lines, cars — high-risk, high-impact environments. That’s where the greatest benefit, and the greatest responsibility, lies.
We have realised that, for AI technologies to be accepted and widely adopted in sensitive areas, people must be able to trust them. Assessments and certifications provide this proof of trustworthiness and often improve AI quality in the process, too. Trustworthiness and quality are USPs that set Europe apart in global competition.
What are the biggest challenges regarding AI compliance?
With traditional product safety, you can define very concrete test criteria. If a window in a sensitive building needs category 2 safety glass, you can specify the material, the thickness, the structure. You can measure it. You can easily prove compliance.
With AI, it’s completely different. Every use case is different. There are many possible metrics within the AI Act criteria of transparency, bias, robustness, accuracy, and so on – and it’s up to the provider to decide which measurements, testing and documentation methods are required for their system. Nevertheless, the abstract requirements of the laws and underlying standards must be met.
We just don’t have 30 years of evidence telling us “this configuration is safe.” That interpretative space makes compliance harder. And it’s why following the AI Act is challenging, not because the principles are unclear, but because applying them requires judgement.
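To make that interpretative space a bit more tangible, here is a minimal sketch of two of the many possible measurements a provider might choose: overall accuracy and a simple bias indicator (the gap in positive-prediction rates between two groups). The metric choice, the toy data, and any thresholds you would attach are assumptions for illustration; the AI Act does not prescribe specific formulas.

```python
# Illustrative only: two of many possible measurements a provider might pick.
# The metric choice and the toy data below are assumptions, not a prescribed
# AI Act test.

def accuracy(y_true: list[int], y_pred: list[int]) -> float:
    """Share of predictions that match the ground-truth labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def parity_gap(y_pred: list[int], group: list[str]) -> float:
    """Difference in positive-prediction rates between groups 'a' and 'b'."""
    def rate(g: str) -> float:
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(preds) / len(preds)
    return abs(rate("a") - rate("b"))

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(f"accuracy:   {accuracy(y_true, y_pred):.2f}")   # 0.88
print(f"parity gap: {parity_gap(y_pred, group):.2f}")  # 0.25
```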
What’s one truth about AI evaluation you wish every founder understood at Seed or Series A stage?
Start early.
First, understand whether you actually have to comply with regulation. As mentioned before, many domains don't fall under strict requirements.
But if you operate in a regulated space, for example medical devices, or if you want to become a supplier in global value chains, certification will matter. And if you start documenting too late, you will have to redo everything.
You need to document decisions from the beginning. Set up standard operating procedures. Write down why you are doing things a certain way. Certification is not something you “add” before market entry. It shapes how you build, from the very beginning.
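As a minimal sketch of what "write down why you are doing things a certain way" can look like in practice, here is a hypothetical decision record. The fields and the example entry are illustrative assumptions, not a format required by the AI Act or any standard.

```python
# A hypothetical decision record; fields and example entry are illustrative
# assumptions, not a format required by the AI Act or any standard.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    when: date
    decision: str
    rationale: str                 # why you are doing things this way
    alternatives: list[str] = field(default_factory=list)

log = [
    DecisionRecord(
        when=date(2025, 3, 14),
        decision="Exclude pre-2020 records from training data",
        rationale="Labeling guidelines changed in 2020; older labels "
                  "are inconsistent with current ones.",
        alternatives=["Relabel old records", "Down-weight old records"],
    ),
]

for entry in log:
    print(f"{entry.when}: {entry.decision} ({entry.rationale})")
```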
What does ‘robustness’ actually mean in practice?
It means the system remains reliable when inputs change. For example, that automatic traffic sign recognition also works when it is raining, or that a medical diagnosis made with AI support is still valid even if there are perturbations on the CT scan.
It means having solid data governance. It means designing systems that meet certain requirements independent of the specific underlying technology.
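To make the perturbation idea concrete, here is a minimal sketch that compares a toy model's accuracy on clean inputs with its accuracy on the same inputs after adding noise, a crude stand-in for rain on a camera or artifacts on a scan. The model, noise level, and data are placeholders, not a prescribed test procedure.

```python
# A toy robustness check: accuracy on clean vs. noise-perturbed inputs.
# Model, noise level, and data are placeholders for illustration.

import random

def noisy(x: list[float], sigma: float = 0.1) -> list[float]:
    """Perturb each feature with Gaussian noise."""
    return [v + random.gauss(0, sigma) for v in x]

def accuracy(model, inputs, labels) -> float:
    return sum(model(x) == y for x, y in zip(inputs, labels)) / len(labels)

model = lambda x: int(x[0] > 0)  # toy classifier: sign of the first feature

random.seed(0)
inputs = [[random.uniform(-1, 1)] for _ in range(1000)]
labels = [model(x) for x in inputs]  # clean accuracy is 1.00 by construction

clean = accuracy(model, inputs, labels)
perturbed = accuracy(model, [noisy(x) for x in inputs], labels)
print(f"clean: {clean:.2f}, perturbed: {perturbed:.2f}")
# A large gap flags a robustness problem worth investigating before deployment.
```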
That’s why a technology-neutral approach, like the AI Act, is more productive than reacting politically to every new incident.
If we define core requirements that AI systems must meet, regardless of architecture, we mitigate most risks structurally, instead of chasing them episodically.
Thanks to Franziska for sitting down with Sif and sharing!
The TÜV AI.Lab aims to make Europe a hotspot for safe and trustworthy AI by operationalizing regulatory requirements into practical assessment, testing and certification approaches.
Become a part of the AI Campus.
There are many ways to join our community: sign up to our newsletter, or get in touch with us directly.
