

Safe & Secure AI Community: Dr. Gesina Schwalbe

Methods for the safety assurance of perception DNNs in the context of AD – an overview
Assuring the safety of perception functions is a critical ingredient of automated driving. This encompasses measures throughout the development lifecycle to avoid, reduce, mitigate, and handle risks.
Join the next Safe & Secure AI Community meetup, where we welcome Dr. Gesina Schwalbe, who completed her doctorate at Continental, for a second time. She will give an overview of the broad spectrum of methods and metrics that can be applied to increase and demonstrate safety when relying on deep-neural-network-based perception functions.
Register below to attend in person at the AI Campus, or join online here.