For this instalment of the Safe AI Tech Community chats, the focus will be a “Proposal for AI Safety and Security Framework”, covering topics such as an overview of ongoing activities, collaboration tools, and a presentation contributed by IAV.
Applying AI in critical situations imposes three requirements, derived from a proper risk analysis of the target application.
First, it requires AI architectures suitable for critical areas, e.g. multi-staged designs or architectures with supervisory components.
The second requirement is a strict process for safety argumentation. This argumentation provides evidence that the trained model architecture mitigates all identified risks to an appropriate level. The evidence can be life-cycle-based, e.g. documentation of sufficient training coverage, or performance-based.
Lastly, a toolbox for the efficient generation of this evidence is essential for applying AI beyond a proof of concept.
This talk proposes a framework with these elements. Its applicability is demonstrated with an automotive use case.
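To make the idea of a supervisory component more concrete, here is a minimal sketch of how a multi-staged architecture might gate a model's output behind a confidence check. All names (`SupervisedModel`, the threshold, the toy model) are illustrative assumptions, not part of the framework presented in the talk:

```python
# Hypothetical sketch: a supervisory stage that accepts a model's
# prediction only when its confidence clears a threshold, and
# otherwise falls back to a safe default action.

from dataclasses import dataclass
from typing import Callable, Tuple


@dataclass
class SupervisedModel:
    model: Callable[[float], Tuple[str, float]]  # returns (prediction, confidence)
    threshold: float                             # minimum confidence to accept
    fallback: str                                # safe default action

    def predict(self, x: float) -> str:
        prediction, confidence = self.model(x)
        # Supervisory stage: pass the model's output through only if
        # its confidence is high enough; otherwise degrade safely.
        if confidence >= self.threshold:
            return prediction
        return self.fallback


# Toy stand-in "model" whose confidence drops for inputs far from zero.
def toy_model(x: float) -> Tuple[str, float]:
    return ("proceed", 1.0 / (1.0 + abs(x)))


supervised = SupervisedModel(model=toy_model, threshold=0.5, fallback="stop")
print(supervised.predict(0.0))  # high confidence -> "proceed"
print(supervised.predict(9.0))  # low confidence  -> "stop"
```

In a real safety case, the supervisor would of course be far more elaborate (e.g. plausibility checks or redundant channels); the point of the sketch is only the structural separation between the learned component and the supervisory stage.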
Here are some IMPORTANT details:
The event will again be held under 2G+ rules; in our case this means that you (i) must be vaccinated or recovered, ideally also having received a booster shot, and (ii) must additionally bring a negative Covid test from the same day (i.e., 06/05). If you cannot make it to the AI Campus but would be interested in dialling in, let us know and we will again try to set up an online participation option.