Jan Reich

Jan Reich has led the Safety Engineering department at the Fraunhofer Institute for Experimental Software Engineering IESE since 2024. The department develops systematic assurance, validation, and safety monitoring methods that enable the safe market entry of innovative systems, focusing on autonomous systems that operate in complex environments and often incorporate innovations such as machine learning. Previously, Jan Reich worked as an Expert Scientist at the institute on the topic "Dynamic Assurances for Connected Autonomous Systems." In the German lighthouse project "PEGASUS Verification and Validation Methods (VVM)" of the VDA flagship initiative "Automated Connected Driving," he coordinated the safety argumentation framework aimed at the approval of highly automated driving systems.

GSN with safeTbox: Tool for Safety Argumentation

GSN with safeTbox: A state-of-the-art Professional Tool for Safety Argumentation

In today’s rapidly evolving automotive and aerospace industries, ensuring system safety and regulatory compliance is critical. Existing and upcoming regulations and standards (e.g. the AFGBV (German L4 law), EU…

LLM-Human Co-Engineering to Increase the Efficiency of Safety Engineering Processes (HARA)

Unlocking Automotive Safety: How LLM-Augmented Tools Are Transforming Hazard and Risk Assessments

In the ever-evolving landscape of automotive safety engineering, the Hazard Analysis and Risk Assessment (HARA) process is crucial. This procedure demands extensive engineering expertise to meet the requirements…