This is a question that we, Dr.-Ing. Rasmus Adler as “Program Manager Autonomous Systems” at Fraunhofer IESE, and Dr. Patrik Feth as member of the group “Advanced Safety Functions & Standards” at SICK AG, are confronted with again and again. In this post, we will therefore address the issue of standards for autonomous systems and provide an overview of initiatives aimed at regulating the use of Artificial Intelligence in safety-critical systems. We originally prepared this overview for ourselves, as we wanted to make a deliberate choice regarding which research groups and standardization committees we would get involved in with our expertise. With this post we want to strengthen exchanges within the research and standardization community in the field of safety assurance of AI and create synergies. We are looking forward to getting feedback and will also regularly update this post accordingly.
In principle, there is broad agreement that Artificial Intelligence needs boundaries. However, the notions of what »Artificial Intelligence (AI)« actually is still differ greatly. There is also no consensus yet on how to implement such boundaries in the form of laws and standards. Nevertheless, many efforts are underway worldwide to reach a consensus, and initial results are already available.
Are there already standards for autonomous systems?
The definition of AI is currently being discussed by the ISO/IEC JTC 1/SC 42 committee, for example. »Ethics Guidelines for Trustworthy AI« have been drawn up at the European level by the High-Level Expert Group on AI. These guidelines proceed on the assumption that all legal rights and obligations that apply to the processes and activities involved in developing, deploying, and using AI systems remain mandatory and must be duly observed (page 8). These laws include the German Product Safety Act with the associated Machinery Directive, which are of particular importance in the context of safety.
The current laws do not, however, make any concrete provisions regarding the development of safety-critical systems. They merely require, by various means, compliance with the state of the art and the state of practice. This is where standards come into play, because standards should reflect this state as well as possible. To be able to do so, they must be updated regularly to reflect new technological developments. Traditionally, such adjustments have been rather reactive in nature: industry representatives agree on a minimum level that can be regarded as the current standard. Regarding the use of AI in safety-critical applications, however, a proactive approach is increasingly being taken, with safety experts from research and industry jointly developing recommendations for action and application rules. In the following, we will focus on work and working groups pursuing this proactive approach.
Already published standards for autonomous systems (including technical reports, DIN SPECs, DKE application rules, etc.)
Here we list already published documents from standardization committees that concern AI and autonomous systems. At this time, many other documents are under preparation and will be published in the near future. Please see the list below for current initiatives. We will gladly extend the list with additional elements. Simply use the comment function below.
- DIN SPEC 92001-1
The aim of DIN SPEC 92001 is to establish a quality-assuring and transparent lifecycle for AI modules. The first part of the planned 92001 series establishes a framework for this.
https://www.din.de/de/wdc-beuth:din21:288723757
- UL 4600
UL 4600 focuses on constructing a safety case for autonomous systems and provides a framework for doing so. Fraunhofer IESE is on the review committee in order to provide support with its industry experience and its research expertise.
https://edge-case-research.com/ul4600/
- ISO/PAS 21448
Developed for the automotive sector, this standard on the Safety of the Intended Functionality (SOTIF) addresses the limits of the meaningful usability of algorithms and sensor systems and considers the new error class of functional deficiencies.
https://www.iso.org/standard/70939.html
- ISO/IEC 20546
This standard defines basic terminology for Big Data. The terms Artificial Intelligence and Machine Learning, however, are not mentioned in this document.
https://www.iso.org/standard/68305.html
- ISO/IEC TR 20547-2
The 20547 series is intended to establish a reference architecture for Big Data. In this second part, use cases are listed.
https://www.iso.org/standard/71276.html
- ISO/IEC TR 20547-5
The fifth part of the 20547 series provides an overview of standards that are relevant for Big Data, both existing standards and standards currently in development.
https://www.iso.org/standard/72826.html
Whitepapers, reports, and similar documents
The list below does not contain any standards, but we believe that the included documents reflect the generally accepted state of the art quite well. We will gladly add additional elements to this list. Please use the comment function below for this.
- High-Level Expert Group on AI (European Commission): Ethics Guidelines for Trustworthy AI
These guidelines set out a framework for achieving trustworthy AI. Three components are identified here: AI should be lawful, ethical, and robust. Under the aspect of robustness, safety is mentioned explicitly. The document contains an assessment list for trustworthy AI.
https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence
- Expert Report of the Data Ethics Commission
The Data Ethics Commission was commissioned by the German Federal Government with the development of ethical standards, guidelines, and concrete recommendations for action aimed at protecting the individual, preserving social coexistence, and safeguarding and promoting prosperity in the information age. This document summarizes the results. [only available in German]
http://s.fhg.de/mcz
- IEEE: Ethically Aligned Design
In this document, the IEEE summarizes its recommendations on how standards, certification, regulation, and legislation for the development of autonomous and intelligent systems should be designed so as to holistically benefit societal well-being.
https://ethicsinaction.ieee.org/
- SASWG: Safety Assurance Objectives for Autonomous Systems
Produced by a working group that emerged from the Safety-Critical Systems Club, this document lists safety assurance objectives for autonomous systems at different levels of abstraction.
https://scsc.uk/ga
- Safety First for Automated Driving
In this cross-industry whitepaper, Daimler, together with Aptiv, Audi, Baidu, BMW, Continental, Fiat Chrysler Automobiles, HERE, Infineon, Intel, and Volkswagen, examines the topic of safety for automated driving at SAE Levels 3 and 4. It also addresses the use of AI methods (Machine Learning) required for automated driving.
https://newsroom.intel.com/wp-content/uploads/sites/11/2019/07/Intel-Safety-First-for-Automated-Driving.pdf
- Mind the gaps: Assuring the safety of autonomous systems from an engineering, ethical, and legal perspective
We have included this publication because it offers a good overview of technical, ethical, and legal safety-related issues and their interfaces.
https://www.sciencedirect.com/science/article/abs/pii/S0004370219301109?dgcid=rss_sd_all
- Considerations in Assuring Safety of Increasingly Autonomous Systems
We have included this technical report from NASA because it summarizes what needs to be considered when technical systems take over safety-critical tasks that were previously performed by humans using their “intelligence”.
https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20180006312.pdf
Ongoing Initiatives in Research and Standardization
Many organizations are undertaking activities to further develop the state of the art in science and technology, or to document this state of the art in standardization projects. We will gladly add further elements to this list. Simply use the comment function below for this purpose.
Standardization Initiatives
- DIN.ONE platform and the German Standardization Roadmap on AI
DIN and DKE are collaborating with the German Federal Government and representatives from industry, research, and civil society to draw up a standardization roadmap on Artificial Intelligence. This also includes standardization with regard to safety.
https://din.one/pages/viewpage.action?pageId=33620030
- Standardization Council Industrie 4.0
The Standardization Council's task is to coordinate standardization and regulation work in the field of Industrie 4.0 in Germany and beyond. It represents the interests of industry in national, European, and international standardization in the context of the digitalization of industry and actively promotes international cooperation. Fraunhofer IESE and SICK participate in the working group »Safe Trustworthy AI Systems«. Fraunhofer IESE is additionally involved in the working groups »Human and AI« and »Data Modelling and Semantics«.
- ISO/IEC JTC 1/SC42 WG1
Working group 1 of the SC42 is concerned with the fundamentals of AI standardization, such as terminology, concepts, and frameworks.
https://www.iso.org/committee/6794475.html
- ISO/IEC JTC 1/SC42 WG2
Working group 2 of the SC42 emerged from a formerly independent SC on the topic of “Big Data”. Here, topics concerning data and data quality continue to be worked on.
https://www.iso.org/committee/6794475.html
- ISO/IEC JTC 1/SC42 WG3
The focal topic of working group 3 of the SC42 is trustworthiness. Here, standards on risk management, on robustness of neural networks, as well as on ethical and social topics related to AI are being prepared.
https://www.iso.org/committee/6794475.html
- ISO/IEC JTC 1/SC42 WG4
Working group 4 of the SC42 collects use cases related to AI.
https://www.iso.org/committee/6794475.html
- ISO/IEC JTC 1/SC42 WG5
Working group 5 of the SC42 is the newest group in the committee and has the mandate to deal with computational approaches to and characteristics of AI.
https://www.iso.org/committee/6794475.html
- DKE AK 801.0.8
In this working group of DKE, in which both Fraunhofer IESE and SICK are active, an application rule for the development of autonomous/cognitive systems is currently being drawn up. The focus in this application rule is on the execution of a trustworthiness analysis and the establishment of a trustworthiness assurance case. It is planned to publish a first version of this application rule in 2020.
https://www.dke.de/de/news/2019/referenzmodell-vertrauenswuerdige-ki-vde-anwendungsregel
- DIN SPECs on AI
Using the accelerated procedure of a SPEC, DIN is currently making efforts to publish additional standards on the topic of AI. An overview includes those SPECs for which freely downloadable drafts exist, such as DIN SPEC 13266 »Guideline for the development of deep learning image recognition systems« [in German] and DIN SPEC 92001-1 »Artificial Intelligence – Life Cycle Processes and Quality Requirements – Part 1: Quality Meta Model«.
- IEEE 2846 WG
This WG is working on »A Formal Model for Safety Considerations in Automated Vehicle Decision Making«. The purpose of this standard is to define a parameterized formal model for automated vehicle decision making that enables industry and government alike to align on a common definition of what it means for an automated vehicle to drive safely while balancing safety and practicability (a minimal sketch of what such a parameterized model can look like follows after this list).
- FG-AI4AD
The FG-AI4AD supports standardization activities for services and applications enabled by AI systems in autonomous and assisted driving. The FG aims to create international harmonization on the definition of a minimal performance threshold for these AI systems (such as AI as a Driver).
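To illustrate what a parameterized formal model for safe driving behavior can look like, the following sketch computes a minimum safe longitudinal following distance in the spirit of Responsibility-Sensitive Safety (RSS), one published model of this kind. This is our own minimal illustration, not code from IEEE 2846 or any standard; all parameter names and values are assumptions chosen for the example.

```python
# Minimal sketch of a parameterized safe-distance model (RSS-style).
# Not taken from IEEE 2846; parameters and values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SafetyParameters:
    response_time: float  # assumed reaction time of the rear vehicle [s]
    max_accel: float      # worst-case acceleration during the reaction time [m/s^2]
    min_brake: float      # guaranteed minimum braking of the rear vehicle [m/s^2]
    max_brake: float      # maximum braking assumed for the front vehicle [m/s^2]

def min_safe_longitudinal_distance(v_rear: float, v_front: float,
                                   p: SafetyParameters) -> float:
    """Distance the rear vehicle must keep so that it can always come to a
    stop in time, even if the front vehicle brakes at full force."""
    # Distance covered during the response time, assuming worst-case acceleration.
    v_after_response = v_rear + p.response_time * p.max_accel
    d_response = v_rear * p.response_time + 0.5 * p.max_accel * p.response_time ** 2
    # Braking distance of the rear vehicle at its guaranteed minimum braking.
    d_rear_brake = v_after_response ** 2 / (2 * p.min_brake)
    # Braking distance of the front vehicle at its assumed maximum braking.
    d_front_brake = v_front ** 2 / (2 * p.max_brake)
    return max(0.0, d_response + d_rear_brake - d_front_brake)

# Example: both vehicles at 20 m/s (72 km/h) with assumed parameter values.
params = SafetyParameters(response_time=0.5, max_accel=2.0,
                          min_brake=4.0, max_brake=8.0)
print(min_safe_longitudinal_distance(20.0, 20.0, params))  # ~40.4 m
```

The value of such a model lies in making the parameters (response time, braking capabilities) explicit, so that industry and regulators can agree on them, while the formula itself turns »driving safely« into a checkable condition.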
Further Initiatives
- Assuring Autonomy International Program
This initiative led by the University of York is explicitly concerned with the assurance and regulation of robotics and autonomous systems. Currently, a freely accessible Body of Knowledge is being built up here, which is to become the reference source on this topic in the future. The “Assuring Autonomy” program is thematically very close to the Autonomous Systems program of Fraunhofer IESE. In order to create synergies, a strategic collaboration is currently being prepared.
https://www.york.ac.uk/assuring-autonomy/
- Safety-Critical Systems Club: Group Autonomous Systems
Fraunhofer IESE is a member of the working group Autonomous Systems of the Safety-Critical Systems Club. The group aims to produce clear guidance on how autonomous systems and autonomy technologies should be managed in a safety-related context, in a way that reflects emerging best practice.
https://scsc.uk/ga
- Partnership on AI
A US-led initiative that identifies the use of AI in safety-critical applications as the first of its thematic pillars. The partners comprise more than 90 organizations, including Amazon, Apple, Facebook, Google, and Microsoft. Companies from traditional safety-critical domains are not represented to date.
https://www.partnershiponai.org/
- The Autonomous
The Autonomous is an open platform that brings together the autonomous mobility ecosystem to align on relevant safety subjects. Besides an annual event in Vienna, The Autonomous hosts chapter events and workshops throughout the year to work on global reference solutions for safety from the standpoints of architecture, security, AI, and regulation.
Looking for more information and input on that topic?
If you want to learn more about the state of the art and the challenges of using AI safely in autonomous systems, please feel free to attend our 4-day seminar to become a certified »Data Scientist Specialized in Assuring Safety«. The seminar also provides an up-to-date insight into the state of standardization.
Please also read the Fraunhofer IESE Blog post about the definition of autonomous systems:
Autonomous or merely highly automated: what is actually the difference?