Systems with software components based on Artificial Intelligence (AI) enable solutions that cannot currently be realized with traditional software. However, because their behavior is learned from data examples, it must be validated using statistical methods on test data. Depending on the area of use, incorrect behavior of an AI component can result in high financial costs or even put people at risk.
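To make the statistical nature of such checks concrete, the following minimal sketch (illustrative only, not part of the DAITA project) estimates the accuracy of a learned classifier from a finite test set together with a Wilson confidence interval; all data in it are hypothetical.

```python
# Illustrative sketch: because an AI component's behavior is learned rather than
# specified, a test run only yields a statistical estimate of its quality,
# here accuracy with a 95% Wilson score confidence interval.
import math
from typing import Sequence


def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion (z = 1.96)."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, center - half), min(1.0, center + half))


def evaluate(predictions: Sequence[int], labels: Sequence[int]) -> None:
    correct = sum(p == y for p, y in zip(predictions, labels))
    n = len(labels)
    lo, hi = wilson_interval(correct, n)
    print(f"accuracy = {correct / n:.3f}  (95% CI: [{lo:.3f}, {hi:.3f}], n = {n})")


if __name__ == "__main__":
    # Hypothetical predictions and labels; in practice these come from the AI
    # component and an independently labeled test set.
    evaluate([1, 0, 1, 1, 0, 1, 0, 0, 1, 1], [1, 0, 1, 0, 0, 1, 0, 1, 1, 1])
```

The width of the interval makes explicit how much the verdict depends on the size and quality of the test set, which is the motivation for the quality assurance of test data discussed below.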
To create the necessary acceptance for the use of AI, methods for safety assurance and certification are therefore required. Since the reliability of these methods depends on the quality of the test data, ensuring sufficient test data quality is a crucial prerequisite for a meaningful verification of dependability.
In the Software Campus project DAITA (Dependable AI Test dAta), Fraunhofer IESE identified core quality characteristics of test data and developed a framework for evaluating and improving the quality of this data. The results contribute to demonstrating the dependability of AI-based components and support their possible certification.
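The concrete quality characteristics and the DAITA framework itself are not detailed here, so the following sketch is only an assumption about the kind of checks such a framework might include (class balance, duplicate samples, train/test leakage); it is not the project's actual method.

```python
# Illustrative test-data quality checks (assumed examples, not the DAITA framework).
from collections import Counter
from typing import Hashable, Sequence


def class_balance(labels: Sequence[Hashable]) -> dict:
    """Share of each class in the test set; strong imbalance weakens coverage."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: n / total for cls, n in counts.items()}


def duplicate_ratio(samples: Sequence[Hashable]) -> float:
    """Fraction of test samples that are exact duplicates of another test sample."""
    return 1 - len(set(samples)) / len(samples) if samples else 0.0


def leakage_ratio(test_samples: Sequence[Hashable], train_samples: Sequence[Hashable]) -> float:
    """Fraction of test samples that also occur in the training data."""
    train = set(train_samples)
    return sum(s in train for s in test_samples) / len(test_samples) if test_samples else 0.0


if __name__ == "__main__":
    # Hypothetical toy data for demonstration.
    test_x = ["a", "b", "c", "c", "d"]
    test_y = [0, 1, 1, 1, 0]
    train_x = ["c", "e", "f"]
    print("class balance:   ", class_balance(test_y))
    print("duplicate ratio: ", duplicate_ratio(test_x))
    print("leakage ratio:   ", leakage_ratio(test_x, train_x))
```

Checks of this kind make test data deficiencies measurable and thus point to concrete improvements before the data is used to argue for the dependability of an AI-based component.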