
SAFEXPLAIN is developing an open software platform designed to make Artificial Intelligence (AI) explainable and compliant with safety standards for use in safety-critical systems. The project addresses the challenge of integrating Deep Learning (DL) into sectors such as automotive, rail, and space, where the opacity of traditional "black-box" AI conflicts with stringent safety requirements. SAFEXPLAIN aims to achieve certifiable AI by architecting transparent DL solutions with built-in explainability and traceability, and by devising safety patterns for different levels of DL usage. The initiative seeks to enable European industry to adopt DL functionalities in critical autonomous systems, ensuring both trustworthiness and competitiveness while potentially reducing CO2 emissions.
