January 2024–December 2028

Research project on trustworthy AI: "FAITH: Fostering Artificial Intelligence Trust for Humans towards the optimization of trustworthiness through large-scale pilots in critical domains".

FAITH aims to provide practitioners and stakeholders of AI systems not only with a comprehensive analysis of the foundations of AI trustworthiness, but also with an operational playbook for assessing and building trustworthy AI systems and for continuously measuring their trustworthiness.

FAITH adopts a human-centric trustworthiness assessment framework (FAITH AI_TAF) that enables the testing, measurement, and optimization of risks associated with AI trustworthiness in critical domains. FAITH AI_TAF builds upon the NIST Artificial Intelligence Risk Management Framework (AI RMF), the requirements imposed by EU legislative instruments, the ENISA guidelines on achieving trustworthiness by design, and stakeholders' intelligence and user engagement. Seven (7) large-scale pilot activities in seven (7) critical and diverse domains (robotics, education, media, transportation, healthcare, active ageing, and industry) will validate the FAITH holistic estimation of the trustworthiness of selected sectoral AI systems. To this end, the proposed framework will be validated across two large-scale piloting iterations/phases focusing on assessing (i) generic threats to trustworthiness and (ii) domain-specific threats and risks to trustworthiness. In addition, FAITH AI_TAF will be used to identify potential associations (in the context of cross-fertilisation) among the domains, towards the development of a domain-independent, human-centric, risk-management-driven framework for AI trustworthiness evaluation.

To successfully realize the project's vision, FAITH has:

  • assembled a group of highly innovative organisations representing industry, SMEs, research, and academia;
  • proposed FAITH AI_TAF, a robust assessment framework;
  • defined seven (7) large-scale pilots (LSPs) to validate the FAITH AI_TAF in diverse application domains, ranging from healthcare to education and from media to critical infrastructure management; and
  • defined a concrete project plan that will allow the consortium to successfully perform the large-scale pilots by taking a holistic view of the trustworthiness of AI systems across all stages of their lifecycle.