Affiliation to Z-Inspection® for Trustworthy AI
4 June 2024

The Information Management Unit is active in trustworthy AI research. We are affiliated with the Z-Inspection® Initiative and aim to apply the Z-Inspection® process to the assessment of the trustworthiness of real-life AI systems and applications.
Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It is based on the Ethics Guidelines for Trustworthy AI of the European Union's High-Level Expert Group on AI (EU HLEG).
The Z-Inspection® process is distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license.
Z-Inspection® is listed in the new OECD Catalogue of AI Tools & Metrics.
In our trustworthy AI research we focus on designing and developing human-centric, trustworthy AI technologies. Our work involves considering transparency, fairness, privacy, and accountability in AI development and deployment. We work on the computational methods, formalisms, and socio-technical practices needed to effectively audit, verify, and validate AI-based intelligent systems while ensuring their trustworthiness. We aim to:
- address all stages of the AI system lifecycle;
- incorporate the human before, on, and over the loop; and
- explicitly take into account the business context of the AI system.
Recent News
7 April 2025
New journal paper on Trustworthy AI
Our latest research publication, “Trustworthiness Optimisation Process: A Methodology [...]
19 June 2024
Two new journal papers published on AI topics
Two new papers by IMU researchers on Trustworthy AI [...]
4 June 2024
Presentation at Trustworthy AI conference
At the 4th conference of the TAILOR Network of Excellence [...]