Artificial Intelligence is reshaping every aspect of business, the economy, and society, transforming experiences and relationships among stakeholders and citizens.
However, the use of AI also poses considerable risks and challenges, raising concerns about whether AI systems are worthy of human trust.
Due to the far-reaching socio-technical consequences of AI, organizations and government bodies have already started implementing frameworks and legislation for enforcing trustworthy AI, such as:
- the European Union's AI Act, the first-ever legal framework on AI, which addresses the risks of AI, and
- the U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, with directives to enhance safety, privacy, equity, and competition.
AI trust has also been identified by analyst firms such as Gartner as one of the top strategic technology trends.
Our Mission
Our mission is to conduct technically innovative research that helps produce human-centered, trustworthy artificial intelligence.
Our lab was founded in response to a surge of both academic and industrial interest in the fairness, transparency, privacy, robustness, and interpretability of AI systems. In our research we aim to strengthen human trust in Artificial Intelligence. To establish that trust, an AI application must be verifiably designed to function in a manner that mitigates bias, ensures transparency, security, and data sovereignty, prevents discrimination, and promotes societal and environmental well-being.
We share the Mozilla Foundation's perspective that we need to move towards a world of AI that is helpful, rather than harmful, to human beings. This means that (i) human agency is a core part of how AI is built, and (ii) corporate accountability for AI is real and enforced.
Our aim is to help develop AI technology that is built and deployed in ways that support accountability and agency, and advance individual and collective well-being.
Our Approach
We focus on the issue of “trust” in AI systems and aim to design and develop methods, tools, and best practices for evaluating, designing, and building machine intelligence that comes with guarantees of trustworthiness.
We develop solutions that support and advance trustworthy AI in areas like:
- smart manufacturing systems,
- formal and informal education,
- innovative healthcare,
- green mobility,
- circular industry and
- intelligent public administration.
We develop methods, tools and operational processes for the identification and mitigation of the negative impacts of AI applications and the causes of those impacts.
Our work:
- considers all stages of the AI system lifecycle,
- incorporates the human before, on, and over the loop,
- is highly affected by the application domain, and
- accounts for all stakeholders who contribute to, or are affected by, the development.
In our work we take into account international approaches and standards like:
- the Assessment List for Trustworthy Artificial Intelligence (ALTAI), developed by the High-Level Expert Group on Artificial Intelligence set up by the European Commission to help assess whether an AI system complies with the seven requirements of Trustworthy AI;
- the NIST AI Risk Management Framework (AI RMF) of the US National Institute of Standards and Technology, which is intended to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems;
- ISO/IEC 42001:2023, an international standard that aims to ensure that AI systems are developed, deployed, and managed in a responsible, transparent, secure, and accountable manner; and
- ISO/IEC TR 24028:2020, a technical report on trustworthiness in artificial intelligence.
We have adopted a risk-based approach and work on all four phases of the AI Trustworthiness lifecycle:
Identify; Assess; Explore; and Enhance.
- Identify: identify and record possible limitations and vulnerabilities based on the agreed requirements and goals of the system;
- Assess: analyze and quantify the risks and the impact for each trustworthiness aspect;
- Explore: explore methods that enforce or augment trustworthiness and mitigate the identified issues; and
- Enhance: based on the previous stages, select optimal solutions that can be applied to the AI system and enhance its trustworthiness.
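To make the four phases more concrete, here is a minimal, purely illustrative Python sketch of a single risk-register entry moving through Identify, Assess, Explore, and Enhance. All names (Phase, TrustworthinessRisk, the example scores and mitigations) are hypothetical assumptions for illustration and do not correspond to our tooling or to any of the standards listed above.

```python
# Hypothetical sketch of the Identify / Assess / Explore / Enhance lifecycle
# for one trustworthiness risk. Names and values are illustrative only.
from dataclasses import dataclass, field
from enum import Enum, auto


class Phase(Enum):
    IDENTIFY = auto()   # record limitations and vulnerabilities
    ASSESS = auto()     # analyze and quantify risk and impact
    EXPLORE = auto()    # collect candidate mitigation methods
    ENHANCE = auto()    # select and apply the best-fitting solution


@dataclass
class TrustworthinessRisk:
    aspect: str                                   # e.g. "fairness", "robustness"
    description: str                              # the identified limitation
    phase: Phase = Phase.IDENTIFY
    impact_score: float | None = None             # filled in during ASSESS
    candidate_mitigations: list[str] = field(default_factory=list)
    chosen_mitigation: str | None = None          # fixed during ENHANCE

    def assess(self, impact_score: float) -> None:
        self.impact_score = impact_score
        self.phase = Phase.ASSESS

    def explore(self, mitigations: list[str]) -> None:
        self.candidate_mitigations = mitigations
        self.phase = Phase.EXPLORE

    def enhance(self, chosen: str) -> None:
        self.chosen_mitigation = chosen
        self.phase = Phase.ENHANCE


# Example walk through all four phases for one fairness risk.
risk = TrustworthinessRisk(
    aspect="fairness",
    description="Model underperforms for an under-represented user group",
)
risk.assess(impact_score=0.8)
risk.explore(["reweighing training data", "post-processing decision thresholds"])
risk.enhance("reweighing training data")
print(risk.phase, risk.chosen_mitigation)
```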