The use of Artificial Intelligence is reshaping every aspect of business, the economy, and society, transforming experiences and relationships among stakeholders and citizens.

However, the use of AI also poses considerable risks and challenges, raising concerns about whether AI systems are worthy of human trust.

Due to the far-reaching socio-technical consequences of AI, organizations and government bodies have already begun implementing frameworks and legislation to enforce trustworthy AI.

AI trust has also been identified by firms such as Gartner as one of the top Strategic Technology Trends.

Our Mission

Our mission is to conduct technically innovative research that helps produce human-centered, trustworthy artificial intelligence.

Our lab was founded in response to a surge of both academic and industrial interest in the fairness, transparency, privacy, robustness, and interpretability of AI systems. In our research we aim to strengthen human trust in Artificial Intelligence. To establish that trust, an AI application must be verifiably designed to mitigate bias, ensure transparency, security, and data sovereignty, prevent discrimination, and promote societal and environmental well-being.

We share the Mozilla Foundation's perspective that we need to move towards a world of AI that is helpful, rather than harmful, to human beings. This means that (i) human agency is a core part of how AI is built, and (ii) corporate accountability for AI is real and enforced.

Our aim is to help develop AI technology that is built and deployed in ways that support accountability and agency, and advance individual and collective well-being.

Our Approach

We focus on the issue of “trust” in AI systems, and we design and develop methods, tools, and best practices for evaluating, designing, and building machine intelligence that comes with guarantees of trustworthiness.

We develop solutions that support and advance trustworthy AI in areas like:

  • smart manufacturing systems,
  • formal and informal education,
  • innovative healthcare,
  • green mobility,
  • circular industry and
  • intelligent public administration.

We develop methods, tools and operational processes for the identification and mitigation of the negative impacts of AI applications and the causes of those impacts.

Our work aims to:

  • consider all stages of the AI system lifecycle,
  • incorporate the human before, on, and over the loop,
  • account for the specific demands of each application domain, and
  • account for all stakeholders who contribute to or are affected by the development.

In our work we take into account relevant international approaches and standards.

We have adopted a risk-based approach and work on all four phases of the AI Trustworthiness lifecycle:

  • Identify: record possible limitations and vulnerabilities based on the agreed requirements and goals of the system.
  • Assess: analyze and quantify the risks and their impact for each trustworthiness aspect.
  • Explore: explore solutions based on methods that enforce or augment trustworthiness and mitigate the identified issues.
  • Enhance: building on the previous stages, search for optimal solutions that can be applied to the AI system and enhance its trustworthiness.
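
As a rough illustration of how the four phases chain together, the sketch below walks a single, hypothetical fairness risk through the lifecycle. Every name (Risk, identify, assess, explore, enhance), mitigation, and number in it is an illustrative assumption, not part of our actual tooling.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Risk:
        aspect: str                     # e.g. "fairness", "robustness", "privacy"
        description: str
        severity: float = 0.0           # quantified in the Assess phase
        candidate_fixes: List[str] = field(default_factory=list)  # Explore phase
        applied_fix: str = ""           # chosen in the Enhance phase

    def identify(requirements: dict) -> List[Risk]:
        """Identify: record possible limitations against the agreed requirements."""
        return [Risk("fairness", "accuracy gap between demographic groups")]

    def assess(risk: Risk, measured_gap: float, tolerated_gap: float) -> Risk:
        """Assess: quantify the risk and its impact for one trustworthiness aspect."""
        risk.severity = max(0.0, measured_gap - tolerated_gap)
        return risk

    def explore(risk: Risk) -> Risk:
        """Explore: collect candidate mitigations for a quantified risk."""
        if risk.severity > 0:
            risk.candidate_fixes = ["reweight training data", "adjust decision threshold"]
        return risk

    def enhance(risk: Risk, choose: Callable[[List[str]], str]) -> Risk:
        """Enhance: select a mitigation to apply to the AI system."""
        if risk.candidate_fixes:
            risk.applied_fix = choose(risk.candidate_fixes)
        return risk

    # Walk one hypothetical fairness risk through the four phases.
    for r in identify({"accuracy_gap_tolerance": 0.05}):
        r = assess(r, measured_gap=0.12, tolerated_gap=0.05)
        r = explore(r)
        r = enhance(r, choose=lambda fixes: fixes[0])
        print(r)

In practice, each phase is backed by domain-specific methods and tools rather than the placeholder functions above; the sketch only shows how the outputs of one phase feed the next.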