AI Trust Lens

Visual Metrics for Trustworthy AI

Project Team

Prof. Dr.-Ing. Armin Grasnick, Augmented and Virtual Reality (IU International University of Applied Sciences)

Prof. Dr. Anne Schwerk, Artificial Intelligence (IU International University of Applied Sciences)

Project Summary

The ‘AI Trust Lens’ project aims to develop a visual analysis tool that works like a camera lens for examining the trustworthiness of AI systems. Based on the extended Z-Inspection® framework and its nine dimensions, the AI Trust Lens enables users to focus on individual aspects of an AI system, zoom in on them, and view them from different perspectives.

The Lens Concept

The ‘AI Trust Lens’ works metaphorically like a virtual camera: different settings let users focus on individual aspects of an AI system, zoom in for detail, or view the system from different perspectives.

Project Goals

Methodology

Camera Design

Development of the visual metaphor of the ‘camera’ for the user interface and implementation of the various lens modes with corresponding visualizations.

Metrics Development

Development of specific metrics for all nine dimensions, with special attention to quantifying dimensions that resist simple measurement, such as ‘Democratic values’ and ‘Concentration of power’.

Data Visualization

Creation of dynamic, interactive visualizations for each camera mode, along with seamless transitions between modes for an uninterrupted analysis experience.
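As a sketch of one such visualization, the nine dimension scores could be laid out as a radar chart, with each score placed on an evenly spaced spoke. The geometry below is a pure-Python assumption about the layout; actual rendering (e.g. with a plotting library) is omitted:

```python
import math

def radar_vertices(scores: list[float]) -> list[tuple[float, float]]:
    """Place each score (0-1) at an evenly spaced angle around the origin."""
    n = len(scores)
    points = []
    for i, score in enumerate(scores):
        angle = 2 * math.pi * i / n - math.pi / 2  # offset so the first spoke is vertical
        points.append((score * math.cos(angle), score * math.sin(angle)))
    return points

# Nine spokes, one per dimension of the extended Z-Inspection® framework.
vertices = radar_vertices([0.7] * 9)
print(len(vertices))  # 9
```

Animating the polygon's vertices as the underlying scores change would give the smooth transitions between camera modes described above.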

AI Integration

Implementation of trustworthy AI algorithms that analyze input data and generate recommendations, including an ‘auto-focus’ feature that automatically highlights critical areas needing attention.
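The ‘auto-focus’ idea can be sketched as a simple filter over per-dimension scores: dimensions that fall below a quality threshold are surfaced first. The threshold and the example scores are assumptions for illustration:

```python
def auto_focus(scores: dict[str, float], threshold: float = 0.5) -> list[str]:
    """Return the dimensions scoring below the threshold, worst first."""
    flagged = [name for name, score in scores.items() if score < threshold]
    return sorted(flagged, key=lambda name: scores[name])

scores = {
    "Transparency": 0.8,
    "Concentration of power": 0.3,
    "Democratic values": 0.45,
}
print(auto_focus(scores))  # ['Concentration of power', 'Democratic values']
```

In the interface, the flagged dimensions would be the ones the virtual camera snaps to first, mirroring how a real camera's autofocus locks onto the most salient subject.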

User Interface

Design of an intuitive interface with a camera-analogous layout, including integrated help functions and context information that guide users through complex concepts.

Expected Results

The AI Trust Lens aims to be an innovative, camera-inspired tool for comprehensively analyzing the trustworthiness of AI systems. It makes complex ethical and social aspects of AI more accessible and comprehensible, supports decision-makers with intuitive visual analyses and AI-assisted recommendations, and thereby promotes a holistic, multidimensional approach to AI ethics.

Outlook

In the future, the AI Trust Lens has the potential to improve the way we evaluate and understand AI systems. By combining the comprehensive Z-Inspection® framework with an intuitive visual metaphor, it allows users to dive deep into AI system complexities while maintaining an overview. This approach could set a standard for future AI assessment tools and contribute significantly to the development of trustworthy AI systems. The AI Trust Lens could become an indispensable tool for developers, auditors, and regulators to understand and optimize the ethical and societal impact of AI systems.
