Explainable AI Research Group
The XAI (Explainable Artificial Intelligence) research group at LIACS, Leiden University, focuses on making AI and Evolutionary Computing (EC) systems more understandable and interpretable. Led by Dr. Niki van Stein, the team develops methods and techniques for explaining AI decision-making processes, aiming to enhance transparency and trust in AI technologies. Its work spans various scientific domains and industry applications, such as predictive maintenance, the analysis of heuristic optimization algorithms, and the development of novel explainable AI methods. This interdisciplinary effort involves collaboration with experts in machine learning, optimization, and specific application domains to develop explainable systems that are both effective and user-friendly.
Research projects:
GenAIDE
Exploring disruptive AI technologies and evolving computer-aided engineering.
FIND
Large AI models for a resilient high-tech industry.
LLaMEA
Large Language Evolutionary Algorithm for the automatic design of algorithms.
AI for Oversight
AI for Oversight ICAI lab
XAIPre
eXplainable AI for Predictive Maintenance
Complex Lens Design
Optimization of Complex Lens Designs
CIMPLO
Cross-Industry Predictive Maintenance Optimization Platform
People involved:

Qi Huang
Sofoklis Kitharidis
Bernd Wagner
Christiaan Lamers
Kirill Antonov
Alexander Zeiser
Jiahuizi Luo
Iryna Kovalchyk
Ananta Shahane
Tobias Preintner
Farrukh Baratov
Haoran Yin