The Explainable AI (XAI) group at LIACS studies intelligent systems operating in complex, partially known, and opaque environments. Across predictive maintenance, black-box optimisation, multi-objective modelling, automated algorithm design, and generative AI, we pursue one central goal: making complex systems understandable and reliable enough to trust.

[Banner: Researching the intersection of Evolutionary Computation, XAI, and Automated Algorithm Design.]

Our mission

In the fast-paced world of advanced technology, where complex algorithms and models make decisions that affect our lives on a daily basis, it is imperative to understand and trust the reasoning behind these automated systems. We therefore develop and analyse methods that help people make better decisions when structure is hidden, data is imperfect, and stakes are high. Our work targets real-world domains such as engineering, infrastructure, sustainability, and AI-driven design.

We aim to deliver models and tools that are not only powerful, but also:

  • explainable,
  • reproducible,
  • and scientifically justified.

Our vision

Over the coming years, we aim to build a coherent research framework connecting explainable AI, real-world-inspired benchmarking, class-specific optimisation, and generative methods, in order to advance the state of the art in AI and solve real-world problems effectively and efficiently.

Alongside scientific progress, we are building a research culture where people can do ambitious work without sacrificing wellbeing or psychological safety. We believe strong science and a healthy team are inseparable.

Why this matters

Without this direction, AI risks becoming opaque and brittle: systems that look impressive but fail under real-world conditions. At the same time, research teams risk becoming high-pressure environments where people feel isolated or unsafe to speak openly.

We see both as unacceptable. A group committed to explainability must also foster openness, honesty, and trust in how it works together.

Group lead and collaborations

The group is led by Dr. Niki van Stein and is part of the Natural Computing Cluster at LIACS, Leiden University. We work closely with academic and industry partners to ensure that our methods are both scientifically rigorous and practically relevant.

Project websites

See the following websites for additional information about Niki van Stein's research projects.

People involved:

Qi Huang
Sofoklis Kitharidis
Bernd Wagner
Christiaan Lamers
Kirill Antonov
Alexander Zeiser
Jiahuizi Luo
Iryna Kovalchyk
Ananta Shahane
Tobias Preintner
Farrukh Baratov
Haoran Yin
Furong Ye