The Start of CIMPLO
The project was conceived with the vision of optimizing maintenance strategies across various sectors, leveraging AI to predict and prevent potential failures before they occur and optimize the maintenance schedule accordingly. Our collaboration with industry and academic institutions laid the groundwork for a platform that promises enhanced operational efficiency in maintenance practices.
Key Takeaways
Interdisciplinary Collaboration: Working across industries taught us the importance of diverse perspectives in solving complex problems. The synergy between academic research and practical, industry-specific challenges propelled our project to new heights.
Advancements in AI: CIMPLO pushed us to explore the limits of current AI technologies in predictive maintenance, driving home the necessity for continuous innovation and adaptation in our algorithms and methodologies.
The Importance of Data: A critical lesson was the role of high-quality, relevant data in training our models. The project underscored the need for comprehensive data collection and management strategies to inform and refine AI predictions.
User-Centric Design: Engaging with end-users early and often was key. Their insights helped tailor the CIMPLO platform to meet real-world needs, ensuring its applicability and effectiveness across different sectors.
Looking Forward: XAIpre
As we close the chapter on CIMPLO, our sights are set on XAIpre (Explainable AI for Predictive Maintenance). Building on CIMPLO’s foundation, XAIpre aims to address the challenges of interpretability and trust in AI decisions, making the outcomes of predictive maintenance algorithms as transparent and understandable as possible.
Final Thoughts
The journey of CIMPLO was more than a project; it was a testament to the power of collaborative innovation and the potential of AI to transform industries. As we move forward, the lessons learned will serve as guiding principles for our future endeavors, driving us toward a future where AI and predictive maintenance work hand in hand, not just to predict the future but to intelligently shape it.
Read more about the CIMPLO project in this recent blog post by Commit2Data.
This year at Gecco I have the honor of presenting two posters.
One of them is about Deep-BIAS, an extension to our BIAS toolbox. See the full paper. For the other poster, see here.
Structural Bias occurs when a heuristic search algorithm is biased towards certain parts of the search space due to its algorithmic structure (and not due to the objective function it tries to optimize).
Detecting structural bias is important because it is generally unwanted behaviour in heuristic optimization algorithms: they should be able to find optima regardless of where those optima lie in the search space.
BIAS is a toolbox for finding structural bias in heuristic optimization algorithms. It provides feedback to the algorithm designer or practitioner about whether structural bias is present and, if so, what type of bias it is.
Deep-BIAS is an extension that introduces a deep-learning model to detect structural bias and predict its type, instead of relying on over 30 statistical tests. The deep-learning model proved to be on par with the statistical approach at detecting bias, and much better at classifying the type of bias.
We recommend that every optimization algorithm researcher and practitioner use the BIAS toolbox to verify that the algorithms they use or create do not suffer from structural bias. See our GitHub for setup instructions and a quick start.
This year at Gecco I have the honor of presenting two posters.
One of them is about our method DoE2Vec. Read the full paper. For the other poster, see this post.
DoE2Vec stands for "Design of Experiments to Vector". It is a deep-learning model that learns optimization landscape characteristics in order to use these features for downstream tasks such as algorithm selection and performance prediction.
The method uses a deep variational auto-encoder and is trained on 250,000 randomly generated function landscapes. It performs well in reconstructing the popular BBOB benchmark functions, and the features it learns internally can be successfully used in a classification task.
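The core idea, roughly, is to evaluate one shared design of experiments on every function and feed the normalized objective values to the auto-encoder as a fixed-length vector. The sketch below illustrates only that input representation; the design size, sampling scheme, and `doe_to_vector` helper are illustrative assumptions, not DoE2Vec's actual code.

```python
import numpy as np

# Hypothetical fixed Design of Experiments, shared by all landscapes
# (DoE2Vec uses its own sampling scheme; this is only a sketch).
rng = np.random.default_rng(42)
dim, n_samples = 2, 64
X = rng.random((n_samples, dim))  # design points in [0, 1]^dim

def doe_to_vector(f):
    """Evaluate the shared design on f and min-max normalize the
    objective values: a scale-invariant vector the auto-encoder could see."""
    y = np.apply_along_axis(f, 1, X)
    return (y - y.min()) / (y.max() - y.min())

# Example: the sphere function as a landscape
sphere = doe_to_vector(lambda x: np.sum(x ** 2))
print(sphere.shape)  # (64,)
```

Because the design is shared, two functions with similar landscapes produce similar vectors, which is what makes the representation useful for downstream classification.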
There are, however, limitations to this approach. The model aims to minimize the reconstruction loss, but this loss is not necessarily relevant for learning important landscape characteristics. We plan to improve the model's performance by developing a new loss function or by using a different architecture.
Structural Bias (SB) refers to a phenomenon observed in search heuristics, where certain algorithms used to solve optimization problems exhibit a preference or inclination toward specific regions within the search space. In simpler terms, these algorithms tend to focus on particular areas while neglecting others, independent of the problem. Detecting and understanding structural bias is essential because it can significantly impact the effectiveness and fairness of algorithmic comparison.
Watch the video of our Hot-off-the-Press talk at Gecco 2023.
Detecting structural bias in optimization heuristics is important because it allows us to identify whether an algorithm is biased toward specific regions during the search process. This bias could stem from various factors, such as specific algorithm components or hyper-parameter settings, or from the algorithm's overall design. For example, if an algorithm consistently favours solutions located in the centre of the search space, it may overlook potentially better solutions elsewhere. This can result in suboptimal or incomplete outcomes in optimization problems or decision-making systems.
If we detect or suspect that a heuristic search algorithm shows structurally biased behaviour, the first step is to verify and inspect the type of structural bias. One way to do this is visual inspection of the algorithm's behaviour in the search space. By running the optimizer on a special function called f0, which returns a uniform random number between 0 and 1, we can de-couple the algorithm's behaviour from the objective function. A better and more robust way to verify whether the algorithm exhibits structurally biased behaviour is to use the statistical tests and deep-learning framework of our BIAS Toolbox, which can verify the existence, strength and type of structural bias.

Knowing the type can give insights into the cause of the bias. For example, boundary bias (exploring the boundaries more than the rest of the search space) can be caused by a biased boundary-correction mechanism. Once the type of structural bias is known, we can perform ablation studies that vary the hyper-parameters and modules of an algorithm, verify each variant with the BIAS Toolbox, and observe correlations between the hyper-parameters and the presence of structural bias. Using this information we can pick hyper-parameter settings that do not suffer from structural bias, or improve the offending algorithmic components.
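To make the role of f0 concrete: because the objective ignores the candidate solution entirely, no region of the search space is better than any other, so any non-uniformity in an optimizer's final positions must come from the algorithm itself. The stand-in below is only illustrative; the BIAS package ships its own f0.

```python
import numpy as np

rng = np.random.default_rng(0)

def f0(x):
    """Illustrative stand-in for BIAS's f0: the returned value ignores x,
    so the objective gives the optimizer no reason to prefer any region."""
    return rng.uniform(0.0, 1.0)

# On f0 the final positions of an unbiased optimizer should look uniform.
# A pure random search trivially satisfies this, serving as the baseline:
final_positions = rng.uniform(0.0, 1.0, size=(100, 5))
print(final_positions.mean())  # close to 0.5, with no region favoured
```

Any systematic deviation from uniformity in these final positions (e.g. clustering at the centre or on the boundaries) is exactly what the BIAS Toolbox's tests are designed to detect.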
Structural bias can affect the performance of an algorithm on a specific problem, or even on a benchmark set, both positively and negatively: it depends on where the local and global optima lie and whether the bias steers the algorithm toward or away from them. In general, for a set of functions where the global optimum can be anywhere in the search space, structural bias is always a disadvantage. The strength of the bias (how strongly the algorithm is attracted to certain parts of the search space) determines how much the performance is affected. If the strength of the bias is low compared to the (wanted) bias induced by the objective function, the performance might not be affected at all.
When developing algorithms, it is important to understand what structural bias is and how it can affect the performance and evaluation of your algorithms. Be wary of components that steer the search toward particular regions of the search space independently of the objective function. For example, a mutation operator in an evolutionary algorithm that is more likely to increase a dimension's value than to decrease it, or a boundary correction that places an individual exactly on the boundary whenever it oversteps it, are both clear cases of structurally biased components. It is not always easy, however, to detect which parts of an algorithm can induce bias, so it is always advisable to inspect your algorithms using the BIAS Toolbox.
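The boundary-correction example above can be sketched in a few lines: clipping an out-of-bounds individual onto the boundary piles probability mass exactly on the edges (a source of boundary bias), whereas resampling it uniformly inside the domain does not. The correction functions and the mutation model below are hypothetical illustrations, not code from the toolbox.

```python
import numpy as np

rng = np.random.default_rng(1)

def correct_clip(x):
    """Biased correction: anything outside [0, 1] lands exactly on the boundary."""
    return np.clip(x, 0.0, 1.0)

def correct_resample(x):
    """Unbiased correction: out-of-bounds values are redrawn uniformly in [0, 1]."""
    out = (x < 0.0) | (x > 1.0)
    x = x.copy()
    x[out] = rng.uniform(0.0, 1.0, size=out.sum())
    return x

# Simulate mutation steps that frequently overshoot the [0, 1] domain
raw = rng.normal(loc=0.5, scale=0.6, size=10_000)

clipped = correct_clip(raw)
resampled = correct_resample(raw)

# Fraction of points sitting exactly on a boundary after correction
print(np.isin(clipped, [0.0, 1.0]).mean())    # large: mass piled on the edges
print(np.isin(resampled, [0.0, 1.0]).mean())  # essentially zero
```

The clipping variant concentrates a large fraction of the population on the boundaries regardless of the objective function, which is precisely the kind of behaviour the BIAS Toolbox flags as boundary bias.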
Installing and using the BIAS toolbox is super easy. All you need is Python 3 and R installed on your system. Once R and Python are properly installed, you can install the BIAS toolbox using pip and test your algorithm using the example below.
pip install struct-bias
# Example of using the BIAS toolbox to test a Differential Evolution algorithm
from scipy.optimize import differential_evolution
import numpy as np
from BIAS import BIAS, f0, install_r_packages

# Run once to install the required R packages
install_r_packages()

# In this example we use a 5-dimensional search space.
# We recommend running the test with at least 2 dimensions.
dim = 5
bounds = [(0, 1)] * dim

# Perform 30 independent runs (5 dimensions)
samples = []
print("Performing optimization method 30 times on f0 as objective.")
for i in np.arange(30):
    result = differential_evolution(f0, bounds, maxiter=1000)
    samples.append(result.x)

samples = np.array(samples)
test = BIAS()
print(test.predict(samples, show_figure=True))

# Use the deep-learning module of the toolbox to verify the results
# and predict the type of SB
y, preds = test.predict_deep(samples)
test.explain(samples, preds, filename="explanation-de.png")
We advise using a large enough evaluation budget for your algorithm. The tests in our paper use 10,000 evaluations; however, 1,000 evaluations are generally enough to discover any structural bias. The number of runs should be 30, 50, 100 or 600; we advise using 100 independent runs.
See our GitHub repository for more details.
Using the BIAS toolbox we have tested over 3000 popular (and less popular) optimization heuristics and their variants.
For an in-depth analysis of different DE variants, see our paper Emergence of structural bias in differential evolution.
For additional results, including CMA-ES and other algorithms, see our BIAS Toolbox paper: BIAS: A Toolbox for Benchmarking Structural Bias in the Continuous Domain.
See also our blog post about Deep-BIAS, the deep-learning extension of our Structural BIAS toolbox.