Evolutionary Intelligence Seminar

Upcoming


28 November
Alexander Chebykin & Evi Sijben

Past

17 October
Sarah L. Thomson
Title: Bridging Explainable AI and Fitness Landscapes

Abstract: Fitness landscapes are typically used to gain insight into algorithm search dynamics on optimisation problems; as such, it could be said that they explain algorithms and that they are a natural bridge between explainable AI (XAI) and evolutionary computation. Despite this, there is very little existing literature which utilises landscapes for XAI, or which applies XAI techniques to landscape analysis. In this talk, a recent paper on landscape analysis for neural architecture search is presented. This work considers the problem of channel configuration and leverages a tool called local optima networks to better understand the optimisation. The talk will conclude with suggestions for, and discussion of, possible future avenues at the intersection of explainable AI and fitness landscapes.
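
For readers new to the tool mentioned above, the sketch below illustrates how a local optima network can be sampled: nodes are local optima reached by hill-climbing, and weighted edges record which optimum a small perturbation escapes to. The toy bit-string landscape, operators, and parameters are placeholders, not the channel-configuration setup studied in the paper.

```python
"""Minimal local optima network (LON) sketch on a toy rugged bit-string
landscape; the NAS channel-configuration problem from the talk is not used."""
import random
from collections import defaultdict

random.seed(0)
N = 12
# A rugged toy fitness: per-bit weights plus random pairwise interactions.
w = [random.uniform(-1, 1) for _ in range(N)]
pairs = [(random.randrange(N), random.randrange(N), random.uniform(-1, 1))
         for _ in range(2 * N)]

def fitness(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + \
           sum(c * x[i] * x[j] for i, j, c in pairs)

def hill_climb(x):
    """Best-improvement local search over single bit flips."""
    x = list(x)
    improved = True
    while improved:
        improved = False
        best_f, best_i = fitness(x), None
        for i in range(N):
            x[i] ^= 1
            f = fitness(x)
            if f > best_f:
                best_f, best_i = f, i
            x[i] ^= 1
        if best_i is not None:
            x[best_i] ^= 1
            improved = True
    return tuple(x)

# Nodes: local optima. Edges: which optimum a small "escape" perturbation leads to.
edges = defaultdict(int)
optima = set()
current = hill_climb([random.randint(0, 1) for _ in range(N)])
optima.add(current)
for _ in range(500):
    perturbed = list(current)
    for i in random.sample(range(N), 3):       # flip a few bits to escape
        perturbed[i] ^= 1
    nxt = hill_climb(perturbed)
    optima.add(nxt)
    edges[(current, nxt)] += 1
    current = nxt

print(f"{len(optima)} local optima, {len(edges)} weighted escape edges")
```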

Biography: Dr Sarah L. Thomson is a lecturer at Edinburgh Napier University, Scotland. Her PhD was in the fitness landscapes of evolutionary computation (EC). Since then, she has applied EC to problems in healthcare, aviation, agriculture, and logistics. She still has a passion for fundamental research and has continued to work in fitness landscapes since her PhD. More recently, she has branched into explainable AI and neural architecture search, and is particularly interested in combining evolutionary computing with machine learning and XAI.
ORCID: https://orcid.org/0000-0001-6971-7817
LinkedIn: www.linkedin.com/in/sarah-l-thomson-21a156135
Twitter/X: www.twitter.com/silverhaxt


22 August
Marcus Gallagher
Title: What was the question again? A reflection on benchmarking and research methods in numerical black-box optimisation

Abstract: In the last 40 or so years, a huge amount of research has explored solving continuous optimisation problems numerically via algorithms and computers. In this talk, I will take a step back and consider the general aims, research questions, and methods in this area and how current research aligns with these general directions. This leads naturally into a discussion of several aspects of algorithmic benchmarking, which has become a significant focus in some of the recent literature. I will offer some suggestions for significant research opportunities and hopefully open up some points of discussion regarding optimisation research, both the good and the bad.

Biography: Marcus Gallagher is an Associate Professor in the School of Electrical Engineering and Computer Science at the University of Queensland, Australia. His research interests include evolutionary computation and metaheuristic optimisation, machine learning, exploratory landscape analysis and benchmarking algorithms.

8 August
Tobias Moxter
Title: Semantic Representations in Genetic Programming for Symbolic Regression: Explicit and Model-based Perspectives on Locality


Abstract: Machine learning is rapidly changing the world in exciting and formidable ways. Its unprecedented ability to capture complex, non-linear patterns has made it a powerful tool, enabling profound scientific and technological breakthroughs. Yet, against the backdrop of numerous beneficial instances, we are observing many erroneous and, unfortunately, harmful applications. Perhaps the most substantial driver for the repeated occurrence of "AI failures" is our limited ability to understand the rules discovered by machine learning algorithms, particularly by neural networks and deep learning. This thesis contributes to genetic programming (GP), a meta-heuristic addressing the need for explanations by evolving human-readable instructions to form accurate yet comprehensible programs. Since GP generates programs using the framework of evolutionary algorithms, it relies on intelligent, cost-effective variation operators. A research direction yielding promising results focuses increasingly on leveraging semantics, i.e., how programs behave, rather than solely syntax, i.e., how programs are written. Locality, loosely understood as the correspondence of neighborhood structure between syntax and semantics, is widely expected to yield significant benefits for GP. This thesis contributes to research on GP for symbolic regression (GPSR) by introducing an explicit semantic program representation along with an encode-perturb-decode scheme capable of producing semantically similar offspring. Novel approaches to search space analysis are applied to GPSR to investigate the role of locality and its relation to landscape modality in semantic representations. Results suggest that higher locality by itself may be an insufficient indicator of performance in GPSR. Finally, deep learning is leveraged to study the properties and potential of learned semantic representations with modern model architectures.
Experiments on known, synthetic benchmark problems demonstrate notable improvements of the explicit encoding over classic variation operators, while positive characteristics of the learned representation justify further research into model-based semantic algorithms for GPSR.
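
As a rough illustration of the encode-perturb-decode idea (not the representation developed in the thesis), the sketch below encodes a program by its semantics, i.e. its output vector on the training inputs, perturbs that vector slightly, and decodes by looking up the semantically closest program in a library of random expressions.

```python
"""Sketch of encode-perturb-decode style semantic variation for symbolic
regression; 'decode' here is a nearest-neighbour lookup in a random library,
which only illustrates the idea rather than reproducing the thesis method."""
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(-1, 1, 20)                  # training inputs
UNARY = [np.sin, np.cos, np.tanh, np.square]

def random_program():
    """A tiny random expression of the form x -> a * f(b * x) + c."""
    f = UNARY[rng.integers(len(UNARY))]
    a, b, c = rng.normal(size=3)
    return lambda x, f=f, a=a, b=b, c=c: a * f(b * x) + c

def encode(program):
    """Explicit semantic encoding: the program's outputs on the training inputs."""
    return program(X)

# Library used for decoding: random programs and their semantics.
library = [random_program() for _ in range(2000)]
semantics = np.stack([encode(p) for p in library])

def decode(target):
    """Return the library program whose semantics is closest to the target."""
    return library[np.argmin(np.linalg.norm(semantics - target, axis=1))]

parent = random_program()
perturbed = encode(parent) + rng.normal(scale=0.05, size=X.shape)   # perturb
child = decode(perturbed)                                           # decode
print("parent-child semantic distance:",
      np.linalg.norm(encode(parent) - encode(child)))
```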

11 July 2023
Damy Ha
Title: Constructing Continuous Bayesian Networks with GOMEA

Abstract: For years, the field of artificial intelligence (AI) has seen significant growth, with neural networks being a popular choice for predictive modeling due to their high accuracy. However, in healthcare there is a reluctance to widely adopt these AI models, as it is difficult, if not impossible, to interpret the underlying mechanics of such models. To address this interpretability issue, interpretable machine learning techniques such as Bayesian networks can be used. However, Bayesian networks come at the cost of being less accurate. To enhance the predictive power of Bayesian networks while keeping the model interpretable, an existing Bayesian network search algorithm has been extended to handle continuous data in an interpretable way.
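
The snippet below is a minimal sketch of one interpretable way to handle continuous data in a Bayesian network, using linear-Gaussian nodes scored with BIC. This is an illustrative assumption, not necessarily the model used in the talk, and the GOMEA-based structure search itself is not shown.

```python
"""Scoring a continuous (linear-Gaussian) Bayesian network structure: each
node is a Gaussian whose mean is a linear function of its parents. A structure
search algorithm would maximise this score over candidate DAGs."""
import numpy as np

def gaussian_node_loglik(child, parents):
    """Fit child ~ Normal(w @ parents + b, sigma^2) by least squares and
    return the log-likelihood of the data under that fit."""
    n = child.shape[0]
    A = np.column_stack([parents, np.ones(n)]) if parents.size else np.ones((n, 1))
    coef, *_ = np.linalg.lstsq(A, child, rcond=None)
    resid = child - A @ coef
    sigma2 = max(resid.var(), 1e-12)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

def bic_score(data, structure):
    """BIC of a DAG given as {node: [parent indices]}; higher is better."""
    n = data.shape[0]
    score = 0.0
    for node, pa in structure.items():
        k = len(pa) + 2                          # weights + intercept + variance
        score += gaussian_node_loglik(data[:, node], data[:, pa]) - 0.5 * k * np.log(n)
    return score

# Toy data generated from the chain x0 -> x1 -> x2.
rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x1 = 2.0 * x0 + rng.normal(scale=0.5, size=500)
x2 = -1.5 * x1 + rng.normal(scale=0.5, size=500)
data = np.column_stack([x0, x1, x2])

true_dag = {0: [], 1: [0], 2: [1]}
wrong_dag = {0: [], 1: [], 2: [0]}
print(bic_score(data, true_dag), ">", bic_score(data, wrong_dag))
```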

27 June 2023
Arkadiy Dushatskiy
Title: Multi-Objective Population Based Training

Abstract: Population Based Training (PBT) is an efficient hyperparameter optimization algorithm. PBT is a single-objective algorithm, but many real-world hyperparameter optimization problems involve two or more conflicting objectives. In this work, we therefore introduce a multi-objective version of PBT, MO-PBT. Our experiments on diverse multi-objective hyperparameter optimization problems (Precision/Recall, Accuracy/Fairness, Accuracy/Adversarial Robustness) show that MO-PBT outperforms random search, single-objective PBT, and the state-of-the-art multi-objective hyperparameter optimization algorithm MO-ASHA.
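
To give a flavour of the method, here is a minimal sketch of a PBT exploit/explore step in which selection is based on Pareto dominance rather than a single objective; the exact selection, truncation, and perturbation scheme of MO-PBT is not reproduced.

```python
"""Sketch of a PBT exploit/explore step with multi-objective (non-dominated)
selection. In a real loop, each member would train and re-evaluate its
objectives between steps."""
import copy
import random

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one
    (objectives are maximised here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pbt_step(population, rng):
    """population: list of dicts with 'hparams', 'weights', 'objectives'."""
    front = [m for m in population
             if not any(dominates(o["objectives"], m["objectives"])
                        for o in population)]
    for member in population:
        if any(m is member for m in front):
            continue                                    # keep non-dominated members
        donor = rng.choice(front)
        member["weights"] = copy.deepcopy(donor["weights"])    # exploit
        member["hparams"] = {k: v * rng.choice([0.8, 1.25])    # explore
                             for k, v in donor["hparams"].items()}
    return population

rng = random.Random(0)
population = [{"hparams": {"lr": 10 ** rng.uniform(-4, -1)},
               "weights": [0.0],
               "objectives": (rng.random(), rng.random())}    # e.g. precision, recall
              for _ in range(8)]
pbt_step(population, rng)
```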

Monika Grewal
Title: Learning Clinically Acceptable Segmentation of Organs at Risk in Cervical Cancer Radiation Treatment from Clinically Available Annotations

Abstract: Deep learning models benefit from training with a large dataset (labeled or unlabeled). Following this motivation, we present an approach to learn a deep learning model for the automatic segmentation of Organs at Risk (OARs) in cervical cancer radiation treatment from a large clinically available dataset of Computed Tomography (CT) scans containing data inhomogeneity, label noise, and missing annotations. We employ simple heuristics for automatic data cleaning to minimize data inhomogeneity and label noise. Further, we develop a semi-supervised learning approach utilizing a teacher-student setup, annotation imputation, and uncertainty-guided training to learn in the presence of missing annotations. Our experimental results show that learning from a large dataset with our approach yields a significant improvement in test performance despite missing annotations in the data. Further, the contours generated from the segmentation masks predicted by our model are found to be as clinically acceptable as manually generated contours.
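
The sketch below illustrates the teacher-student setup with uncertainty-guided imputation of missing annotations on toy tensors; the architecture, data pipeline, uncertainty measure, and thresholds are placeholders rather than those used in the work.

```python
"""Schematic teacher-student training step with uncertainty-guided imputation
of missing annotations, on toy data."""
import torch
import torch.nn.functional as F

torch.manual_seed(0)
NUM_CLASSES = 4
student = torch.nn.Conv2d(1, NUM_CLASSES, 3, padding=1)
teacher = torch.nn.Conv2d(1, NUM_CLASSES, 3, padding=1)
teacher.load_state_dict(student.state_dict())
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

scan = torch.randn(2, 1, 32, 32)                 # toy CT slices
labels = torch.randint(0, NUM_CLASSES, (2, 32, 32))
missing = torch.rand(2, 32, 32) < 0.3            # voxels with no annotation

with torch.no_grad():
    probs = F.softmax(teacher(scan), dim=1)
    pseudo = probs.argmax(dim=1)                 # teacher-imputed labels
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    confident = entropy < 0.5                    # uncertainty-guided mask

# Impute missing annotations only where the teacher is confident; ignore the rest.
target = torch.where(missing & confident, pseudo, labels)
weight = (~missing | confident).float()

loss = (F.cross_entropy(student(scan), target, reduction="none") * weight).mean()
opt.zero_grad()
loss.backward()
opt.step()

# The teacher follows the student via an exponential moving average.
with torch.no_grad():
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(0.99).add_(s, alpha=0.01)
```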

18 April 2023
Andrew Lensen
Title: Genetic Programming, Explainability, and Interdisciplinary AI

Biography: Dr. Andrew Lensen is a Senior Lecturer (Pūkenga Matua) in Artificial Intelligence (Atamai Horihori) at Te Herenga Waka—Victoria University of Wellington, Aotearoa/New Zealand. His core research interests center around explainable AI, genetic programming, unsupervised learning (nonlinear dimensionality reduction), and real-world/interdisciplinary AI in New Zealand. He is also interested in the social and ethical implications of AI and aspects of deep learning, such as embedding/manifold learning methods.

4 April 2023
Mafalda Malafaia and Thalea Schlender
Title: Modelling discontinuities in GP-GOMEA

Abstract: Often it can be easier to capture real-world relationships by considering that a phenomenon may be described by multiple relationships across its input space. As such, a categorical variable may induce different sub-solutions. In a medical application, this could, for instance, mirror the way doctors segregate patients into groups. Different relationships are then established, depending on which cluster a patient belongs to. The original GP-GOMEA has difficulty modelling these relationships.
Our work, therefore, adds the capability to utilise both categorical and numeric features, as well as the capability to model these discontinuities. Specifically, we do this by adding if statements and syntactical constraints. Although this increases the branching factor of the tree from 2 to 3, the resulting piece-wise relationships are often more understandable. In addition, our work aims to enhance the variation to improve search efficiency. This work is currently in progress.
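
The toy snippet below shows the kind of piecewise expression an if node with a categorical test makes representable (branching factor 3: condition, then-branch, else-branch); it only illustrates evaluation and is not GP-GOMEA's actual template, constraints, or variation.

```python
"""Evaluating a piecewise expression tree with a ternary if node that tests a
categorical feature; the rule itself is hypothetical, for illustration only."""
import numpy as np

def evaluate(node, data):
    """Recursively evaluate an expression tree on a dict of feature arrays."""
    op, args = node
    if op == "if":                              # branching factor 3
        (feature, category), then_branch, else_branch = args
        mask = data[feature] == category        # test on a categorical feature
        return np.where(mask, evaluate(then_branch, data),
                              evaluate(else_branch, data))
    if op == "var":
        return data[args]
    if op == "const":
        return np.full_like(next(iter(data.values())), args, dtype=float)
    left, right = (evaluate(child, data) for child in args)
    return {"add": left + right, "mul": left * right}[op]

# if sex == 1 then 2 * age else age + 10
tree = ("if", (("sex", 1),
               ("mul", (("const", 2.0), ("var", "age"))),
               ("add", (("var", "age"), ("const", 10.0)))))

data = {"age": np.array([40.0, 55.0, 62.0]), "sex": np.array([1, 0, 1])}
print(evaluate(tree, data))   # -> [ 80.  65. 124.]
```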

Title: Multi-Modal Pipeline
Abstract: Many real-world problems provide data in different modalities. A medical application, for instance, may have valuable demographic and health characteristics, such as age or weight. In addition, it may also provide essential information via imaging in the form of MRI or CT scans. To utilise the most information, these sources must be considered together. We therefore aim to build an image feature-engineering approach which can take other modalities into consideration. In combination with tabular data, such image features may then be used to evolve explainable symbolic regression models.
Specifically, our work tests various neural network configurations that fuse different modalities at either an early or an intermediate stage. Moreover, we consider the use of unsupervised pretraining of parts of the network. This work is currently in progress.
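
As an illustration of intermediate fusion, the sketch below combines a small imaging branch with a tabular branch and concatenates their features before a final layer; the layer sizes, fusion point, and output head are placeholders, not the configurations tested in this work.

```python
"""Minimal intermediate-fusion network: an imaging branch and a tabular branch
whose features are concatenated before a shared head."""
import torch
import torch.nn as nn

class IntermediateFusion(nn.Module):
    def __init__(self, num_tabular, num_image_features=32, num_outputs=8):
        super().__init__()
        self.image_branch = nn.Sequential(       # e.g. an MRI/CT slice encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(16 * 4 * 4, num_image_features), nn.ReLU())
        self.tabular_branch = nn.Sequential(     # e.g. age, weight, ...
            nn.Linear(num_tabular, 16), nn.ReLU())
        # The fused features could later serve as inputs to symbolic regression.
        self.head = nn.Linear(num_image_features + 16, num_outputs)

    def forward(self, image, tabular):
        fused = torch.cat([self.image_branch(image),
                           self.tabular_branch(tabular)], dim=1)
        return self.head(fused)

model = IntermediateFusion(num_tabular=2)
features = model(torch.randn(4, 1, 64, 64), torch.randn(4, 2))
print(features.shape)   # torch.Size([4, 8])
```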

7 March 2023
Monika Grewal
Title: Multi-Objective Learning using HV Maximization

Abstract: Real-world problems are often multi-objective, with decision-makers unable to specify a priori which trade-off between the conflicting objectives is preferable. Intuitively, building machine learning solutions in such cases would entail providing multiple predictions that span and uniformly cover the Pareto front of all optimal trade-off solutions. We propose a novel approach for multi-objective training of neural networks to approximate the Pareto front during inference. In our approach, we train the neural networks multi-objectively using a dynamic loss function, wherein each network's losses (corresponding to multiple objectives) are weighted by their hypervolume maximizing gradients. Experiments on different multi-objective problems show that our approach returns well-spread outputs across different trade-offs on the approximated Pareto front without requiring the trade-off vectors to be specified a priori. Further, results of comparisons with the state-of-the-art approaches highlight the added value of our proposed approach, especially in cases where the Pareto front is asymmetric.
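
A simplified two-objective sketch of the dynamic loss idea is given below: each network's loss vector is weighted by the (detached) gradient of the 2D hypervolume at its point in loss space, so that gradient descent on the weighted sum ascends the hypervolume. The toy "networks", reference point, and treatment of dominated points are simplifications, not the paper's full algorithm.

```python
"""Hypervolume-gradient weighted training step for a set of toy two-objective
"networks" (here, plain parameter vectors)."""
import torch

torch.manual_seed(0)

def hypervolume_gradients(points, ref):
    """Exact 2-D hypervolume gradients for loss points (lower is better).
    Strictly dominated points receive zero gradient."""
    grads = torch.zeros_like(points)
    nd = [i for i, p in enumerate(points)
          if not any((q <= p).all() and (q < p).any() for q in points)]
    order = sorted(nd, key=lambda i: points[i, 0].item())        # ascending f1
    for pos, i in enumerate(order):
        f2_prev = ref[1] if pos == 0 else points[order[pos - 1], 1]
        f1_next = ref[0] if pos == len(order) - 1 else points[order[pos + 1], 0]
        grads[i, 0] = -(f2_prev - points[i, 1])                      # dHV/df1_i
        grads[i, 1] = -(ref[0] - points[i, 0]) + (ref[0] - f1_next)  # dHV/df2_i
    return grads

# Three "networks", each with two conflicting losses.
params = [torch.randn(2, requires_grad=True) for _ in range(3)]
opt = torch.optim.SGD(params, lr=0.1)
losses = torch.stack([torch.stack([(p - 1).pow(2).sum(),      # objective 1
                                   (p + 1).pow(2).sum()])     # objective 2
                      for p in params])
weights = -hypervolume_gradients(losses.detach(), ref=torch.tensor([50.0, 50.0]))
total = (weights * losses).sum()   # descending this ascends the hypervolume
opt.zero_grad()
total.backward()
opt.step()
```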

Georgios Andreadis
Title: MOREA: a GPU-accelerated Evolutionary Algorithm for Multi-Objective Deformable Registration of 3D Medical Images

Abstract: Finding a realistic deformation that transforms one image into another, especially when large deformations are required, is considered a key challenge in medical image analysis. A proper image registration approach to achieve this could unlock a number of applications requiring information to be transferred between images. Clinical adoption is currently hampered because many existing methods require extensive configuration effort before each use, or cannot (realistically) capture large deformations. A recent multi-objective approach that uses the Multi-Objective Real-Valued Gene-pool Optimal Mixing Evolutionary Algorithm (MO-RV-GOMEA) and a dual-dynamic mesh transformation model has shown promise, exposing the trade-offs inherent to image registration problems and modeling large deformations in 2D. This work builds on this promise and introduces MOREA: the first evolutionary algorithm-based multi-objective approach to deformable registration of 3D images capable of tackling large deformations. MOREA includes a 3D biomechanical mesh model for physical plausibility and is fully GPU-accelerated. This talk is based on a paper currently under submission at GECCO’23.
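
To illustrate why deformable registration is naturally multi-objective, the toy snippet below evaluates two conflicting objectives (intensity mismatch vs. deformation magnitude) for a candidate deformation, on the GPU when available. It uses a dense displacement field as a stand-in, not MOREA's dual-dynamic biomechanical mesh model.

```python
"""Two conflicting registration objectives evaluated on toy 3D volumes, using
a dense displacement field as a simplified stand-in for a deformation model."""
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
source = torch.rand(1, 1, 32, 32, 32, device=device)   # moving 3D image
target = torch.rand(1, 1, 32, 32, 32, device=device)   # fixed 3D image

# A candidate solution: a displacement added to the identity sampling grid.
identity = F.affine_grid(torch.eye(3, 4, device=device).unsqueeze(0),
                         source.shape, align_corners=False)
displacement = 0.05 * torch.randn_like(identity)
warped = F.grid_sample(source, identity + displacement, align_corners=False)

objective_similarity = F.mse_loss(warped, target)      # match the target...
objective_magnitude = displacement.pow(2).mean()        # ...with a small deformation
print(objective_similarity.item(), objective_magnitude.item())
```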

24 January 2023
Giorgia Nadizar
Title: Augmenting Genetic Programming search with Deep Learning Models for Symbolic Regression

Abstract: Symbolic Regression (SR) has recently gained momentum within the scientific community, as it can yield accurate yet more interpretable results compared to black-box models.
The two core state-of-the-art methods for solving SR, namely Genetic Programming (GP) and deep-learning models (DLMs), both suffer from issues that hinder their large-scale applicability.
GP models are scalable and precise, but due to their lack of locality they require considerable computational power (and time) to achieve good results, whereas DLMs provide solutions significantly faster yet are unable to scale to large real-world problems. In this talk, I will provide some insights into the ongoing work aimed at combining GP and DLMs into an efficient and scalable search algorithm for solving SR.