
Research Group Eyke Hüllermeier

Eyke Hüllermeier, Prof. Dr.

Artificial Intelligence & Machine Learning

A3 | Computational Models

Eyke Hüllermeier heads the Chair of Artificial Intelligence and Machine Learning at LMU Munich.

His research centers on the methods and theoretical foundations of artificial intelligence, with a particular focus on machine learning and reasoning under uncertainty. He has published more than 300 articles on these topics in top-tier journals and at major international conferences, and several of his contributions have been recognized with scientific awards.

Team members @MCML

Viktor Bengs, Dr.

Jonas Hanselle

Paul Hofman

Alireza Javanmardi

Timo Kaufmann

Yunpu Ma, Dr.

Valentin Margraf

Maximilian Muschalik

Mohammad Hossein Shaker Ardakani

All team members belong to Artificial Intelligence & Machine Learning, A3 | Computational Models.

Publications @MCML

[42]
A. Javanmardi, D. Stutz and E. Hüllermeier.
Conformalized Credal Set Predictors.
38th Conference on Neural Information Processing Systems (NeurIPS 2024). Vancouver, Canada, Dec 10-15, 2024. To be published. Preprint available at arXiv.
Abstract

Credal sets are sets of probability distributions that are considered as candidates for an imprecisely known ground-truth distribution. In machine learning, they have recently attracted attention as an appealing formalism for uncertainty representation, in particular due to their ability to represent both the aleatoric and epistemic uncertainty in a prediction. However, the design of methods for learning credal set predictors remains a challenging problem. In this paper, we make use of conformal prediction for this purpose. More specifically, we propose a method for predicting credal sets in the classification task, given training data labeled by probability distributions. Since our method inherits the coverage guarantees of conformal prediction, our conformal credal sets are guaranteed to be valid with high probability (without any assumptions on model or distribution). We demonstrate the applicability of our method to natural language inference, a highly ambiguous natural language task where it is common to obtain multiple annotations per example.
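In code, the conformal coverage guarantee the method inherits can be illustrated with a minimal sketch of standard split conformal classification (not the paper's credal-set construction; the helper name `conformal_prediction_sets` and the toy data are assumptions for illustration):

```python
import numpy as np

def conformal_prediction_sets(cal_scores, test_probs, alpha=0.1):
    """Split conformal prediction for classification: threshold the
    nonconformity scores (here 1 - predicted probability of a label)
    at a finite-sample-corrected quantile of the calibration scores.
    The resulting label sets cover the true label with probability
    >= 1 - alpha, marginally, assuming exchangeable data."""
    n = len(cal_scores)
    level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(cal_scores, level, method="higher")
    # A label enters the set iff its nonconformity score is below q.
    return [set(np.flatnonzero(1 - p <= q)) for p in test_probs]

# Toy data: 20 calibration scores and two test inputs with 3 classes.
cal = np.linspace(0.05, 0.5, 20)
test = np.array([[0.7, 0.2, 0.1], [0.55, 0.25, 0.2]])
sets_ = conformal_prediction_sets(cal, test, alpha=0.1)  # [{0}, {0}]
```

Smaller values of alpha tighten the coverage requirement and enlarge the sets; the paper lifts this idea from sets of labels to sets of probability distributions.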

MCML Authors: Alireza Javanmardi, Eyke Hüllermeier


[41]
C. Damke and E. Hüllermeier.
CUQ-GNN: Committee-Based Graph Uncertainty Quantification Using Posterior Networks.
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2024). Vilnius, Lithuania, Sep 09-13, 2024. DOI.
MCML Authors: Eyke Hüllermeier


[40]
A. Vahidi, L. Wimmer, H. A. Gündüz, B. Bischl, E. Hüllermeier and M. Rezaei.
Diversified Ensemble of Independent Sub-Networks for Robust Self-Supervised Representation Learning.
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2024). Vilnius, Lithuania, Sep 09-13, 2024. DOI.
Abstract

Ensembling a neural network is a widely recognized approach to enhance model performance, estimate uncertainty, and improve robustness in deep supervised learning. However, deep ensembles often come with high computational costs and memory demands. In addition, the efficiency of a deep ensemble is related to diversity among the ensemble members, which is challenging for large, over-parameterized deep neural networks. Moreover, ensemble learning has not yet seen such widespread adoption for unsupervised learning and it remains a challenging endeavor for self-supervised or unsupervised representation learning. Motivated by these challenges, we present a novel self-supervised training regime that leverages an ensemble of independent sub-networks, complemented by a new loss function designed to encourage diversity. Our method efficiently builds a sub-model ensemble with high diversity, leading to well-calibrated estimates of model uncertainty, all achieved with minimal computational overhead compared to traditional deep self-supervised ensembles. To evaluate the effectiveness of our approach, we conducted extensive experiments across various tasks, including in-distribution generalization, out-of-distribution detection, dataset corruption, and semi-supervised settings. The results demonstrate that our method significantly improves prediction reliability. Our approach not only achieves excellent accuracy but also enhances calibration, improving on important baseline performance across a wide range of self-supervised architectures in computer vision, natural language processing, and genomics data.

MCML Authors: Lisa Wimmer (Statistical Learning & Data Science, A1 | Statistical Foundations & Explainability); Hüseyin Anil Gündüz (Statistical Learning & Data Science, A1 | Statistical Foundations & Explainability); Bernd Bischl, Prof. Dr. (Statistical Learning & Data Science, A1 | Statistical Foundations & Explainability); Eyke Hüllermeier; Mina Rezaei, Dr. (Statistical Learning & Data Science, Education Coordination, A1 | Statistical Foundations & Explainability)


[39]
F. Fumagalli, M. Muschalik, P. Kolpaczki, E. Hüllermeier and B. Hammer.
KernelSHAP-IQ: Weighted Least Square Optimization for Shapley Interactions.
41st International Conference on Machine Learning (ICML 2024). Vienna, Austria, Jul 21-27, 2024. URL.
MCML Authors: Maximilian Muschalik, Eyke Hüllermeier


[38]
M. Herrmann, F. J. D. Lange, K. Eggensperger, G. Casalicchio, M. Wever, M. Feurer, D. Rügamer, E. Hüllermeier, A.-L. Boulesteix and B. Bischl.
Position: Why We Must Rethink Empirical Research in Machine Learning.
41st International Conference on Machine Learning (ICML 2024). Vienna, Austria, Jul 21-27, 2024. URL.
Abstract

We warn against a common but incomplete understanding of empirical research in machine learning (ML) that leads to non-replicable results, makes findings unreliable, and threatens to undermine progress in the field. To overcome this alarming situation, we call for more awareness of the plurality of ways of gaining knowledge experimentally but also of some epistemic limitations. In particular, we argue most current empirical ML research is fashioned as confirmatory research while it should rather be considered exploratory.

MCML Authors: Moritz Herrmann, Dr. (Biometry in Molecular Medicine, Coordinator for Reproducibility & Open Science, A1 | Statistical Foundations & Explainability); Giuseppe Casalicchio, Dr. (Statistical Learning & Data Science, A1 | Statistical Foundations & Explainability); Marcel Wever, Dr. (former member, A3 | Computational Models); Matthias Feurer, Prof. Dr. (Statistical Learning & Data Science, A1 | Statistical Foundations & Explainability); David Rügamer, Prof. Dr. (Data Science Group, A1 | Statistical Foundations & Explainability); Eyke Hüllermeier; Anne-Laure Boulesteix, Prof. Dr. (Biometry in Molecular Medicine, A1 | Statistical Foundations & Explainability); Bernd Bischl


[37]
Y. Sale, V. Bengs, M. Caprio and E. Hüllermeier.
Second-Order Uncertainty Quantification: A Distance-Based Approach.
41st International Conference on Machine Learning (ICML 2024). Vienna, Austria, Jul 21-27, 2024. URL.
MCML Authors: Viktor Bengs, Eyke Hüllermeier


[36]
C. Damke and E. Hüllermeier.
Linear Opinion Pooling for Uncertainty Quantification on Graphs.
40th Conference on Uncertainty in Artificial Intelligence (UAI 2024). Barcelona, Spain, Jul 16-18, 2024. URL. GitHub.
MCML Authors: Eyke Hüllermeier


[35]
Y. Sale, P. Hofman, T. Löhr, L. Wimmer, T. Nagler and E. Hüllermeier.
Label-wise Aleatoric and Epistemic Uncertainty Quantification.
40th Conference on Uncertainty in Artificial Intelligence (UAI 2024). Barcelona, Spain, Jul 16-18, 2024. URL.
MCML Authors: Paul Hofman; Lisa Wimmer; Thomas Nagler, Prof. Dr. (Computational Statistics & Data Science, A1 | Statistical Foundations & Explainability); Eyke Hüllermeier


[34]
A. Vahidi, S. Schoßer, L. Wimmer, Y. Li, B. Bischl, E. Hüllermeier and M. Rezaei.
Probabilistic Self-supervised Learning via Scoring Rules Minimization.
12th International Conference on Learning Representations (ICLR 2024). Vienna, Austria, May 07-11, 2024. URL. GitHub.
Abstract

In this paper, we propose ProSMIN, a novel method for probabilistic self-supervised learning via Scoring Rule Minimization, which leverages the power of probabilistic models to enhance representation quality and mitigate collapsing representations. Our proposed approach involves two neural networks: the online network and the target network, which collaborate and learn the diverse distribution of representations from each other through knowledge distillation. By presenting the input samples in two augmented formats, the online network is trained to predict the target network's representation of the same sample under a different augmented view. The two networks are trained via our new loss function based on proper scoring rules. We provide a theoretical justification for ProSMIN's convergence, demonstrating the strict propriety of its modified scoring rule. This insight validates the method's optimization process and contributes to its robustness and effectiveness in improving representation quality. We evaluate our probabilistic model on various downstream tasks, such as in-distribution generalization, out-of-distribution detection, dataset corruption, low-shot learning, and transfer learning. Our method achieves superior accuracy and calibration, surpassing the self-supervised baseline in a wide range of experiments on large-scale datasets such as ImageNet-O and ImageNet-C, demonstrating ProSMIN's scalability and real-world applicability.

MCML Authors: Lisa Wimmer; Yawei Li (Statistical Learning & Data Science, A1 | Statistical Foundations & Explainability); Bernd Bischl; Eyke Hüllermeier; Mina Rezaei


[33]
V. Bengs, B. Haddenhorst and E. Hüllermeier.
Identifying Copeland Winners in Dueling Bandits with Indifferences.
27th International Conference on Artificial Intelligence and Statistics (AISTATS 2024). Valencia, Spain, May 02-04, 2024. URL.
Abstract

We consider the task of identifying the Copeland winner(s) in a dueling bandits problem with ternary feedback. This is an underexplored but practically relevant variant of the conventional dueling bandits problem, in which, in addition to strict preference between two arms, one may observe feedback in the form of an indifference. We provide a lower bound on the sample complexity for any learning algorithm finding the Copeland winner(s) with a fixed error probability. Moreover, we propose POCOWISTA, an algorithm with a sample complexity that almost matches this lower bound, and which shows excellent empirical performance, even for the conventional dueling bandits problem. For the case where the preference probabilities satisfy a specific type of stochastic transitivity, we provide a refined version with an improved worst case sample complexity.
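For intuition, once the pairwise preference probabilities are known, the Copeland winner itself is easy to read off; the hard part addressed by the paper is estimating those probabilities from as few duels as possible. A minimal sketch (the helper `copeland_winners` is hypothetical; under ternary feedback, P[i, j] + P[j, i] may be less than 1, the remaining mass being indifference):

```python
import numpy as np

def copeland_winners(P):
    """Copeland winners from a matrix of pairwise preference
    probabilities: P[i, j] is the probability that arm i is strictly
    preferred to arm j. Arm i beats arm j if P[i, j] > P[j, i];
    indifference counts for neither arm. The winners are the arms
    that beat the largest number of opponents."""
    K = P.shape[0]
    wins = np.zeros(K, dtype=int)
    for i in range(K):
        for j in range(K):
            if i != j and P[i, j] > P[j, i]:
                wins[i] += 1
    return set(np.flatnonzero(wins == wins.max())), wins

# Toy example with 3 arms; P[i, j] + P[j, i] < 1 is allowed, the
# remaining probability mass corresponding to observed indifference.
P = np.array([[0.0, 0.6, 0.7],
              [0.3, 0.0, 0.55],
              [0.2, 0.3, 0.0]])
winners, wins = copeland_winners(P)  # winners == {0}, wins == [2, 1, 0]
```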

MCML Authors: Viktor Bengs, Eyke Hüllermeier


[32]
P. Kolpaczki, M. Muschalik, F. Fumagalli, B. Hammer and E. Hüllermeier.
SVARM-IQ: Efficient Approximation of Any-order Shapley Interactions through Stratification.
27th International Conference on Artificial Intelligence and Statistics (AISTATS 2024). Valencia, Spain, May 02-04, 2024. URL.
Abstract

Addressing the limitations of individual attribution scores via the Shapley value (SV), the field of explainable AI (XAI) has recently explored intricate interactions of features or data points. In particular, extensions of the SV, such as the Shapley Interaction Index (SII), have been proposed as a measure to still benefit from the axiomatic basis of the SV. However, similar to the SV, their exact computation remains computationally prohibitive. Hence, we propose with SVARM-IQ a sampling-based approach to efficiently approximate Shapley-based interaction indices of any order. SVARM-IQ can be applied to a broad class of interaction indices, including the SII, by leveraging a novel stratified representation. We provide non-asymptotic theoretical guarantees on its approximation quality and empirically demonstrate that SVARM-IQ achieves state-of-the-art estimation results in practical XAI scenarios on different model classes and application domains.
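For reference, the pairwise Shapley Interaction Index (SII) that SVARM-IQ approximates can be computed exactly by brute force, at exponential cost; a sketch under an assumed toy game (the helper name `shapley_interaction_pair` is illustrative, not from the paper):

```python
from itertools import combinations
from math import factorial

def shapley_interaction_pair(n, value, i, j):
    """Exact pairwise Shapley Interaction Index (SII): a weighted sum
    of discrete second derivatives v(S+ij) - v(S+i) - v(S+j) + v(S)
    over all coalitions S avoiding both i and j. Exponential in n,
    which is the cost sampling schemes like SVARM-IQ sidestep."""
    others = [p for p in range(n) if p not in (i, j)]
    sii = 0.0
    for size in range(n - 1):  # |S| ranges over 0 .. n-2
        w = factorial(size) * factorial(n - size - 2) / factorial(n - 1)
        for S in combinations(others, size):
            S = frozenset(S)
            delta = (value(S | {i, j}) - value(S | {i})
                     - value(S | {j}) + value(S))
            sii += w * delta
    return sii

# Toy game: worth 1 exactly when players 0 and 1 cooperate.
v = lambda S: 1.0 if {0, 1} <= S else 0.0
sii_01 = shapley_interaction_pair(3, v, 0, 1)  # 1.0: pure synergy
sii_02 = shapley_interaction_pair(3, v, 0, 2)  # 0.0: no interaction
```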

MCML Authors: Maximilian Muschalik, Eyke Hüllermeier


[31]
P. Kolpaczki, V. Bengs, M. Muschalik and E. Hüllermeier.
Approximating the Shapley Value without Marginal Contributions.
38th Conference on Artificial Intelligence (AAAI 2024). Vancouver, Canada, Feb 20-27, 2024. DOI.
Abstract

The Shapley value, which is arguably the most popular approach for assigning a meaningful contribution value to players in a cooperative game, has recently been used intensively in explainable artificial intelligence. Its meaningfulness is due to axiomatic properties that only the Shapley value satisfies, which, however, comes at the expense of an exact computation growing exponentially with the number of agents. Accordingly, a number of works are devoted to the efficient approximation of the Shapley value, most of which revolve around the notion of an agent's marginal contribution. In this paper, we propose SVARM and Stratified SVARM, two parameter-free and domain-independent approximation algorithms based on a representation of the Shapley value detached from the notion of marginal contribution. We prove unmatched theoretical guarantees regarding their approximation quality and provide empirical results, including synthetic games as well as common explainability use cases, comparing ourselves with state-of-the-art methods.
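The marginal-contribution representation that SVARM departs from can be written down directly; the sketch below computes exact Shapley values by enumerating all coalitions, at O(2^n) cost (illustrative `exact_shapley` helper and toy game, not the paper's estimator):

```python
from itertools import combinations
from math import factorial

def exact_shapley(n, value):
    """Exact Shapley values by enumeration: player i's value is the
    weighted average of its marginal contributions
    value(S + {i}) - value(S) over all coalitions S not containing i.
    Needs O(2^n) game evaluations -- the cost that sampling-based
    approximators such as SVARM are designed to avoid."""
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for size in range(n):
            w = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                S = frozenset(S)
                phi[i] += w * (value(S | {i}) - value(S))
    return phi

# Toy cooperative game: a coalition's worth is the sum of its members'
# weights, so each player's Shapley value equals its own weight.
weights = [3.0, 1.0, 2.0]
v = lambda S: sum(weights[p] for p in S)
phi = exact_shapley(3, v)  # [3.0, 1.0, 2.0]
```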

MCML Authors: Viktor Bengs, Maximilian Muschalik, Eyke Hüllermeier


[30]
J. Lienen and E. Hüllermeier.
Mitigating Label Noise through Data Ambiguation.
38th Conference on Artificial Intelligence (AAAI 2024). Vancouver, Canada, Feb 20-27, 2024. DOI.
Abstract

Label noise poses an important challenge in machine learning, especially in deep learning, in which large models with high expressive power dominate the field. Models of that kind are prone to memorizing incorrect labels, thereby harming generalization performance. Many methods have been proposed to address this problem, including robust loss functions and more complex label correction approaches. Robust loss functions are appealing due to their simplicity, but typically lack flexibility, while label correction usually adds substantial complexity to the training setup. In this paper, we suggest addressing the shortcomings of both methodologies by 'ambiguating' the target information, adding additional, complementary candidate labels in case the learner is not sufficiently convinced of the observed training label. More precisely, we leverage the framework of so-called superset learning to construct set-valued targets based on a confidence threshold, which deliver imprecise yet more reliable beliefs about the ground truth, effectively helping the learner to suppress the memorization effect. In an extensive empirical evaluation, our method demonstrates favorable learning behavior on synthetic and real-world noise, confirming its effectiveness in detecting and correcting erroneous training labels.
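The confidence-threshold construction can be sketched as follows; this is an illustrative reading only, with the helper name `ambiguate_targets` and the candidate-selection rule as assumptions rather than the paper's exact procedure:

```python
import numpy as np

def ambiguate_targets(probs, observed, threshold=0.5):
    """Construct set-valued ('ambiguated') targets: keep the observed
    label as-is when the model is sufficiently confident in it, and
    otherwise enlarge the target with every candidate label that the
    model deems more plausible than the observed one."""
    targets = []
    for p, y in zip(probs, observed):
        if p[y] >= threshold:
            targets.append({y})          # learner trusts the label
        else:
            cands = {c for c in range(len(p)) if p[c] > p[y]}
            targets.append(cands | {y})  # ambiguate: superset target
    return targets

probs = np.array([[0.8, 0.1, 0.1],   # confident in observed label 0
                  [0.2, 0.5, 0.3]])  # observed 0, but model prefers 1
targets = ambiguate_targets(probs, observed=[0, 0], threshold=0.6)
# targets == [{0}, {0, 1, 2}]
```

Training against the set-valued target then no longer forces the network to fit a potentially noisy label exactly, which is the mechanism that suppresses memorization.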

MCML Authors: Eyke Hüllermeier


[29]
M. Muschalik, F. Fumagalli, B. Hammer and E. Hüllermeier.
Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles.
38th Conference on Artificial Intelligence (AAAI 2024). Vancouver, Canada, Feb 20-27, 2024. DOI.
Abstract

While shallow decision trees may be interpretable, larger ensemble models like gradient-boosted trees, which often set the state of the art in machine learning problems involving tabular data, still remain black box models. As a remedy, the Shapley value (SV) is a well-known concept in explainable artificial intelligence (XAI) research for quantifying additive feature attributions of predictions. The model-specific TreeSHAP methodology solves the exponential complexity for retrieving exact SVs from tree-based models. Expanding beyond individual feature attribution, Shapley interactions reveal the impact of intricate feature interactions of any order. In this work, we present TreeSHAP-IQ, an efficient method to compute any-order additive Shapley interactions for predictions of tree-based models. TreeSHAP-IQ is supported by a mathematical framework that exploits polynomial arithmetic to compute the interaction scores in a single recursive traversal of the tree, akin to Linear TreeSHAP. We apply TreeSHAP-IQ on state-of-the-art tree ensembles and explore interactions on well-established benchmark datasets.

MCML Authors: Maximilian Muschalik, Eyke Hüllermeier


[28]
E. Hüllermeier and R. Slowinski.
Preference learning and multiple criteria decision aiding: Differences, commonalities, and synergies -- Part I.
4OR (Jan. 2024). DOI.
MCML Authors: Eyke Hüllermeier


[27]
E. Hüllermeier and R. Slowinski.
Preference learning and multiple criteria decision aiding: Differences, commonalities, and synergies -- Part II.
4OR (Jan. 2024). DOI.
MCML Authors: Eyke Hüllermeier


[26]
F. Fumagalli, M. Muschalik, P. Kolpaczki, E. Hüllermeier and B. Hammer.
SHAP-IQ: Unified Approximation of any-order Shapley Interactions.
37th Conference on Neural Information Processing Systems (NeurIPS 2023). New Orleans, LA, USA, Dec 10-16, 2023. URL.
MCML Authors: Maximilian Muschalik, Eyke Hüllermeier


[25]
Y. Sale, P. Hofman, L. Wimmer, E. Hüllermeier and T. Nagler.
Second-Order Uncertainty Quantification: Variance-Based Measures.
Preprint at arXiv (Dec. 2023). arXiv.
MCML Authors: Paul Hofman, Lisa Wimmer, Eyke Hüllermeier, Thomas Nagler


[24]
J. Hanselle, J. Fürnkranz and E. Hüllermeier.
Probabilistic Scoring Lists for Interpretable Machine Learning.
26th International Conference on Discovery Science (DS 2023). Porto, Portugal, Oct 09-11, 2023. DOI.
MCML Authors: Jonas Hanselle, Eyke Hüllermeier


[23]
J. Brandt, E. Schede, S. Sharma, V. Bengs, E. Hüllermeier and K. Tierney.
Contextual Preselection Methods in Pool-based Realtime Algorithm Configuration.
Conference on Lernen. Wissen. Daten. Analysen (LWDA 2023). Marburg, Germany, Oct 09-11, 2023. PDF.
MCML Authors: Viktor Bengs, Eyke Hüllermeier


[22]
J. Hanselle, J. Kornowicz, S. Heid, K. Thommes and E. Hüllermeier.
Comparing Humans and Algorithms in Feature Ranking: A Case-Study in the Medical Domain.
Conference on Lernen. Wissen. Daten. Analysen (LWDA 2023). Marburg, Germany, Oct 09-11, 2023. PDF.
MCML Authors: Jonas Hanselle, Eyke Hüllermeier


[21]
S. Haas and E. Hüllermeier.
Rectifying Bias in Ordinal Observational Data Using Unimodal Label Smoothing.
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2023). Turin, Italy, Sep 18-22, 2023. DOI.
MCML Authors: Eyke Hüllermeier


[20]
M. Muschalik, F. Fumagalli, B. Hammer and E. Hüllermeier.
iSAGE: An Incremental Version of SAGE for Online Explanation on Data Streams.
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2023). Turin, Italy, Sep 18-22, 2023. DOI.
MCML Authors: Maximilian Muschalik, Eyke Hüllermeier


[19]
A. Javanmardi, Y. Sale, P. Hofman and E. Hüllermeier.
Conformal Prediction with Partially Labeled Data.
12th Symposium on Conformal and Probabilistic Prediction with Applications (COPA 2023). Limassol, Cyprus, Sep 13-15, 2023. URL.
MCML Authors: Alireza Javanmardi, Paul Hofman, Eyke Hüllermeier


[18]
M. Caprio, Y. Sale, E. Hüllermeier and I. Lee.
A novel Bayes' Theorem for Upper Probabilities.
International Workshop on Epistemic Uncertainty in Artificial Intelligence (Epi UAI 2023). Pittsburgh, PA, USA, Aug 04, 2023. DOI.
MCML Authors: Eyke Hüllermeier


[17]
Y. Sale, M. Caprio and E. Hüllermeier.
Is the Volume of a Credal Set a Good Measure for Epistemic Uncertainty?
39th Conference on Uncertainty in Artificial Intelligence (UAI 2023). Pittsburgh, PA, USA, Aug 01-03, 2023. URL.
MCML Authors: Eyke Hüllermeier


[16]
L. Wimmer, Y. Sale, P. Hofman, B. Bischl and E. Hüllermeier.
Quantifying Aleatoric and Epistemic Uncertainty in Machine Learning: Are Conditional Entropy and Mutual Information Appropriate Measures?
39th Conference on Uncertainty in Artificial Intelligence (UAI 2023). Pittsburgh, PA, USA, Aug 01-03, 2023. URL.
MCML Authors: Lisa Wimmer, Paul Hofman, Bernd Bischl, Eyke Hüllermeier


[15]
M. K. Belaid, R. Bornemann, M. Rabus, R. Krestel and E. Hüllermeier.
Compare-xAI: Toward Unifying Functional Testing Methods for Post-hoc XAI Algorithms into a Multi-dimensional Benchmark.
1st World Conference on eXplainable Artificial Intelligence (xAI 2023). Lisbon, Portugal, Jul 26-28, 2023. DOI.
MCML Authors: Eyke Hüllermeier


[14]
M. Muschalik, F. Fumagalli, R. Jagtani, B. Hammer and E. Hüllermeier.
iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios.
1st World Conference on eXplainable Artificial Intelligence (xAI 2023). Lisbon, Portugal, Jul 26-28, 2023. Best Paper Award. DOI.
MCML Authors: Maximilian Muschalik, Eyke Hüllermeier


[13]
V. Bengs, E. Hüllermeier and W. Waegeman.
On Second-Order Scoring Rules for Epistemic Uncertainty Quantification.
40th International Conference on Machine Learning (ICML 2023). Honolulu, Hawaii, Jul 23-29, 2023. URL.
MCML Authors: Viktor Bengs, Eyke Hüllermeier


[12]
M. Wever, M. Özdogan and E. Hüllermeier.
Cooperative Co-Evolution for Ensembles of Nested Dichotomies for Multi-Class Classification.
Genetic and Evolutionary Computation Conference (GECCO 2023). Lisbon, Portugal, Jul 15-19, 2023. DOI.
MCML Authors: Marcel Wever, Eyke Hüllermeier


[11]
T. Tornede, A. Tornede, J. Hanselle, F. Mohr, M. Wever and E. Hüllermeier.
Towards Green Automated Machine Learning: Status Quo and Future Directions.
Journal of Artificial Intelligence Research 77 (Jun. 2023). DOI.
MCML Authors: Jonas Hanselle, Marcel Wever, Eyke Hüllermeier


[10]
A. K. Wickert, C. Damke, L. Baumgärtner, E. Hüllermeier and M. Mezini.
UnGoML: Automated Classification of unsafe Usages in Go.
IEEE/ACM 20th International Conference on Mining Software Repositories (MSR 2023). Melbourne, Australia, May 15-16, 2023. FOSS (Free, Open Source Software) Impact Paper Award. DOI.
MCML Authors: Eyke Hüllermeier


[9]
J. Brandt, E. Schede, B. Haddenhorst, V. Bengs, E. Hüllermeier and K. Tierney.
AC-Band: A Combinatorial Bandit-Based Approach to Algorithm Configuration.
37th Conference on Artificial Intelligence (AAAI 2023). Washington, DC, USA, Feb 07-14, 2023. DOI.
MCML Authors: Viktor Bengs, Eyke Hüllermeier


[8]
V. Bengs and E. Hüllermeier.
Multi-armed bandits with censored consumption of resources.
Machine Learning 112.1 (Jan. 2023). DOI.
MCML Authors: Viktor Bengs, Eyke Hüllermeier


[7]
S. Legler, T. Janjic, M. H. Shaker and E. Hüllermeier.
Machine learning for estimating parameters of a convective-scale model: A comparison of neural networks and random forests.
32nd Workshop of Computational Intelligence of the VDI/VDE-Gesellschaft für Mess- und Automatisierungstechnik (GMA). Berlin, Germany, Dec 01-02, 2022. PDF.
MCML Authors: Mohammad Hossein Shaker Ardakani, Eyke Hüllermeier


[6]
V. Bengs, E. Hüllermeier and W. Waegeman.
Pitfalls of Epistemic Uncertainty Quantification through Loss Minimisation.
36th Conference on Neural Information Processing Systems (NeurIPS 2022). New Orleans, LA, USA, Nov 28-Dec 09, 2022. PDF.
MCML Authors: Viktor Bengs, Eyke Hüllermeier


[5]
J. Brandt, V. Bengs, B. Haddenhorst and E. Hüllermeier.
Finding optimal arms in non-stochastic combinatorial bandits with semi-bandit feedback and finite budget.
36th Conference on Neural Information Processing Systems (NeurIPS 2022). New Orleans, LA, USA, Nov 28-Dec 09, 2022. PDF.
MCML Authors: Viktor Bengs, Eyke Hüllermeier


[4]
A. Campagner, J. Lienen, E. Hüllermeier and D. Ciucci.
Scikit-Weak: A Python Library for Weakly Supervised Machine Learning.
International Joint Conference on Rough Sets (IJCRS 2022). Suzhou, China, Nov 11-14, 2022. DOI.
MCML Authors: Eyke Hüllermeier


[3]
E. Schede, J. Brandt, A. Tornede, M. Wever, V. Bengs, E. Hüllermeier and K. Tierney.
A Survey of Methods for Automated Algorithm Configuration.
Journal of Artificial Intelligence Research 75 (Oct. 2022). DOI.
MCML Authors: Marcel Wever, Viktor Bengs, Eyke Hüllermeier


[2]
E. Schede, J. Brandt, A. Tornede, M. Wever, V. Bengs, E. Hüllermeier and K. Tierney.
A Survey of Methods for Automated Algorithm Configuration.
31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence (IJCAI-ECAI 2022). Vienna, Austria, Jul 23-29, 2022. Extended Abstract. DOI.
MCML Authors: Marcel Wever, Viktor Bengs, Eyke Hüllermeier


[1]
V. Nguyen, M. H. Shaker and E. Hüllermeier.
How to measure uncertainty in uncertainty sampling for active learning.
Machine Learning 111.1 (2022). DOI.
MCML Authors: Mohammad Hossein Shaker Ardakani, Eyke Hüllermeier