20.07.2024

MCML at ICML 2024

The 41st International Conference on Machine Learning (ICML 2024). Vienna, Austria, July 21-27, 2024.

We are happy to announce that MCML researchers are represented with 29 papers at ICML 2024:

K. Ahn, A. Jadbabaie and S. Sra.
How to Escape Sharp Minima with Random Perturbations.
URL.
K. Bouchiat, A. Immer, H. Yèche, G. Rätsch and V. Fortuin.
Improving Neural Additive Models with Bayesian Principles.
URL.
X. Cheng, Y. Chen and S. Sra.
Transformers Implement Functional Gradient Descent to Learn Non-Linear Functions In Context.
URL.
T. Decker, A. R. Bhattarai, J. Gu, V. Tresp and F. Buettner.
Provably Better Explanations with Optimized Aggregation of Feature Attributions.
URL.
S. Eckman, B. Plank and F. Kreuter.
Position: Insights from Survey Methodology can Improve Training Data.
URL.
U. Fischer Abaigar, C. Kern and F. Kreuter.
The Missing Link: Allocation Performance in Causal Machine Learning. (Workshop paper).
arXiv.
D. Frauen, V. Melnychuk and S. Feuerriegel.
Fair Off-Policy Learning from Observational Data.
URL.
D. Fuchsgruber, T. Wollschläger, B. Charpentier, A. Oroz and S. Günnemann.
Uncertainty for Active Learning on Graphs.
URL.
F. Fumagalli, M. Muschalik, P. Kolpaczki, E. Hüllermeier and B. Hammer.
KernelSHAP-IQ: Weighted Least Square Optimization for Shapley Interactions.
URL.
K. Gatmiry, Z. Li, S. J. Reddi and S. Jegelka.
Simplicity Bias via Global Convergence of Sharpness Minimization.
URL.
K. Gatmiry, N. Saunshi, S. J. Reddi, S. Jegelka and S. Kumar.
Can Looped Transformers Learn to Implement Multi-step Gradient Descent for In-context Learning?
URL.
M. Herrmann, F. J. D. Lange, K. Eggensperger, G. Casalicchio, M. Wever, M. Feurer, D. Rügamer, E. Hüllermeier, A.-L. Boulesteix and B. Bischl.
Position: Why We Must Rethink Empirical Research in Machine Learning.
URL.
P. Holl and N. Thuerey.
Φ-Flow: Differentiable Simulations for PyTorch, TensorFlow and Jax.
URL.
M. Juergens, N. Meinert, V. Bengs, E. Hüllermeier and W. Waegeman.
Is Epistemic Uncertainty Faithfully Represented by Evidential Deep Learning Methods?
URL.
G. Kaissis, S. Kolek, B. Balle, J. Hayes and D. Rückert.
Beyond the Calibration Point: Mechanism Comparison in Differential Privacy.
URL.
M. Lindauer, F. Karl, A. Klier, J. Moosbauer, A. Tornede, A. C. Mueller, F. Hutter, M. Feurer and B. Bischl.
Position: A Call to Action for a Human-Centered AutoML Paradigm.
URL.
K. Lin and R. Heckel.
Robustness of Deep Learning for Accelerated MRI: Benefits of Diverse Training Data.
URL.
C. Morris, F. Frasca, N. Dym, H. Maron, I. I. Ceylan, R. Levie, D. Lim, M. M. Bronstein, M. Grohe and S. Jegelka.
Position: Future Directions in the Theory of Graph Machine Learning.
URL.
T. Papamarkou, M. Skoularidou, K. Palla, L. Aitchison, J. Arbel, D. Dunson, M. Filippone, V. Fortuin, P. Hennig, J. M. Hernández-Lobato, A. Hubin, A. Immer, T. Karaletsos, M. E. Khan, A. Kristiadi, Y. Li, S. Mandt, C. Nemeth, M. A. Osborne, T. G. J. Rudner, D. Rügamer, Y. W. Teh, M. Welling, A. G. Wilson and R. Zhang.
Position: Bayesian Deep Learning in the Age of Large-Scale AI.
URL.
D. Rügamer, C. Kolb, T. Weber, L. Kook and T. Nagler.
Generalizing Orthogonalization for Models with Non-Linearities.
URL.
Y. Sale, V. Bengs, M. Caprio and E. Hüllermeier.
Second-Order Uncertainty Quantification: A Distance-Based Approach.
URL.
J. Schweisthal, D. Frauen, M. van der Schaar and S. Feuerriegel.
Meta-Learners for Partially-Identified Treatment Effects Across Multiple Environments.
URL.
Y. Shen, N. Daheim, B. Cong, P. Nickl, G. M. Marconi, C. Bazan, R. Yokota, I. Gurevych, D. Cremers, M. E. Khan and T. Möllenhoff.
Variational Learning is Effective for Large Deep Networks.
URL. GitHub.
E. Sommer, L. Wimmer, T. Papamarkou, L. Bothmann, B. Bischl and D. Rügamer.
Connecting the Dots: Is Mode Connectedness the Key to Feasible Sample-Based Inference in Bayesian Neural Networks?
URL.
Y. Sun, J. Liu, Z. Wu, Z. Ding, Y. Ma, T. Seidl and V. Tresp.
SA-DQAS: Self-attention Enhanced Differentiable Quantum Architecture Search. (Workshop paper).
arXiv.
B. Tahmasebi and S. Jegelka.
Sample Complexity Bounds for Estimating Probability Divergences under Invariances.
URL.
B. Tahmasebi, A. Soleymani, D. Bahri, S. Jegelka and P. Jaillet.
A Universal Class of Sharpness-Aware Minimization Algorithms.
URL.
D. Tramontano, Y. Kivva, S. Salehkaleybar, M. Drton and N. Kiyavash.
Causal Effect Identification in LiNGAM Models with Latent Confounders.
URL.
T. Wollschläger, N. Kemper, L. Hetzel, J. Sommer and S. Günnemann.
Expressivity and Generalization: Fragment-Biases for Molecular GNNs.
URL.
