21.06.2024

Call for Papers

Symposium on "Scaling AI Assessments – Tools, Ecosystems, and Business Models"

Our partners at the AI Competence Center LAMARR invite paper submissions for this year's Symposium on "Scaling AI Assessments – Tools, Ecosystems, and Business Models".

Given the growing importance of Trustworthy AI and the challenges of implementing and establishing it in practice, the event is aimed primarily at practitioners from the TIC (testing, inspection, and certification) sector, tech start-ups offering solutions to these challenges, and researchers in the field of Trustworthy AI. Since the symposium will also address the EU AI Act and its legal dimensions, legal experts are cordially invited as well.

#event #research

Related

09.10.2025

Rethinking AI in Public Institutions - Balancing Prediction and Capacity

Unai Fischer Abaigar explores how AI can make public decisions fairer, smarter, and more effective.

08.10.2025

MCML-LAMARR Workshop at University of Bonn

MCML and Lamarr researchers met in Bonn to exchange ideas on NLP, LLM finetuning, and AI ethics.

08.10.2025

Three MCML Members Win Best Paper Award at AutoML 2025

MCML PI Matthias Feurer and Director Bernd Bischl’s paper on overtuning won Best Paper at AutoML 2025, offering insights for robust HPO.

29.09.2025

Machine Learning for Climate Action - With Researcher Kerstin Forster

Kerstin Forster researches how AI can cut emissions, boost renewable energy, and drive corporate sustainability.

26.09.2025

Making Machine Learning More Accessible With AutoML

Matthias Feurer discusses AutoML, hyperparameter optimization, OpenML, and making machine learning more accessible and efficient for researchers.