14.06.2024
Ten Accepted Papers (3 Main, 2 Findings, and 5 Workshop Papers)
Annual Conference of the North American Chapter of the Association for Computational Linguistics, Mexico City, Mexico, Jun 16-21, 2024
We are happy to announce that MCML researchers have contributed a total of 10 papers to NAACL 2024: 3 Main, 2 Findings, and 5 Workshop papers. Congrats to our researchers!
Main Track (3 papers)
zrLLM: Zero-Shot Relational Learning on Temporal Knowledge Graphs with Large Language Models.
NAACL 2024 - Annual Conference of the North American Chapter of the Association for Computational Linguistics. Mexico City, Mexico, Jun 16-21, 2024.
Divergent Token Metrics: Measuring degradation to prune away LLM components -- and optimize quantization.
NAACL 2024 - Annual Conference of the North American Chapter of the Association for Computational Linguistics. Mexico City, Mexico, Jun 16-21, 2024.
Rehearsal-Free Modular and Compositional Continual Learning for Language Models.
NAACL 2024 - Annual Conference of the North American Chapter of the Association for Computational Linguistics. Mexico City, Mexico, Jun 16-21, 2024.
Findings Track (2 papers)
GenTKG: Generative Forecasting on Temporal Knowledge Graph with Large Language Models.
Findings @NAACL 2024 - Findings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics. Mexico City, Mexico, Jun 16-21, 2024.
OFA: A Framework of Initializing Unseen Subword Embeddings for Efficient Large-scale Multilingual Continued Pretraining.
Findings @NAACL 2024 - Findings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics. Mexico City, Mexico, Jun 16-21, 2024.
Workshops (5 papers)
Leveraging (Sentence) Transformer Models with Contrastive Learning for Identifying Machine-Generated Text.
SemEval @NAACL 2024 - 18th International Workshop on Semantic Evaluation at the Annual Conference of the North American Chapter of the Association for Computational Linguistics. Mexico City, Mexico, Jun 16-21, 2024.
TOPCAT: Topic-Oriented Protocol for Content Analysis of Text – A Preliminary Study.
NLP+CSS @NAACL 2024 - 6th Workshop on Natural Language Processing and Computational Social Science at the Annual Conference of the North American Chapter of the Association for Computational Linguistics. Mexico City, Mexico, Jun 16-21, 2024.
MoSECroT: Model Stitching with Static Word Embeddings for Crosslingual Zero-shot Transfer.
Insights from Negative Results @NAACL 2024 - 5th Workshop on Insights from Negative Results in NLP at the Annual Conference of the North American Chapter of the Association for Computational Linguistics. Mexico City, Mexico, Jun 16-21, 2024.
A Study of the Class Imbalance Problem in Abusive Language Detection.
WOAH @NAACL 2024 - 8th Workshop on Online Abuse and Harms at the Annual Conference of the North American Chapter of the Association for Computational Linguistics. Mexico City, Mexico, Jun 16-21, 2024.
MaiNLP at SemEval-2024 Task 1: Analyzing Source Language Selection in Cross-Lingual Textual Relatedness.
SemEval @NAACL 2024 - 18th International Workshop on Semantic Evaluation at the Annual Conference of the North American Chapter of the Association for Computational Linguistics. Mexico City, Mexico, Jun 16-21, 2024.