05.12.2023
MCML at EMNLP 2023
17 Accepted Papers (9 Main, 7 Findings, and 1 Workshop)
Conference on Empirical Methods in Natural Language Processing, Singapore, Dec 06-10, 2023
We are happy to announce that MCML researchers have contributed a total of 17 papers to EMNLP 2023: 9 in the Main track, 7 in the Findings track, and 1 at a workshop. Congrats to our researchers!
Main Track (9 papers)
A Study on Accessing Linguistic Information in Pre-Trained Language Models by Using Prompts.
EMNLP 2023 - Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
What Comes Next? Evaluating Uncertainty in Neural Text Generators Against Human Production Variability.
EMNLP 2023 - Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
Automatic Transcription of Handwritten Old Occitan Language.
EMNLP 2023 - Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
Language Models with Rationality.
EMNLP 2023 - Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
Establishing Trustworthiness: Rethinking Tasks and Model Evaluation.
EMNLP 2023 - Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
GradSim: Gradient-Based Language Grouping for Effective Multilingual Training.
EMNLP 2023 - Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
Counting the Bugs in ChatGPT's Wugs: A Multilingual Investigation into the Morphological Capabilities of a Large Language Model.
EMNLP 2023 - Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
ACTOR: Active Learning with Annotator-specific Classification Heads to Embrace Human Label Variation.
EMNLP 2023 - Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
From Dissonance to Insights: Dissecting Disagreements in Rationale Construction for Case Outcome Classification.
EMNLP 2023 - Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
Findings Track (7 papers)
GlotLID: Language Identification for Low-Resource Languages.
Findings @EMNLP 2023 - Findings of the Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
MEAL: Stable and Active Learning for Few-Shot Prompting.
Findings @EMNLP 2023 - Findings of the Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
Language-Agnostic Bias Detection in Language Models with Bias Probing.
Findings @EMNLP 2023 - Findings of the Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
Mitigating Data Imbalance and Representation Degeneration in Multilingual Machine Translation.
Findings @EMNLP 2023 - Findings of the Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
Crosslingual Transfer Learning for Low-Resource Languages Based on Multilingual Colexification Graphs.
Findings @EMNLP 2023 - Findings of the Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
Subspace Chronicles: How Linguistic Information Emerges, Shifts and Interacts during Language Model Training.
Findings @EMNLP 2023 - Findings of the Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
Unleashing the Multilingual Encoder Potential: Boosting Zero-Shot Performance via Probability Calibration.
Findings @EMNLP 2023 - Findings of the Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.
Workshops (1 paper)
Multilingual Word Embeddings for Low-Resource Languages using Anchors and a Chain of Related Languages.
MRL @EMNLP 2023 - 3rd Workshop on Multi-lingual Representation Learning at the Conference on Empirical Methods in Natural Language Processing. Singapore, Dec 06-10, 2023.