05.06.2022

MCML Researchers With Two Papers at NAACL 2022

Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2022). Seattle, WA, USA, 10.06.2022–15.06.2022

We are happy to announce that MCML researchers are represented with two papers at NAACL 2022. Congratulations to our researchers!

Findings Track (2 papers)

V. Steinborn, P. Dufter, H. Jabbar and H. Schütze.
An Information-Theoretic Approach and Dataset for Probing Gender Stereotypes in Multilingual Masked Language Models.
NAACL 2022 - Findings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics. Seattle, WA, USA, Jun 10-15, 2022. DOI
Abstract

Bias research in NLP is a rapidly growing and developing field. Similar to CrowS-Pairs (Nangia et al., 2020), we assess gender bias in masked language models (MLMs) by studying pairs of sentences with gender-swapped person references. Most bias research focuses on, and often is specific to, English. Using a novel methodology for creating sentence pairs that is applicable across languages, we create, based on CrowS-Pairs, a multilingual dataset for English, Finnish, German, Indonesian and Thai. Additionally, we propose SJSD, a new bias measure based on Jensen–Shannon divergence, which we argue retains more information from the model output probabilities than other previously proposed bias measures for MLMs. Using multilingual MLMs, we find that SJSD diagnoses the same systematically biased behavior for non-English languages that previous studies have found for monolingual English pre-trained MLMs. SJSD outperforms the CrowS-Pairs measure, which struggles to find such biases for smaller non-English datasets.
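
The core quantity behind the SJSD measure, the Jensen–Shannon divergence, can be sketched as follows. This is a minimal illustration of plain JSD between two MLM output distributions for the same masked position in a gender-swapped sentence pair; the probability values are invented for illustration, and the paper's actual SJSD definition may differ in how it aggregates over tokens and sentences.

```python
import math

def kl_divergence(p, q):
    # KL(p || q) in bits, over aligned discrete distributions.
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon_divergence(p, q):
    # JSD(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m the mixture.
    # Symmetric, and bounded in [0, 1] when using log base 2.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Hypothetical MLM probabilities for three candidate fill-in tokens at the
# masked position, for two sentences differing only in the gendered reference.
p_he = [0.6, 0.3, 0.1]
p_she = [0.2, 0.5, 0.3]

divergence = jensen_shannon_divergence(p_he, p_she)
```

A divergence of zero means the model treats both variants identically; larger values indicate the gender swap shifts the model's predictions, which is the signal a bias measure of this kind aggregates.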

MCML Authors
Hinrich Schütze

Prof. Dr.

Computational Linguistics


M. Zhao, F. Mi, Y. Wang, M. Li, X. Jiang, Q. Liu and H. Schütze.
LMTurk: Few-Shot Learners as Crowdsourcing Workers in a Language-Model-as-a-Service Framework.
NAACL 2022 - Findings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics. Seattle, WA, USA, Jun 10-15, 2022. DOI
Abstract

Vast efforts have been devoted to creating high-performance few-shot learners, i.e., large-scale pretrained language models (PLMs) that perform well with little downstream task training data. Training PLMs has incurred significant cost, but utilizing the few-shot learners is still challenging due to their enormous size. This work focuses on a crucial question: how can we make effective use of these few-shot learners? We propose LMTurk, a novel approach that treats few-shot learners as crowdsourcing workers. The rationale is that crowdsourcing workers are in fact few-shot learners: they are shown a few illustrative examples to learn about a task and then start annotating. LMTurk employs few-shot learners built upon PLMs as workers. We show that the resulting annotations can be used to train models that solve the task well and are small enough to be deployed in practical scenarios. Active learning is integrated into LMTurk to reduce the number of queries made to PLMs, minimizing the computational cost of running PLM inference passes. Altogether, LMTurk is an important step towards making effective use of current PLMs.
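
The overall loop the abstract describes, a small model querying a PLM "worker" only on its most uncertain examples, can be sketched in a few lines. Everything here is a toy stand-in: `plm_annotate` is a hypothetical keyword rule in place of a prompted few-shot PLM, and the "small model" is a word-vote classifier, not the deployable models the paper trains.

```python
def plm_annotate(text):
    # Stand-in for a few-shot PLM "worker" queried as a service.
    # Hypothetical toy rule; LMTurk uses prompted large PLMs instead.
    return 1 if "great" in text else 0

def train_small_model(labeled):
    # Toy small model: per-word positive-vote fractions from annotations.
    votes = {}
    for text, label in labeled:
        for w in text.split():
            pos, tot = votes.get(w, (0, 0))
            votes[w] = (pos + label, tot + 1)
    def predict_proba(text):
        probs = [votes[w][0] / votes[w][1] for w in text.split() if w in votes]
        return sum(probs) / len(probs) if probs else 0.5
    return predict_proba

unlabeled = ["great movie", "bad plot", "great acting", "dull and bad", "fine"]
labeled = []
for _ in range(3):  # three active-learning rounds
    model = train_small_model(labeled) if labeled else (lambda t: 0.5)
    # Query the PLM worker only on the most uncertain remaining example.
    pick = max(unlabeled, key=lambda t: -abs(model(t) - 0.5))
    unlabeled.remove(pick)
    labeled.append((pick, plm_annotate(pick)))
```

The point of the active-learning step is the budget: only the examples the small model cannot yet decide cost a PLM inference pass, rather than the whole unlabeled pool.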

MCML Authors
Hinrich Schütze

Prof. Dr.

Computational Linguistics



Related

25.06.2025

When Clinical Expertise Meets AI Innovation – With Michael Ingrisch

The new research film features Michael Ingrisch, who shows how AI and clinical expertise can solve real challenges in radiology together.

23.06.2025

Autonomous Driving: From Infinite Possibilities to Safe Decisions – With Matthias Althoff

The new research film features Matthias Althoff explaining how his team verifies autonomous vehicle safety using EDGAR and rigorous testing.

20.06.2025

ERC Advanced Grant for Massimo Fornasier

Massimo Fornasier was awarded an ERC Advanced Grant to develop advanced algorithms for solving complex nonconvex optimization problems.

18.06.2025

ERC Advanced Grant for Albrecht Schmidt

Albrecht Schmidt receives an ERC Advanced Grant for research on personalized generative AI to support memory, planning, and creativity.

11.06.2025

Better Data, Smarter AI: Why Quality Matters – With Frauke Kreuter

In our new research film, Frauke Kreuter explains how data quality shapes fair, reliable, and socially responsible AI systems.