
11.08.2025


Two Accepted Papers

34th USENIX Security Symposium, Seattle, WA, USA, Aug 13-15, 2025

We are happy to announce that MCML researchers have contributed a total of two papers to USENIX-Security 2025. Congratulations to our researchers!

Main Track (2 papers)

T. Benoit • Y. Wang • M. Dannehl • J. Kinder
BLens: Contrastive Captioning of Binary Functions using Ensemble Embedding.
USENIX-Security 2025 - 34th USENIX Security Symposium. Seattle, WA, USA, Aug 13-15, 2025.

M. Windl • O. Akgul • N. Malkin • L. F. Cranor
Privacy Solution or Menace? Investigating Perceptions of Radio-Frequency Sensing.
USENIX-Security 2025 - 34th USENIX Security Symposium. Seattle, WA, USA, Aug 13-15, 2025.

#research #top-tier-work #kinder #schmidt

Related


08.12.2025

Tom Sterkenburg Wins Karl-Heinz Hoffmann Prize of the Bavarian Academy of Sciences

MCML JRG Leader Tom Sterkenburg receives the Karl-Heinz Hoffmann Prize of the BAdW for his interdisciplinary research.


04.12.2025

World’s First Complete 3D Model of All Buildings Released

Xiaoxiang Zhu’s team releases GlobalBuildingAtlas, a high-resolution 3D map of 2.75 billion buildings for advanced urban and climate analysis.


04.12.2025

When to Say "I’m Not Sure": Making Language Models More Self-Aware

ICLR 2025 research by the groups of David Rügamer and Bernd Bischl introduces methods to make LLMs more reliable by expressing uncertainty.


28.11.2025

MCML at NeurIPS 2025

MCML researchers are represented with 47 papers at NeurIPS 2025 (38 in the Main Track and 9 in Workshops).


27.11.2025

Seeing the Bigger Picture – One Detail at a Time

FLAIR, introduced by Zeynep Akata’s group at CVPR 2025, brings fine-grained, text-guided detail recognition to vision-language models.
