28.11.2024

Enhancing the integrity of social media with AI

Researcher in focus: Dominik Bär

Dominik Bär is a researcher at the Institute of Artificial Intelligence in Management at LMU, working within the research group of Stefan Feuerriegel. He is pursuing his PhD with a focus on social media analytics.

Hi Dominik! What are you focusing on in your research?

My research explores how AI can enhance the integrity of social media to create a positive impact on society. Specifically, I focus on identifying and mitigating harmful content like hate speech and misinformation, as well as auditing political campaigns. For example, I explore how AI can help reduce online hate speech and study how algorithms influence the spread of political ads. To do so, I combine insights from the social sciences with state-of-the-art AI methods in natural language processing and machine learning.

What measures could help reduce online hate speech?

In a current project, we’re investigating whether AI-generated counterspeech can help reduce online hate speech. To do this, we ran a randomized controlled trial. First, we used a machine learning model to detect hate speech. For each detected post, we generated counterspeech and randomly assigned users to one of five groups. Some received AI-generated counterspeech, others got a general message like “Stop posting hate speech.” We also tested two strategies: one encouraging empathy and the other warning about the consequences of hate speech. A control group received no response. We then measured how effective the counterspeech was by checking if users deleted their hateful posts, posted less hate speech in the future, or used a less toxic tone in later posts.
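The trial design described above, two message sources (AI-generated vs. generic) crossed with two framings (empathy vs. consequences), plus a no-response control, can be sketched as a simple randomized assignment. This is an illustrative sketch only; the arm names and the deterministic seeding scheme below are assumptions for the example, not details from the study:

```python
import random

# Illustrative trial arms (names are assumptions, not from the study):
# 2 message sources x 2 framing strategies, plus a control arm.
ARMS = [
    ("ai", "empathy"),
    ("ai", "consequences"),
    ("generic", "empathy"),
    ("generic", "consequences"),
    ("control", None),
]

def assign_arm(user_id: str, seed: int = 42) -> tuple:
    """Deterministically assign a user to one of the five trial arms.

    Seeding the RNG with the experiment seed plus the user ID makes the
    assignment reproducible: the same user always lands in the same arm.
    """
    rng = random.Random(f"{seed}-{user_id}")
    return rng.choice(ARMS)
```

Keying the random draw on a fixed seed and the user ID is one common way to keep arm assignments stable across repeated runs of an analysis pipeline, so outcome measures (post deletion, future hate speech, toxicity of later posts) can be joined back to the correct treatment group.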

What would you consider the biggest emerging societal threats from social media?

Social media platforms host an abundance of problematic content. In my opinion, one of the most significant threats is the spread of misinformation and how it affects public opinion and behavior. Additionally, the spread of hate speech is problematic since it harms individuals and polarizes societies. I am also concerned about the lack of transparency in how social media platforms distribute content. Especially in political campaigns, this can lead to manipulation and a lack of accountability.

What challenges are there in preventing the spread of harmful content or fake news on social media?

I think there are two main challenges in tackling harmful content: The first challenge is the high prevalence and diversity of harmful content. On the one hand, this makes manual moderation infeasible. On the other hand, AI-based methods must be extremely well-calibrated and adaptable to effectively identify and flag such content without overreach or bias. The second challenge is the lack of transparency from social media platforms. This makes it difficult for researchers to fully understand how harmful content spreads, limiting our ability to develop effective countermeasures.

How do you envision the future of AI and machine learning in the context of social media analytics?

I see the future of AI and machine learning in social media analytics evolving along two dimensions. First, AI will continue to be a powerful research tool, enabling us to process and analyze the massive volumes of data generated on social media platforms. This will allow us to generate deeper insights into user behavior, content trends, and information flow. Second, AI itself will increasingly become a subject of study, especially regarding its influence on social media environments—for instance, in understanding the spread of AI-generated fake news and its broader impact on public discourse.

To wrap things up: What do you do besides your research?

I’m originally from the area around Garmisch-Partenkirchen, so I grew up in the Alps. I love heading out to the mountains, skiing, hiking, mountain biking, and riding my road bike. I also enjoy going to FC Bayern matches at the stadium. I really appreciate the city and the surrounding area and that I can pursue all my hobbies here quite easily.

About Dominik Bär:

Dominik Bär is a PhD student at the Institute of Artificial Intelligence in Management at LMU. His research explores how the use of computational methods can enhance the integrity of social media platforms, with a particular emphasis on combating harmful content like hate speech and misinformation, as well as auditing political campaigns. He approaches these issues from a social science perspective, aiming to inform the public and develop effective countermeasures.

