AI Labels in Practice: Do They Protect Us from Image-Based Misinformation?

CASA researchers are conducting studies to examine the expectations, benefits, and risks of AI labels from the users' perspective.

Close-up of a smartphone screen showing a button that says “powered using AI”

Generated using AI. Copyright: HGI, stock.adobe.com: chinnarach

A giant AI-generated octopus lies on the beach, with several people standing next to it; a text box reads 'Warning: Misinformation'

In the study, participants had to determine whether a statement was true or false based on visual cues. Here, an AI-generated image is paired with a false statement. Copyright: www.snopes.com/fact-check/giant-octopus-indonesian-coast/ or Instagram User @best_of_ai

A man in a red sweater sits on a sofa, working on his laptop. He is researcher Jonas Ricker.

Jonas Ricker's research examines whether labeling AI-generated images helps users assess the accuracy of information. Copyright: Michael Schwettmann

Just a few years ago, creating content from scratch with generative artificial intelligence (GenAI) required specialized expertise. Today, the technology is far more accessible: a short text prompt is typically all it takes to generate new images, videos, or audio. Legislators and platform providers have long been aware that this development does not have only positive effects on our digital society. One possible countermeasure: special labels that identify images as AI-generated.

Researchers from the Faculty of Computer Science at Ruhr University Bochum and the CISPA Helmholtz Center for Information Security, including Jonas Ricker, PhD, a member of the Horst Görtz Institute for IT Security, and Sandra Höltervennhoff (formerly HGI/CASA, now CISPA), have explored this idea from the user’s perspective. In their paper “That’s another doom I haven’t thought about”: A User Study on AI Labels as a Safeguard Against Image-Based Misinformation, presented at the ACM CHI 2026 conference in Barcelona, Spain, they examined how users perceive and interact with AI labels.

A Focus on People

While there are high hopes that such labels will mitigate the societal risks of GenAI – including the spread of misinformation – they primarily serve the purpose of transparency. They do not indicate whether content is factual or fictional, but merely whether it was created using AI. For users to handle such information appropriately, it is crucial that their expectations align with how the label actually works.

Based on these considerations, the researchers examined users’ expectations and perceptions of AI labels, as well as the impact of mislabeling on the user experience. To this end, they conducted two studies.

True or False? Putting AI Labels to the Test

While participants in the first study initially found AI labels helpful, this perception changed once they put the labels to practical use. In the second, main study, which included a total of 1,342 participants, real and AI-generated images were each paired with a statement that was either true or false. The participants’ task was to judge the accuracy of each statement. The examples were taken from fact-checking websites.

“The labels did not prompt participants to focus more on the accuracy of the claims. Nor did they influence the confidence with which participants made their judgments,” explains Jonas Ricker. Instead, participants tended to rely on the presence or absence of a label when making their decisions. This had an unintended side effect: true information accompanied by AI-generated images was often judged to be false. At the same time, confidence in claims illustrated with real – but out-of-context – photos increased.

Risks Associated with Mislabeling

Another issue the researchers examined is so-called mislabeling. Whether labels are assigned by a platform’s users, derived from image metadata, or applied by software that checks all images before upload: errors can occur in any of these systems, and they would seriously undermine basic trust in the labeling process.

The study’s findings underscore that AI labels must be used with caution in practice.

 

Original publication:
Sandra Höltervennhoff, Jonas Ricker, Maike M. Raphael, Charlotte Schwedes, Rebecca Weil, Asja Fischer, Thorsten Holz, Lea Schönherr, and Sascha Fahl. 2026. "That’s another doom I haven’t thought about": A User Study on AI Labels as a Safeguard Against Image-Based Misinformation. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI ’26), April 13–17, 2026, Barcelona, Spain. ACM, New York, NY, USA, 32 pages. https://doi.org/10.1145/3772318.3791006

The publication received an Honorable Mention Award at CHI’26.

 

Press Contact:
Jonas Ricker, jonas.ricker(at)rub.de


Infobox: Interdisciplinarity as a guiding principle at CASA

The idea for the paper originated as part of a cross-disciplinary project at the CASA International Graduate School.

Interdisciplinary work is a fundamental principle at CASA and a hallmark of research in Bochum. For this reason, all CASA PhD students spend at least six weeks in the research group of another CASA Principal Investigator. During this time, they work on an interdisciplinary research project that advances their dissertation, drawing on the expertise of the respective Principal Investigator.