
New Findings from AI Research: Humans Can Hardly Recognize AI-Generated Media

International study reveals challenges in the detection of AI-generated media by humans.

Jonas Ricker conducts research on AI-generated media.

Artificial intelligence can create images that humans cannot distinguish from photographs.


What do Barack Obama, Taylor Swift, and the Pope have in common? There are artificially generated media of them, as there are of many other public figures. These include text, audio, and images, and many people perceive them as real, as shown by a representative international study conducted by researchers from Ruhr University Bochum, the CISPA Helmholtz Center for Information Security, Leibniz University Hannover, and TU Berlin. The findings were presented by researchers Joel Frank, Franziska Herbert, Jonas Ricker, Lea Schönherr, Thorsten Eisenhofer, Asja Fischer, Markus Dürmuth, and Thorsten Holz at the 45th IEEE Symposium on Security and Privacy, held from May 20 to 23, 2024, in San Francisco, California.

Difficulties in the detection of AI-generated media
"Human perception of AI-generated media has, until now, only been a matter of speculation," explains CASA PhD researcher Jonas Ricker. For this reason, the research team conducted a representative online survey on AI-generated media such as images, text, and audio files with over 3,000 participants from Germany, the United States, and China. The result: across all media types and countries, participants predominantly classified AI-generated media as human-made. "We found that only a few factors influence how well people recognize AI-generated media. Differences across age groups and backgrounds such as education, political orientation, or media literacy are not very pronounced," says Ricker.

AI-generated media as a threat for democracy
This could become a danger for a society in which artificial intelligence has become indispensable, warns Thorsten Holz from CISPA. "Artificially generated content can be abused in various ways. This year we have important elections, such as the European Parliament elections and the presidential election in the United States: AI-generated media can easily be used for political propaganda. I see this as a major threat to our democracy." Against this backdrop, the scientists argue, it is important not only to keep automated detection of such media at the forefront of research, but also to continue strengthening people's media literacy in this area.

Diversity of AI-generated media in the international study
The quantitative study was conducted as an online survey between June 2022 and September 2022 in China, Germany, and the United States. Participants were randomly assigned to one of the three media groups, "text," "image," or "audio," and viewed 50% real and 50% AI-generated media. In addition, socio-biographical data, knowledge about AI-generated media, and factors such as media literacy, holistic thinking, general trust, cognitive reflection, and political orientation were collected. After data cleaning, 2,609 records remained (822 USA, 875 Germany, 922 China), which were included in the analysis.
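The assignment procedure described above can be sketched in a few lines of Python. This is purely illustrative: the function and variable names are our own, not the study's actual code, and the real survey involved additional controls.

```python
import random

MEDIA_GROUPS = ["text", "image", "audio"]

def assign_participant(real_pool, generated_pool, n_stimuli, rng=random):
    """Randomly assign a participant to one media group and build a
    stimulus set that is half real and half AI-generated.

    real_pool / generated_pool: dicts mapping each media group to a
    list of available stimuli (illustrative structure, not the study's).
    """
    group = rng.choice(MEDIA_GROUPS)          # random group assignment
    half = n_stimuli // 2                     # 50% real, 50% generated
    stimuli = rng.sample(real_pool[group], half) + \
              rng.sample(generated_pool[group], half)
    rng.shuffle(stimuli)                      # present in random order
    return group, stimuli
```

Balancing real and generated items at exactly 50% means that chance-level performance corresponds to 50% accuracy, which makes the participants' classification rates directly interpretable.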

For the study, the researchers collected an extensive set of human- and machine-generated media in the areas of audio, images, and text. Speech samples were generated with a text-to-speech pipeline based on Tacotron 2 and HiFi-GAN. Drawing on prior research on synthetic faces, the researchers selected 16 pairs of real images and counterparts generated with StyleGAN2. AI-generated articles were produced with OpenAI's GPT-3 model (Davinci), while the real articles came from National Public Radio (USA), Tagesschau (Germany), and China Central Television (China).

Starting Points for Further Research
The results of the study provide important takeaways for cybersecurity research: "Criminals could, for example, use AI-generated media for highly targeted social engineering attacks. We need new defense mechanisms in this area," says Jonas Ricker. To this end, the researchers are planning laboratory studies in which participants will be asked in more detail how they distinguish real from AI-generated media. Another step, the scientists explain in their paper, is the further development of technical measures for automated fact-checking.

Chair for Machine Learning
Faculty of Computer Science
Ruhr University Bochum
Universitätsstr. 150
44801 Bochum, Germany
0234 / 3223486

General note: Where gender-specific attributes are used, we include everyone who identifies with that gender, regardless of their biological sex.