Image: An individual with orange hair interacts with an abstract digital mirrored structure composed of green, orange, white, and black squares that reflect the individual's figure; streams of binary code flow toward the grid. © Yutong Liu & Kingston School of Art / https://betterimagesofai.org

Trust Under Pressure: Training on the Impact of Generative AI on Science Communication

On March 18, the OeAD Center for Citizen Science held an online training session in which two experts outlined how AI slop and deepfakes are transforming the virtual space.
2 min read · 23 March 2026

The two researchers, Jaro Krieger-Lamina and Thomas Sommerer, addressed the impacts of artificial intelligence in their presentations. They offered insights from media studies and technology assessment, reflecting in particular on the role of science communication and mediation.

Thomas Sommerer explained the typical characteristics of so-called "AI slop" (low-quality AI-generated content), such as visual exaggerations or culturally shaped stereotypes. Such content spreads widely online partly because human-produced media require more time and resources to create. The simple and rapid use of AI tools, by contrast, favors the mass dissemination of disinformation and low-quality content.

In this context, Sommerer emphasized the importance of highlighting the human aspects of research more strongly in science communication. Instead of standardized media releases and professionally staged images, science communication should be more firmly rooted in everyday life and designed authentically.

Jaro Krieger-Lamina shed light on the possibilities and limitations of tools for detecting AI-generated content. Given the rapid pace of technological advances, such content is hardly distinguishable from authentic media with the naked eye. Even technical instruments only provide probability values and are of limited use for reliable classification. A system of quality seals for non-AI-generated content is theoretically conceivable but difficult to implement in practice, not least because of the speed of technological development.

Krieger-Lamina therefore recommended using generative AI only in specific contexts, such as pattern recognition, strictly regulated systems like coding, or as content inspiration. In these contexts, communicators should consciously define standards to maintain the trustworthiness of their content.

This module is part of a series that was conducted by the OeAD Center for Citizen Science last year on the topic of artificial intelligence in science communication. The next training session will take place on June 10, 2026, from 14:00 to 16:00. The experts Amrei Bahr and Matthias Begenat will present on the topic "Using AI responsibly: (new) quality standards for science communication." Register here.
