Image: A person with orange hair interacts with a large, abstract mirrored structure composed of green, orange, white, and black squares that reflect their figure; streams of orange binary code flow towards the digital grid. © Yutong Liu & Kingston School of Art / https://betterimagesofai.org

Training series on AI in science communication continues

Following three successful modules in 2025, the OeAD Centre for Citizen Science will once again offer training courses starting in March 2026.
2 min read · 17 December 2025

Building on last year’s training series on Artificial Intelligence in Science Communication, the engagement with AI will be continued and deepened in 2026. The new series draws on the experience gained so far and addresses current challenges arising from the increasing use of AI-based systems in science communication. The strong response to the previous modules highlighted the demand for structured guidance, professional exchange, and practice-oriented reflection in a field that is developing rapidly and increasingly shaping everyday work in science and communication.

The 2026 programme opens with the module “Deepfakes, Bias & Hallucinations: Dealing with AI-generated Content in Science Communication.”

So-called AI slop is increasingly spreading in the digital space, raising new questions around factuality, transparency, and trustworthiness in science communication. While AI-generated texts, images, and videos open up new communicative possibilities, they also entail risks such as hallucinations, biased representations, or manipulations that are difficult to detect. This training addresses these key challenges, focusing on identifying typical AI artefacts and biases, using practical tools for fact-checking, and adopting a reflective approach to deepfakes and misinformation.

Speakers: Thomas Sommerer and Jaro Krieger-Lamina
Date: 18 March 2026, 2:00–4:00 p.m.
Registration

The series will continue in June and autumn 2026, including a module on quality standards for the use of generative AI.
