Building on last year’s training series on Artificial Intelligence in Science Communication, the new programme continues and deepens the engagement with AI. It draws on the experience gained so far and addresses current challenges arising from the growing use of AI-based systems in science communication. The strong response to the previous modules underlined the demand for structured guidance, professional exchange, and practice-oriented reflection in a field that is developing rapidly and increasingly shaping everyday work in science and communication.
The 2026 programme opens with the module “Deepfakes, Bias & Hallucinations: Dealing with AI-generated Content in Science Communication.”
So-called AI slop is spreading rapidly in the digital space, raising new questions about factuality, transparency, and trustworthiness in science communication. While AI-generated texts, images, and videos open up new communicative possibilities, they also carry risks such as hallucinations, biased representations, and manipulations that are difficult to detect. This training addresses these key challenges, focusing on identifying typical AI artefacts and biases, using practical tools for fact-checking, and adopting a reflective approach to deepfakes and misinformation.
Speakers: Thomas Sommerer and Jaro Krieger-Lamina
Date: 18 March 2026, 2:00–4:00 p.m.
Registration
The series will continue in June and autumn 2026, including a module on quality standards for the use of generative AI.