EVENTS

Our events in the areas of Big Data and Research Innovation include a diverse set of topics such as Future, Strategy, Technology, Applications, and Management.

If you feel that your event or event series should be part of this event calendar, just contact us!

Monday, June 10th, 2024 | 16:15

Informatikkolloquium: Challenges and Threats in Generative AI: Misuse and Exploits

Konrad-Zuse-Hörsaal (Raum B-201), Vogt-Kölln-Straße 30

Generative AI (genAI) is becoming more integrated into our daily lives, raising questions about potential threats within genAI systems and the misuse of their output. In this talk, we will take a closer look at the resulting challenges and security threats associated with generative AI. These fall into two categories: malicious inputs injected into generative models, and computer-generated output that is indistinguishable from human-generated content.

In the first case, specially designed inputs are used to exploit models such as LLMs, to disrupt alignment, or to steal sensitive information. Existing attacks show that the content filters of LLMs can be easily bypassed with specific inputs and that private information can be leaked. We demonstrate that prompt injection attacks are difficult to prevent, and that even full white-box access provides only limited protection. This talk will therefore cover an alternative approach for protecting intellectual property by obfuscating sensitive inputs.

In the second threat scenario, generative models are used to produce fake content that is indistinguishable from human-generated content. This fake content is often used for fraudulent and manipulative purposes; impersonation and realistic fake news are already possible using a variety of techniques. As these models continue to evolve, detecting such fraudulent activity will become increasingly difficult, while the attacks themselves will become easier to automate and will require less expertise. This talk will provide an overview of the current challenges in detecting fake media in human and machine interactions.

The final part of the presentation will deal with the use of generative models in security applications. This includes benchmarking and fixing vulnerable code, as well as understanding the capabilities of these models by investigating their code deobfuscation abilities.

Bio
Lea Schönherr has been a tenure-track faculty member at the CISPA Helmholtz Center for Information Security since 2022. She obtained her PhD from Ruhr-Universität Bochum, Germany, in 2021 and is a recipient of two fellowships, from UbiCrypt (DFG Graduate School) and CASA (DFG Cluster of Excellence). Her research interests are in the area of information security, with a focus on adversarial machine learning and generative models to defend against real-world threats. She is particularly interested in language as an interface to machine learning models and in combining different domains such as audio, text, and images. She has published several papers on threat detection and the defense of speech recognition systems and generative models.

Institution

  • CISPA Helmholtz Center for Information Security

Contact
Universität Hamburg
Adeline Scharfenberg
This email address is protected from spambots. JavaScript must be enabled to display it.