
Events

Monday, June 10th, 2024 | 16:15

Informatikkolloquium: Challenges and Threats in Generative AI: Misuse and Exploits

Konrad-Zuse-Hörsaal (Raum B-201), Vogt-Kölln-Straße 30

Generative AI (genAI) is becoming more integrated into our daily lives, raising questions about potential threats within genAI systems and the misuse of their output. In this talk, we will take a closer look at the resulting challenges and security threats associated with generative AI. These fall into two categories: malicious inputs injected into generative models, and machine-generated output that is indistinguishable from human-generated content.

In the first case, specially crafted inputs are used to exploit models such as LLMs, to disrupt alignment, or to steal sensitive information. Existing attacks show that the content filters of LLMs can easily be bypassed with specific inputs and that private information can be leaked. We demonstrate that prompt injection attacks are difficult to prevent even with full white-box access, which offers only limited protection. This talk will therefore also cover an alternative for protecting intellectual property by obfuscating sensitive inputs.

In the second threat scenario, generative models are utilized to produce fake content that is impossible to distinguish from human-generated content. This fake content is often used for fraudulent and manipulative purposes, and impersonation and realistic fake news are already possible using a variety of techniques. As these models continue to evolve, detecting these fraudulent activities will become increasingly difficult, while the attacks themselves will become easier to automate and require less expertise. This talk will provide an overview of the current challenges we are facing in detecting fake media in human and machine interactions.

The final part of the presentation will deal with the use of generative models in security applications. This includes benchmarking and fixing vulnerable code, as well as understanding the capabilities of these models by investigating their code deobfuscation abilities.

Bio
Lea Schönherr has been a tenure-track faculty member at the CISPA Helmholtz Center for Information Security since 2022. She obtained her PhD from Ruhr-Universität Bochum, Germany, in 2021 and is a recipient of two fellowships, from UbiCrypt (DFG Graduate School) and CASA (DFG Cluster of Excellence). Her research interests lie in information security, with a focus on adversarial machine learning and generative models to defend against real-world threats. She is particularly interested in language as an interface to machine learning models and in combining domains such as audio, text, and images. She has published several papers on threat detection and the defense of speech recognition systems and generative models.

Institution

  • CISPA Helmholtz Center for Information Security
Monday, May 27th, 2024 | 16:15

Informatikkolloquium: Designing End-to-End Privacy-Friendly and Deployable Systems

Konrad-Zuse-Hörsaal (Raum B-201), Vogt-Kölln-Straße 30

Dr. Wouter Lueks, CISPA

Digital technology creates risks to people's privacy in ways that did not exist before. I design end-to-end private systems to mitigate these real-world privacy risks. In this talk I will discuss my designs for two applications. These applications highlight key aspects of my work: I analyse security, privacy, and deployment requirements; and address these requirements by designing new cryptographic primitives and system architectures.

In the first part of this talk, I will present DatashareNetwork, a document search system that enables investigative journalists to locate relevant documents for their investigations. DatashareNetwork combines a novel multi-set private set intersection primitive with anonymous communication and authentication systems to create a decentralised and privacy-friendly document search system. In the second part of this talk, I will give an overview of my recent work on designing privacy-friendly systems for humanitarian aid distribution. In collaboration with the International Committee of the Red Cross (ICRC), we designed systems for the registration and distribution of humanitarian aid that meet the ICRC's requirements while providing strong privacy protection.

Bio
Wouter Lueks is a tenure-track faculty member at the CISPA Helmholtz Center for Information Security in Saarbrücken, Germany. Before that he was a postdoctoral researcher at EPFL in Lausanne, Switzerland where he worked with Prof. Carmela Troncoso. He is interested in solving real-world problems by designing end-to-end privacy-friendly systems. To do so he combines privacy, applied cryptography, and systems research. His work has real-world impact. For instance, his designs for privacy-friendly contact tracing have been deployed in millions of phones around the world, and his secure document search system is being deployed by a large organization for investigative journalists. 

Institution

  • CISPA Helmholtz Center for Information Security
Monday, July 8th, 2024 | 16:15

Informatikkolloquium: Intellectics: The Science of AI

Konrad-Zuse-Hörsaal (Raum B-201), Vogt-Kölln-Straße 30

Starting from a formalization of the main research goals of artificial intelligence (AI) using information and probability theory, the talk will show how our research on (dynamic and causal) probabilistic relational models, combined with pretrained embedding-based models, can contribute to the design of agents that (i) can handle non-trivial task descriptions and (ii) can appropriately interact with humans (and other agents) in so-called social mechanisms. With social mechanisms as a central topic of humanities-centered AI, the talk also tries to shed light on where and how AI regulation can be sensibly applied in the future.

Bio

Ralf Möller is Full Professor of Artificial Intelligence in Humanities and heads the Institute of Humanities-Centered AI (CHAI) at Universität Hamburg. His main research area is artificial intelligence, in particular probabilistic relational modeling techniques and natural language technologies for information systems, as well as machine learning and data mining for the decision making of agents in social mechanisms. Ralf Möller is co-speaker of the Section for Artificial Intelligence of the German Informatics Society. He is also an affiliated professor at DFKI zu Lübeck, a branch of the Deutsches Forschungszentrum für Künstliche Intelligenz, which has several sites in Germany. DFKI is responsible for the technology transfer of AI research results into industry and society.

Before joining Universität Hamburg in 2024, Ralf Möller was Full Professor of Computer Science and head of the Institute of Information Systems at Universität zu Lübeck. In Lübeck he also headed the research department Stochastic Relational AI in Healthcare at DFKI. Earlier in his career, Ralf Möller was Associate Professor of Computer Science at Hamburg University of Technology from 2003 to 2014, and from 2001 to 2003 he was Professor at the University of Applied Sciences in Wedel, Germany. In 1996 he received the degree Dr. rer. nat. from the University of Hamburg, where he also completed his Habilitation in 2001.

Professor Möller has co-organized several national and international workshops on humanities-centered AI as well as on description logics, and was co-organizer of the European Lisp Symposium 2011. In 2019, he co-chaired the organization of the International Conference on Big Knowledge (ICBK19) in Beijing, and he co-organized the conference "Artificial Intelligence" (KI2021) in Berlin with his colleagues Stefan Edelkamp and Elmar Rueckert. Prof. Möller was an Associate Editor of the Journal of Knowledge and Information Systems, a member of the Editorial Board of the Journal on Big Data Research, and a reviewer for Mathematical Reviews/MathSciNet.

Institution

  • Universität Hamburg
Monday, June 17th, 2024 | 16:15

Informatikkolloquium: TBD

Konrad-Zuse-Hörsaal (Raum B-201), Vogt-Kölln-Straße 30

Xiao Zhang, PhD, CISPA

Institution

  • CISPA Helmholtz Center for Information Security
Thursday, July 18th, 2024 | 15:00

Informatikkolloquium: Towards Reliable Machine Learning Models for Code

Informatikum, Room D-125, Vogt-Kölln-Straße 30

Machine learning (ML) models trained on code are increasingly integrated into various software engineering tasks. While they generally demonstrate promising performance, many aspects of their capabilities remain unclear. Specifically, there is a lack of understanding regarding what these models learn, why they learn it, how they operate, and when they produce erroneous outputs.

In this talk, I will present findings from a series of studies that (i) examine the abilities of these models to complement human developers, (ii) explore the syntax and representation learning capabilities of ML models designed for software maintenance tasks, and (iii) investigate the patterns of bugs these models exhibit. Additionally, I will discuss a novel self-refinement approach aimed at enhancing the reliability of code generated by Large Language Models (LLMs). This method focuses on reducing the occurrence of bugs before execution, autonomously and without the need for human intervention or predefined test cases.

Bio
Foutse Khomh is a Full Professor of Software Engineering at Polytechnique Montréal, a Tier 1 Canada Research Chair on Trustworthy Intelligent Software Systems, a Canada CIFAR AI Chair on Trustworthy Machine Learning Software Systems, an NSERC Arthur B. McDonald Fellow, an Honoris Genius Prize Laureate, and an FRQ-IVADO Research Chair on Software Quality Assurance for Machine Learning Applications. He received a Ph.D. in Software Engineering from the University of Montreal in 2011, with the Award of Excellence. He also received a CS-Can/Info-Can Outstanding Young Computer Science Researcher Prize for 2019. His research interests include software maintenance and evolution, machine learning systems engineering, cloud engineering, and dependable and trustworthy ML/AI. His work has received four ten-year Most Influential Paper (MIP) Awards, six Best/Distinguished Paper Awards at major conferences, and two Best Journal Paper of the Year Awards. He initiated and co-organized the Software Engineering for Machine Learning Applications (SEMLA) symposium and the RELENG (Release Engineering) workshop series. He is co-founder of the NSERC CREATE SE4AI: A Training Program on the Development, Deployment, and Servicing of Artificial Intelligence-based Software Systems, and one of the Principal Investigators of the DEpendable Explainable Learning (DEEL) project. He is also a co-founder of Quebec's initiative on Trustworthy AI (Confiance IA Quebec) and Scientific co-director of the Institut de Valorisation des Données (IVADO). He is on the editorial board of multiple international software engineering journals (e.g., IEEE Software, EMSE, SQJ, JSEP) and is a Senior Member of IEEE.

Institution

  • CISPA Helmholtz Center for Information Security
Tuesday, July 9th, 2024 | 18:15 - 19:45

Public Lecture Series: Taming the Machines. Frontier AI Regulation: from Trustworthiness to Sustainability

UHH, Main Building, West Wing, Edmund-Siemers-Allee 1, Room 221

Taming the Machines — Horizons of Artificial Intelligence. The Ethics in Information Technology Public Lecture Series

This summer's "Taming the Machines" lecture series sheds light on the ethical, political, legal, and societal dimensions of Artificial Intelligence (AI).
This lecture series brings together perspectives from ethics, politics, law, geography, and media studies to assess the potential for preserving and developing human values in the design, dissemination, and application of AI technologies. How does AI challenge our most fundamental social, political, and economic institutions? How can we bolster (or even improve) them in times of technological disruption? What regulations are needed to render AI environments fairer and more transparent? What needs to be done to make them more sustainable? In what sense could (and even should) we hold AI accountable?
To explore these and other related questions, this public lecture series invites distinguished international researchers to present and discuss their work. For the latest updates and details on how to attend the lectures, please visit http://uhh.de/inf-eit.

Prof. Dr. Philipp Hacker, European University Viadrina, Frankfurt (Oder), D
 
Current AI regulation in the EU and globally focuses on trustworthiness and accountability, as seen in the AI Act and AI liability instruments. Yet it overlooks a critical aspect: environmental sustainability. This talk addresses this gap by examining the ICT sector's significant environmental impact. AI technologies, particularly generative models such as GPT-4, contribute substantially to global greenhouse gas emissions and water consumption.
The talk assesses how existing and proposed regulations, including EU environmental laws and the GDPR, can be adapted to prioritize sustainability. It advocates a comprehensive approach to sustainable AI regulation that goes beyond the mere transparency mechanisms for disclosing AI systems' environmental footprint proposed in the EU AI Act. The regulatory toolkit must include co-regulation, sustainability-by-design principles, data usage restrictions, and consumption limits, potentially integrating AI into the EU Emissions Trading Scheme. This multidimensional strategy offers a blueprint that can be adapted to other high-emission technologies and infrastructures, such as blockchain, the metaverse, or data centers. Arguably, it is crucial for tackling the twin transformations of our society: digitization and climate change mitigation.

Institutions

  • UHH

Universität Hamburg
Adeline Scharfenberg