Generative AI (genAI) is becoming more integrated into our daily lives, raising questions about potential threats within genAI systems and the misuse of their output. In this talk, we will take a closer look at the resulting challenges and security threats associated with generative AI. These fall into two categories: malicious inputs injected into generative models, and computer-generated output that is indistinguishable from human-generated content.
In the first case, specially crafted inputs are used to exploit models such as LLMs, for instance to disrupt alignment or to steal sensitive information. Existing attacks show that the content filters of LLMs can easily be bypassed with specific inputs and that private information can be leaked. We demonstrate that prompt injection attacks are difficult to prevent even with full white-box access, and that such access offers only limited protection. This talk will therefore cover an alternative approach for protecting intellectual property by obfuscating sensitive inputs.
In the second threat scenario, generative models are used to produce fake content that is impossible to distinguish from human-generated content. This fake content is often used for fraudulent and manipulative purposes; impersonation and realistic fake news are already possible using a variety of techniques. As these models continue to evolve, detecting such fraudulent activities will become increasingly difficult, while the attacks themselves will become easier to automate and require less expertise. This talk will provide an overview of the current challenges in detecting fake media in human and machine interactions.
The final part of the presentation will deal with the use of generative models in security applications. This includes benchmarking and fixing vulnerable code, as well as understanding the capabilities of these models by investigating their code deobfuscation abilities.
Bio
Lea Schönherr has been a tenure-track faculty member at the CISPA Helmholtz Center for Information Security since 2022. She obtained her PhD from Ruhr-Universität Bochum, Germany, in 2021 and is a recipient of two fellowships, from UbiCrypt (DFG Graduate School) and CASA (DFG Cluster of Excellence). Her research interests lie in information security, with a focus on adversarial machine learning and generative models to defend against real-world threats. She is particularly interested in language as an interface to machine learning models and in combining different domains such as audio, text, and images. She has published several papers on threat detection and defense of speech recognition systems and generative models.
Abstract
Consensus and its variants, including set agreement and approximate agreement, play a central role in our understanding of asynchronous shared memory distributed computing. I will discuss some classical and recent results about these problems, including algorithms, hierarchies, impossibility results, and space complexity lower bounds.
Bio
Faith Ellen is a Professor of Computer Science at the University of Toronto and is currently serving as the Associate Chair, Graduate Students, in the Department of Computer Science. She received her Ph.D. from the University of California, Berkeley, in 1982. Her research interests span the theory of distributed computing, complexity theory, and data structures. From 1997 to 2001, she was vice chair of SIGACT, the leading international society for the theory of computation, and from 2006 to 2009 she was chair of the steering committee of PODC, the top international conference on the theory of distributed computing. In 2014, she co-authored the book "Impossibility Results for Distributed Computing". Faith is a Fellow of the ACM.
Institution
Dr. Wouter Lueks, CISPA
Digital technology creates risks to people's privacy in ways that did not exist before. I design end-to-end private systems to mitigate these real-world privacy risks. In this talk I will discuss my designs for two applications. These applications highlight key aspects of my work: I analyse security, privacy, and deployment requirements; and address these requirements by designing new cryptographic primitives and system architectures.
In the first part of this talk, I will present DatashareNetwork, a document search system for investigative journalists that enables them to locate relevant documents for their investigations. DatashareNetwork combines a novel multi-set private set intersection primitive with anonymous communication and authentication systems to create a decentralised, privacy-friendly document search system. In the second part of this talk, I will give an overview of my recent work on designing privacy-friendly systems for humanitarian aid distribution. In collaboration with the International Committee of the Red Cross (ICRC), we designed systems for the registration and distribution of humanitarian aid that meet the ICRC's requirements while providing strong privacy protection.
Bio:
Wouter Lueks is a tenure-track faculty member at the CISPA Helmholtz Center for Information Security in Saarbrücken, Germany. Before that, he was a postdoctoral researcher at EPFL in Lausanne, Switzerland, where he worked with Prof. Carmela Troncoso. He is interested in solving real-world problems by designing end-to-end privacy-friendly systems. To do so, he combines privacy, applied cryptography, and systems research. His work has real-world impact: for instance, his designs for privacy-friendly contact tracing have been deployed on millions of phones around the world, and his secure document search system is being deployed by a large organization for investigative journalists.
Starting from a formalization of the main research goals of artificial intelligence (AI) using information and probability theory, the talk will show how our research on (dynamic and causal) probabilistic relational models, combined with pretrained embedding-based models, can contribute to the design of agents that (i) can handle non-trivial task descriptions and (ii) can appropriately interact with humans (and other agents) in so-called social mechanisms. With social mechanisms as a central topic of humanities-centered AI, the talk also tries to shed light on where and how AI regulation can be sensibly applied in the future.
Bio
Ralf Möller is Full Professor of Artificial Intelligence in Humanities and heads the Institute of Humanities-Centered AI (CHAI) at the Universität Hamburg. His main research area is artificial intelligence, in particular probabilistic relational modeling techniques and natural language technologies for information systems, as well as machine learning and data mining for the decision making of agents in social mechanisms. Ralf Möller is co-speaker of the Section for Artificial Intelligence of the German Informatics Society. He is also an affiliated professor at DFKI zu Lübeck, a branch of the Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) with several sites in Germany. DFKI is responsible for the technology transfer of AI research results into industry and society.
Before joining the Universität Hamburg in 2024, Ralf Möller was Full Professor of Computer Science and headed the Institute of Information Systems at the Universität zu Lübeck. In Lübeck, he also headed the research department Stochastic Relational AI in Healthcare at DFKI. Earlier in his career, Ralf Möller was Associate Professor of Computer Science at Hamburg University of Technology from 2003 to 2014. From 2001 to 2003, he was Professor at the University of Applied Sciences in Wedel, Germany. In 1996, he received the degree Dr. rer. nat. from the University of Hamburg, where he also completed his Habilitation in 2001.
Professor Möller has co-organized several national and international workshops on humanities-centered AI as well as on description logics. He was also a co-organizer of the European Lisp Symposium 2011. In 2019, he co-chaired the organization of the International Conference on Big Knowledge (ICBK 2019) in Beijing, and he co-organized the conference "Artificial Intelligence" (KI 2021) in Berlin with his colleagues Stefan Edelkamp and Elmar Rueckert. Prof. Möller has served as an Associate Editor of the Journal of Knowledge and Information Systems, as a member of the Editorial Board of the Journal on Big Data Research, and as a reviewer for Mathematical Reviews/MathSciNet.
Institution
Xiao Zhang, PhD, CISPA
Machine learning (ML) models trained on code are increasingly integrated into various software engineering tasks. While they generally demonstrate promising performance, many aspects of their capabilities remain unclear. Specifically, there is a lack of understanding regarding what these models learn, why they learn it, how they operate, and when they produce erroneous outputs.
In this talk, I will present findings from a series of studies that (i) examine the abilities of these models to complement human developers, (ii) explore the syntax and representation learning capabilities of ML models designed for software maintenance tasks, and (iii) investigate the patterns of bugs these models exhibit. Additionally, I will discuss a novel self-refinement approach aimed at enhancing the reliability of code generated by Large Language Models (LLMs). This method focuses on reducing the occurrence of bugs before execution, autonomously and without the need for human intervention or predefined test cases.
Bio:
Foutse Khomh is a Full Professor of Software Engineering at Polytechnique Montréal, a Canada Research Chair Tier 1 on Trustworthy Intelligent Software Systems, a Canada CIFAR AI Chair on Trustworthy Machine Learning Software Systems, an NSERC Arthur B. McDonald Fellow, an Honoris Genius Prize Laureate, and an FRQ-IVADO Research Chair on Software Quality Assurance for Machine Learning Applications. He received a Ph.D. in Software Engineering from the University of Montreal in 2011, with the Award of Excellence. He also received a CS-Can/Info-Can Outstanding Young Computer Science Researcher Prize for 2019. His research interests include software maintenance and evolution, machine learning systems engineering, cloud engineering, and dependable and trustworthy ML/AI. His work has received four ten-year Most Influential Paper (MIP) Awards, six Best/Distinguished Paper Awards at major conferences, and two Best Journal Paper of the Year Awards. He initiated and co-organized the Software Engineering for Machine Learning Applications (SEMLA) symposium and the RELENG (Release Engineering) workshop series. He is co-founder of the NSERC CREATE SE4AI: A Training Program on the Development, Deployment, and Servicing of Artificial Intelligence-based Software Systems and one of the Principal Investigators of the DEpendable Explainable Learning (DEEL) project. He is also a co-founder of Quebec's initiative on Trustworthy AI (Confiance IA Quebec) and Scientific co-director of the Institut de Valorisation des Données (IVADO). He is on the editorial board of multiple international software engineering journals (e.g., IEEE Software, EMSE, SQJ, JSEP) and is a Senior Member of IEEE.
Institution
Prof. Dr. Ibo van de Poel, Delft University of Technology, NL
Value alignment is important to ensure that AI systems remain aligned with human intentions, preferences, and values. It has been suggested that it can best be achieved by building AI systems that can track preferences or values in real time. In my talk, I argue against this idea of real-time value alignment. First, I show that the value alignment problem is not unique to AI but applies to any technology, which opens up alternative strategies for attaining value alignment. Next, I argue that, due to uncertainty about appropriate alignment goals, real-time value alignment may lead to harmful optimization and will therefore likely do more harm than good. Instead, it is better to base value alignment on a fallibilist epistemology, which assumes that complete certainty about the proper target of value alignment is and will remain impossible. Three alternative principles for AI value alignment are proposed: 1) adopt a fallibilist epistemology regarding the target of value alignment; 2) focus on preventing serious misalignments rather than aiming for perfect alignment; 3) keep AI systems under human control, even if this comes at the cost of full value alignment.
Institutions
Prof. Dr. Kate Vredenburgh, London School of Economics, GB
Taming the Machines — Horizons of Artificial Intelligence. The Ethics in Information Technology Public Lecture Series
This summer's "Taming the Machines" lecture series sheds light on the ethical, political, legal, and societal dimensions of Artificial Intelligence (AI).
Institutions
Prof. Dr. Philipp Hacker, European University Viadrina, Frankfurt (Oder), DE
Current AI regulation in the EU and globally focuses on trustworthiness and accountability, as seen in the AI Act and the AI Liability instruments. Yet it overlooks a critical aspect: environmental sustainability. This talk addresses this gap by examining the ICT sector's significant environmental impact. AI technologies, particularly generative models like GPT-4, contribute substantially to global greenhouse gas emissions and water consumption.
The talk assesses how existing and proposed regulations, including EU environmental laws and the GDPR, can be adapted to prioritize sustainability. It advocates for a comprehensive approach to sustainable AI regulation that goes beyond the mere transparency mechanisms for disclosing AI systems' environmental footprint proposed in the EU AI Act. The regulatory toolkit must include co-regulation, sustainability-by-design principles, data usage restrictions, and consumption limits, potentially integrating AI into the EU Emissions Trading Scheme. This multidimensional strategy offers a blueprint that can be adapted to other high-emission technologies and infrastructures, such as blockchain, the metaverse, or data centers. Arguably, it is crucial for tackling the two key transformations of our society: digitization and climate change mitigation.
Institutions
Speaker: Prof. Dr. Elena Esposito, Universität Bielefeld, DE
Institutions
Universität Hamburg
Adeline Scharfenberg