Our society is growing older, and many people are becoming lonelier. This creates new challenges for politics and society, for elder care, and for family caregivers. Social relationships, cultural participation, personal autonomy, mobility, health, and good care are central to a fulfilling life in old age. Artificial intelligence can positively influence these factors and create new opportunities: with AI and robotics at their side, older people and care patients can regain more independence. So far, the use of robots in care in Germany is not yet mature, but there are promising approaches. At the same time, AI applications remain opaque to their users in how they work, can reinforce age stereotypes, and can contribute to age discrimination.
So what are the possible uses of AI in care and for older people? How does AI affect closeness and humanity? What is artificial empathy, and do we need it? And: how can older people be more closely involved in the research and development of AI systems?
We will discuss these and other questions in the fourth edition of our series „Was macht KI mit…?“ („What does AI do to…?“). Joining us are Claude Toussaint, founder and CEO of Navel Robotics; Prof. Dr. Barbara Klein, Dean of the Faculty of Social Work and Health and spokesperson of the FUTURE AGING research center at Frankfurt University of Applied Sciences; Ria Hinken, „front woman for smart aging“ and head of the project alterskompetenz.info; and Dr. Henner Gärtner, professor of industrial logistics at the Hamburg University of Applied Sciences and project lead of „Shared Guide Dog 4.0“.
After the talk, audience members can join the speakers at themed tables and try out AI applications, including the VR headset of the start-up vJourney, presented by co-founder Stefan Thomsen. Hanne Butting, CFO and CSO of Beyond Emotion, will present the AI-supported picture frame BJOY.
Moderation: Kathrin Drehkopf, NDR journalist and presenter at the media magazine ZAPP. Admission free. Registration and further information HERE.
The digitalization and automation of services has increased considerably in recent years, particularly through the use of artificial intelligence (AI), and is becoming ever more present in everyday life through applications such as ChatGPT, Bard, or Midjourney. Companies deploy AI technologies to improve and automate customer service, consulting, personalization, availability, and other aspects of services. At the same time, questions arise about new requirements for the design of services and about the impact of AI on the future of the German service sector.
At the (DF)² Annual Conference 2023, we will discuss how the integration of AI-based systems can transform and optimize services, for example by improving quality, accelerating processes, and enabling personalized experiences for users. We will also examine the challenges arising from the use of AI, such as questions concerning the design of AI-supported service work as well as issues of data security, privacy, and ethics in the use of algorithms in decision-making processes.
Click here to go directly to the registration page; participation slots are limited.
Professor Dr. Carsten Gerner-Beuerle, Professor of Commercial Law, Faculty of Laws, University College London
The Network for Artificial Intelligence and Law (NAIL) invites you to its next event. We are delighted to welcome Professor Dr. Carsten Gerner-Beuerle, Faculty of Laws, University College London. He will talk about the difficulties of assessing the precise risks to health, safety, fundamental rights, and the rule of law when regulating artificial intelligence. The lecture will be followed by a discussion of the topic. The event will be held in English.
After the lecture and discussion, we would like to invite you to end the evening with us in a relaxed atmosphere, with pretzels and wine in the south lounge.
You can participate in presence or online. Please register for the event using the following link: Registration
As the number of job applications increases, hiring managers have turned to artificial intelligence (AI) to help them make decisions faster, and perhaps better. Where once each manager did their own first rough cut of the files, third-party software algorithms now sort applications for many firms. Human mistakes are inevitable, but fortunately heterogeneous. Not so with machine decision-making. Relying on the same AI systems means that each firm makes the same mistakes and suffers from the same biases. When the same person encounters the same model again and again, or models trained on the same dataset, she might be wrongly rejected again and again. In this talk, I will argue that it is wrong to allow the quirks of an algorithmic system to consistently exclude a small number of people from consequential opportunities, and I will suggest solutions that can help ameliorate the harm to individuals.
Prof. Dr. Kathleen A. Creel (Northeastern University, Boston, MA, USA)
Kathleen Creel is an Assistant Professor at Northeastern University, cross appointed between the Department of Philosophy and Religion and Khoury College of Computer Sciences. Her research explores the moral, political, and epistemic implications of machine learning as it is used in non-state automated decision making and in science. She co-leads Northeastern’s AI and Data Ethics Training Program and is a winner of the International Association of Computing and Philosophy’s 2023 Herbert Simon Award.
The impact of artificial intelligence on society is so profound that it can be considered disruptive. AI has radical consequences not only for society, as expressed by the concept of ‘the Fourth Revolution’ and the ‘Society 5.0’ emerging from it, but also for ethics itself. Technologies have become ethically disruptive, in the sense that they challenge and affect the very concepts with which we can do ethics in the first place. What do ‘agency’, ‘responsibility’ and ‘empathy’ mean when artificial agents are entering society? What does ‘democratic representation’ mean when AI systems interfere with the very idea of representation itself? What can the notion of ‘the humane’ still mean when AI systems become an intrinsic part of human actions and decision-making? This talk will explore the phenomenon of ethical disruption in detail, by investigating the various ways in which technologies, and not only human beings, can be ethically significant. Breaking the human monopoly on ethics and expanding it towards technology will make it possible to connect ethics more directly to practices of design. The resulting ‘Guidance Ethics Approach’ enables bottom-up ethical reflection that can foster the responsible design, implementation and use of new and emerging technologies.
Prof. Dr. Peter-Paul Verbeek (Universiteit van Amsterdam, NL)
Peter-Paul Verbeek (1970) is Rector Magnificus and professor of Philosophy and Ethics of Science and Technology at the University of Amsterdam. His research and teaching focus on the relationship between humans and technology, viewed from an ethical perspective and in close relation to design. He is chair of the UNESCO World Commission for the Ethics of Science and Technology (COMEST), editor-in-chief of the Journal of Human-Technology Relations, and editor of the Lexington book series in Postphenomenology and the Philosophy of Technology. More information: www.ppverbeek.nl
This event will be part of TUHH’s flagship series "Lectures for Future" and marks the beginning of my appointment at TUHH as well as of our new Institute for Ethics in Technology. Prof. Dominic Wilkinson (University of Oxford), Prof. Alena Buyx (Technical University of Munich, Chair of the German Ethics Council) and Dr Andrew Graham (University of Oxford) will contribute to the event.
Programme:
Western societies are marked by diverse and extensive biases and inequality that are unavoidably embedded in the data used to train machine learning systems. Algorithms trained on biased data will, without intervention, produce biased outcomes and increase the inequality experienced by historically disadvantaged groups.
To tackle this issue, the European Commission recently published the Artificial Intelligence Act – the world’s first comprehensive framework to regulate AI. The new proposal contains several provisions that require bias testing and monitoring. But is Europe ready for this task?
In this session, I will examine several EU legal frameworks, including data protection and non-discrimination law, and demonstrate how, despite best attempts, they fail to protect us against the novel risks posed by AI. I will also explain how current technical fixes such as bias tests, which are often developed in the US, are not only insufficient to protect marginalised groups but also clash with the legal requirements in Europe.
I will then introduce some of the solutions I have developed to test for bias, explain black-box decisions, and protect privacy; these have been implemented by tech companies such as Google, Amazon, Vodafone, and IBM and have fed into public policy recommendations and legal frameworks around the world.
Prof. Dr. Sandra Wachter (Oxford Internet Institute, University of Oxford, GB)
Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute at the University of Oxford where she researches the legal and ethical implications of AI, Big Data, and robotics as well as Internet and platform regulation. Her current research focuses on profiling, inferential analytics, explainable AI, algorithmic bias, diversity, and fairness, as well as governmental surveillance, predictive policing, human rights online, and health tech and medical law.
At the OII, Professor Sandra Wachter leads and coordinates the Governance of Emerging Technologies (GET) Research Programme that investigates legal, ethical, and technical aspects of AI, machine learning, and other emerging technologies. [more]
With the launch of ChatGPT last year and the ensuing debate about the benefits and potential risks of generative AI, work on the European AI Act also shifted into higher gear. The European Council and Parliament, working on their respective compromise texts, had to find ways to accommodate this new phenomenon. The attempts to adapt the AI Act went hand in hand with a lively public debate on what was so new and different about generative AI, whether it raised new, not yet anticipated risks, and how best to address a technology whose societal implications are not yet well understood. Most importantly: was the AI Act outdated even before it was adopted? In my presentation I will discuss the different approaches that the Council and Parliament adopted to governing generative AI, the most salient points of discussion, and the different approaches proposed to solve some of the key ethical and societal concerns around the rise of generative AI.
Prof. Dr. Natali Helberger (Universiteit van Amsterdam, NL)
Natali Helberger is Distinguished University Professor of Law and Digital Technology, with a special focus on AI, at the University of Amsterdam and a member of the Institute for Information Law (IViR). Her research on AI and automated decision systems focuses on its impact on society and governance. Helberger co-founded the Research Priority Area Information, Communication, and the Data Society, which has played a leading role in shaping the international discussion on digital communication and platform governance. She is a founding member of the Human(e) AI research program and leads the Digital Transformation Initiative at the Faculty of Law. Since 2021, Helberger has also been director of the AI, Media & Democracy Lab, and since 2022, scientific director of the Algosoc (Public Values in the Algorithmic Society) Gravitation Consortium. A major focus of the Algosoc program is to mentor and train the next generation of interdisciplinary researchers. She is a member of several national and international research groups and committees, including the Council of Europe's Expert Group on AI and Freedom of Expression.
Taming the Machines — Horizons of Artificial Intelligence. The Ethics in Information Technology Public Lecture Series
This summer’s „Taming the Machines“ lecture series sheds light on the ethical, political, legal, and societal dimensions of Artificial Intelligence (AI).
Prof. Dr. Louise Amoore, Durham University, Durham, UK
Prof. Dr. Mathias Risse, John F. Kennedy School of Government, Harvard University, Cambridge, MA, USA
Prof. Dr. Andra Siibak, University of Tartu, Tartu, Estonia
Present-day children’s futures are decided by algorithms predicting their probability of success at school, their suitability for a job position, their likely recidivism or mental health problems. Advances in predictive analytics, artificial intelligence (AI) systems, and behavioral and biometric technologies have started to be used aggressively for monitoring, aggregating, and analyzing children’s data. Such dataveillance, happening in homes, schools, and peer networks alike, has a profound impact not only on children’s preferences, social relations, life chances, rights and privacy but also on the "future of human agency - and ultimately, of society and culture" (Mascheroni & Siibak 2021: 169).
Building upon the findings of my various empirical case studies, I will show how popular digital parenting practices and the growing datafication of the education sector could not only create hypothetical data scares but also leave real data scars in the lives of the young.
Vincent C. Müller is AvH Professor for Philosophy and Ethics of AI and Director of the Centre for Philosophy and AI Research (PAIR) at FAU Erlangen-Nuremberg.
It is now frequently observed that the discipline of AI ethics has neither a proper scope nor a proper method. This has become an obstacle to the discipline’s development towards maturity, e.g. towards canonical problems, positions, arguments … secure steps forward. We propose a minimal, yet universal view of the field (again Müller 2020). Given this proposal, we will know the scope and the method, and we can appreciate the wide set of contributions.
Prof. Dr. Aimee van Wynsberghe, Rheinische Friedrich-Wilhelms-Universität Bonn, Germany
Professor Ignacio Cofone, McGill University
The Hamburg Network for Artificial Intelligence and Law (NAIL) invites you to its next event. We are pleased to welcome Professor Ignacio Cofone from McGill University in Montreal, Canada. He will present his latest book, in which he demonstrates why our legal system is unable to adequately protect our privacy in the reality of new data-driven technologies such as AI. The lecture will be followed by a discussion of the topic. The event will be held in English.
The event will take place in person at the University of Hamburg, Faculty of Law, Rothenbaumchaussee 33, Room A125. No registration is required.
There is also the option of participating online. Please sign up by emailing nail@ile-hamburg.de.
This year, open-domain conversational AI systems have finally reached the general public: they are widely accessible, excel in the naturalness and fluency of their generated output, and provide helpful responses to many of the users’ requests. However, there are still many open challenges, in particular relating to ethical aspects of their design: for instance, conversational AI systems are prone to encode and amplify unfair stereotypes and are exclusionary towards many speakers identifying as members of underrepresented cultural and subcultural groups. In this talk, I will present some of these challenges and discuss potential solutions towards responsible AI-based future communication.
You can find all the details and (eventually) the material at the Indico agenda: https://indico.desy.de/event/38994/
Is AI the magic cure for the teacher shortage, with algorithms and robots taking over classes in the future? Or will the app write the school essay, do the homework, and make learning assessment impossible? How exactly does AI work, and where can it also contribute to greater educational equity? And what do digital tools mean for schools and teaching, for students, but also for teachers and parents?
We will discuss these questions, and the potential and risks surrounding chatbots and digital tools in education, with AI pioneers, leading experts, students, and the audience.
Taking part in the live panel are: Lina Diedrichsen, Abitur graduate and chair of the Schüler:innenkammer Hamburg; Martina Mörth, psychologist and head of the Berliner Zentrum für Hochschullehre; Marie Kilg, journalist and author of the AI newsletter “Natürlich intelligent”; and Britta Kölling, Hamburg’s digital school advisor and head of the AI competence center at the Landesinstitut für Lehrerbildung und Schulentwicklung. Alongside people who professionally and actively shape AI and its use in education, the discussion will also include the voices of students, parents, and the interested public. Enthusiasm and concern from laypeople and professionals alike: everything has a place here.
Moderation: Nina Heinrich, journalist, editor, and podcaster
Admission free. Registration and further information HERE. The livestream of the panel discussion will be available on YouTube from 7 p.m. HERE
Please note: unfortunately, access to the building is not barrier-free.
Universität Hamburg
Adeline Scharfenberg