With the browser-based AI tool RunwayML, video clips can be generated from simple text or image inputs and post-processed with Magic Tools.
In 30 minutes, you will learn an exemplary workflow for generating video clips from single images or text prompts. The individual steps in RunwayML will be demonstrated along the way.
This online training is aimed at beginners; no prior knowledge is required.
We will use ZOOM as the virtual learning venue. The ZOOM link will be sent by 1:00 p.m. on the day before the training begins.
Artificial intelligence (AI) has become an integral part of everyday university life. Its integration, however, continues to pose major challenges for everyone involved. University teaching and teachers in particular face a wide range of demands for change. This concerns the development of curricula as well as of didactic concepts for teaching and assessment aligned with them. It also calls into question the established role of teachers and touches on how they perceive themselves in a world of ever more numerous artificial avatars. The NeL-AI-Week focuses not only on questions of degree-program organization and teaching, but above all on the opportunities and limits that the use of AI can create for students.
With this action week, the Netzwerk Landeseinrichtungen für digitale Hochschullehre (NeL) aims above all to take a practice-oriented look at these questions and, through a wide range of practical contributions from teachers and students, to encourage an exchange of experiences in this field.
The opening day, Monday, 10 March, offers thematic introductions and explores potential for cooperation. Tuesday and Thursday of the NeL-AI-Week then address the concrete practical perspective. The practical contributions on those days are designed as 15-minute impulse talks. Following each 45-minute session of three impulse talks, parallel breakout sessions offer the opportunity to raise in-depth questions about these contributions and discuss them with the presenters.
On Wednesday, 12 March, and Friday, 14 March, "specials" will be offered, including sessions on the legal implications of the AI Act and on professional development for teachers. We look forward to lively, active participation and wish you a stimulating exchange of experiences during the NeL-AI-Week! Your registration accredits you for the entire NeL-AI-Week; of course, it is also possible to attend only parts of the event.
The AI Act (KI-VO) is a prestige project of the EU. This large-scale regulatory initiative can be expected to significantly influence the spread and use of artificial intelligence (AI) in the Union and beyond its borders.
In this workshop, we will work through the contents of the AI Act, its connections to copyright and data-protection law and to open data, and the aspects particularly relevant to (public) universities.
Participants are invited to prepare their own questions or cases in advance and, if interested, to present them in the workshop.
Key topics include:
We will use ZOOM as the virtual learning venue. The ZOOM link will be sent by 1:00 p.m. on the day before the training begins.
DaVinci Resolve offers various AI tools intended to make post-production easier.
In 30 minutes, you will get an overview of the current state of the features that already exist and of those that may soon be added. We will look at how the tools work and how they can be integrated into your own workflow.
Basic knowledge of Blackmagic's DaVinci Resolve is not required, but helpful.
We will use ZOOM as the virtual learning venue. The ZOOM link will be sent by 1:00 p.m. on the day before the training begins.
With the launch of ChatGPT last year and the ensuing debate about the benefits and potential risks of generative AI, work on the European AI Act also shifted into a higher gear. The European Council and Parliament, working on their respective compromise texts, had to find ways to accommodate this new phenomenon. The attempts to adapt the AI Act went hand in hand with a lively public debate on what was so new and different about generative AI, whether it raised new, not yet anticipated risks, and how best to address a technology whose societal implications are not yet well understood. Most importantly: was the AI Act outdated even before it was adopted? In my presentation, I will discuss the different approaches that the Council and Parliament adopted to governing generative AI, the most salient points of discussion, and the different approaches proposed to resolve some of the key ethical and societal concerns around the rise of generative AI.
Prof. Dr. Natali Helberger (Universiteit van Amsterdam, NL)
Natali Helberger is Distinguished University Professor of Law and Digital Technology, with a special focus on AI, at the University of Amsterdam and a member of the Institute for Information Law (IViR). Her research on AI and automated decision systems focuses on its impact on society and governance. Helberger co-founded the Research Priority Area Information, Communication, and the Data Society, which has played a leading role in shaping the international discussion on digital communication and platform governance. She is a founding member of the Human(e) AI research program and leads the Digital Transformation Initiative at the Faculty of Law. Since 2021, Helberger has also been director of the AI, Media & Democracy Lab, and since 2022, scientific director of the Algosoc (Public Values in the Algorithmic Society) Gravitation Consortium. A major focus of the Algosoc program is to mentor and train the next generation of interdisciplinary researchers. She is a member of several national and international research groups and committees, including the Council of Europe's Expert Group on AI and Freedom of Expression.
Prof. Dr. Ibo van de Poel, Delft University of Technology, NL
Value alignment is important to ensure that AI systems remain aligned with human intentions, preferences, and values. It has been suggested that it can best be achieved by building AI systems that can track preferences or values in real-time. In my talk, I argue against this idea of real-time value alignment. First, I show that the value alignment problem is not unique to AI, but applies to any technology, thus opening up alternative strategies for attaining value alignment. Next, I argue that due to uncertainty about appropriate alignment goals, real-time value alignment may lead to harmful optimization and therefore will likely do more harm than good. Instead, it is better to base value alignment on a fallibilist epistemology, which assumes that complete certainty about the proper target of value alignment is and will remain impossible. Three alternative principles for AI value alignment are proposed: 1) adopt a fallibilist epistemology regarding the target of value alignment; 2) focus on preventing serious misalignments rather than aiming for perfect alignment; 3) retain AI systems under human control even if it comes at the cost of full value alignment.
Prof. Dr. Kate Vredenburgh, London School of Economics, GB
Taming the Machines — Horizons of Artificial Intelligence. The Ethics in Information Technology Public Lecture Series
This summer's "Taming the Machines" lecture series sheds light on the ethical, political, legal, and societal dimensions of Artificial Intelligence (AI).
Prof. Dr. Louise Amoore, Durham University, Durham, UK
Prof. Dr. Philipp Hacker, European University Viadrina, Frankfurt (Oder), DE
Current AI regulation in the EU and globally focuses on trustworthiness and accountability, as seen in the AI Act and AI liability instruments. Yet it overlooks a critical aspect: environmental sustainability. This talk addresses this gap by examining the ICT sector's significant environmental impact. AI technologies, particularly generative models like GPT-4, contribute substantially to global greenhouse gas emissions and water consumption.
The talk assesses how existing and proposed regulations, including EU environmental laws and the GDPR, can be adapted to prioritize sustainability. It advocates for a comprehensive approach to sustainable AI regulation, beyond the mere transparency mechanisms for disclosing AI systems' environmental footprint proposed in the EU AI Act. The regulatory toolkit must include co-regulation, sustainability-by-design principles, data usage restrictions, and consumption limits, potentially integrating AI into the EU Emissions Trading Scheme. This multidimensional strategy offers a blueprint that can be adapted to other high-emission technologies and infrastructures, such as blockchain, the metaverse, or data centers. Arguably, it is crucial for tackling the twin key transformations of our society: digitization and climate change mitigation.
Prof. Dr. Mathias Risse, John F. Kennedy School of Government, Harvard University, Cambridge, MA, USA
Prof. Dr. Andra Siibak, University of Tartu, Tartu, Estonia
Present-day children's futures are decided by algorithms predicting their probability of success at school, their suitability for a job, and their likelihood of recidivism or mental health problems. Advances in predictive analytics, artificial intelligence (AI) systems, and behavioral and biometric technologies are being used aggressively to monitor, aggregate, and analyze children's data. Such dataveillance, happening in homes, schools, and peer networks alike, has a profound impact not only on children's preferences, social relations, life chances, rights, and privacy, but also on the "future of human agency - and ultimately, of society and culture" (Mascheroni & Siibak 2021: 169).
Building on the findings of my various empirical case studies, I will show how popular digital parenting practices and the growing datafication of the education sector can create not only hypothetical data scares but also real data scars in the lives of the young.
Vincent C. Müller is AvH Professor for Philosophy and Ethics of AI and Director of the Centre for Philosophy and AI Research (PAIR) at FAU Erlangen-Nuremberg.
It is now frequently observed that the discipline of AI ethics has neither a proper scope nor a proper method. This has become an obstacle to the discipline's development toward maturity, e.g. canonical problems, positions, arguments … secure steps forward. We propose a minimal yet universal view of the field (again Müller 2020). Given this proposal, we will know the scope and the method, and we can appreciate the wide set of contributions.
Prof. Dr. Aimee van Wynsberghe, Rheinische Friedrich-Wilhelms-Universität Bonn, DE
Speaker: Prof. Dr. Elena Esposito, Universität Bielefeld, DE
Universität Hamburg
Adeline Scharfenberg