Saturday, 7 July 2019 (arrival day) to Saturday, 14 July 2019 (departure day)
The start of the 21st century has seen the pace of AI development accelerate, prompting many philosophical questions about our relationship with AI and our future. This week-long summer school will introduce students to current developments in artificial intelligence and then explore a number of epistemological, ethical, and political questions that these developments raise.
We will begin with a brief exploration of the nature of intelligence and the mind, focusing on different ways to develop intelligent machines. At the moment, the most promising path towards general artificial intelligence is through machine learning, so we will spend some time examining how and why machine learning has become the dominant field of AI. We will discuss not only the current limits of AI technology but also consider more theoretical issues in the development of a general learning algorithm (i.e. a single algorithm that is capable of domain-general learning).
For the majority of our time together, we will take a case-based approach to ethical and political issues surrounding the development and deployment of AI. These cases have been chosen to highlight the types of issues that the use of AI raises.
First, we shall consider a number of questions that are already being discussed. We have already seen that social problems can arise when we leave certain tasks to automated systems. For example, Facebook’s news recommendation system has created echo chambers that can be manipulated for political gain. What are the limits of purely technological solutions to these problems? And what other tools do we have at our disposal to minimize harm as the use of AI becomes more widespread? Similar questions arise from the role that bias plays in machine learning. Machine learning algorithms are regularly deployed after being trained on biased data. For example, Amazon recently developed an algorithm to identify promising employees from their résumés. However, because the algorithm was trained on résumés that came primarily from men, the resulting algorithm was biased against women. What technological tools do we have to de-bias these algorithms? And what other tools (e.g. legal) do we have to minimize the harm of algorithmic bias?
The next three cases focus on issues that will arise in the near future. The first case concerns the automation and outsourcing of moral decision-making. How can such decision-making be automated, and what are the potential benefits and risks associated with this research programme? The prospect of off-loading moral decision-making from humans onto AI systems raises potential risks. For example, it could reduce pluralistic human value systems to an artificial uniformity. However, there are potential benefits as well, since our capacity for moral reasoning is notoriously unreliable.
The second case concerns the role of AI in warfare. The development of autonomous weapon systems is being actively pursued by several states. This raises questions about the applicability of international humanitarian and human rights law, as well as the possibility of attributing moral and legal responsibility to such systems.
Our third case concerns automated vehicles and will raise questions about the relationship between the public and AI systems. One of the main bottlenecks in AI’s deployment is that these systems are often opaque not only to the general public but also to their users and engineers. As a result, it is unclear how to develop policy and regulatory laws that govern AI. For example, how can we trust an automated driving system when we cannot clearly specify the conditions under which it will behave in one way or another? How do we regulate the use of such systems? The problem is twofold: these systems are not only non-deterministic (i.e. probabilistic), but they can also employ models that are too complicated for a human mind to understand.
We shall conclude the week with a more imaginative exploration of AI, discussing the potential development of superintelligence, an event often referred to as “the singularity”. We shall consider various pathways to superintelligence and explore what such a being would look like and desire. The primary reason to consider this question is to explore whether the choices we make now could affect the consequences of the singularity. Could we ensure that a superintelligent being is benign and beneficial to human beings, or does this development always end in our doom, as science fiction has envisaged?
– Prof. Brian Kim, Department of Philosophy, State University of Oklahoma.
– Dr. Ariadna Pop, Diplomat, Swiss Federal Department of Foreign Affairs, Political Directorate, Human Security Division.
Additional lecturers: Dr. Teresa Scantamburlo, Department of Environmental Sciences, Informatics and Statistics (DAIS), University of Ca’ Foscari, Venice; Dr. Rune Nyrup, Leverhulme Centre for the Future of Intelligence, Cambridge University.
Participants: Interested students of all subjects, especially philosophy, mathematics, computer science, and political science.
Reader: Will be distributed (electronically).
Literature: Further references will follow.
Venue: Miglieglia (TI)
Coordination: Sarah Beyeler
Administration: Nathalie Ellington
General information: → PDF