Commit 4aea7fd3 authored by Tina Jessica Ladwig

Merge branch 'ueberarbeitung_fußnote_chapter_5' into 'master'

Revision of footnotes, chapter 5

See merge request hoou/tekethics-booklet!4
parents 49c73aa6 82760ad6
# From Military Robots to Self-driving Pizza Delivery
This chapter will give you a short introduction to roboethics and address some ethical issues of recent robot technology. The intention here is not to give definitive answers but to present these issues for your own reflection.[^1]
Now, without further ado, here is what we will do in this chapter:
(1) In the first section, to get things going, we will have a brief cultural-historical look at our obsession with artificial creatures.
### (1) A little history to begin with
Humans have been obsessed with artificial creatures for a long time now. Just consider Talos, from Greek mythology, a giant bronze automaton that is supposedly good at crushing your enemies (as early as 400 BC). Then, of course, there is the Golem of the Jewish tradition, a creature made out of non-organic material, such as clay, that comes to life through magic. Another example that is closer to robots comes to us from Leonardo da Vinci, who devised a mechanical humanoid knight (around 1490). Our obsession with artificial creatures generally, and mechanical automatons in particular, is nowhere more evident than in movies and literature. To name just two historic examples here: There is E.T.A. Hoffmann’s famous story Der Sandmann (1816) that features an artificial woman named Olimpia, and there is the classic movie Metropolis (1927) by Fritz Lang, where the artificial creature Maria stirs unrest. Of course, we could continue this list of examples until we arrive at the latest instalments in pop culture, ranging from cute little robots like Wall-E (2008) to cunning murder machines like in the movie Ex Machina (2014). So, taking into account our obsession with artificial creatures, it may not come as a surprise that we are at a stage of technical development where vacuum robots like Roomba clean our apartments, self-driving cars are likely to hit the streets in the near future, and care robots are deployed in hospitals and retirement homes.[^2]
### (2) Roboethics
Accordingly, roboethics can be split up into assistive roboethics, military roboethics, and so forth.
Now, what is roboethics? To answer this, we will first take a look at ethics generally and then shift to roboethics. Although in ordinary contexts ethics and morality are used interchangeably, it is customary, at least in philosophy, to distinguish the two. Morality refers to the collection of norms and values that people hold, whereas ethics is the investigation of and reflection on morality. Simply put, ethics reflects on right and wrong conduct, so please keep in mind that ethics is also concerned with the justification of our conduct. That means giving reasons for or against something. So, for example, when we say ‘Hitting a child is wrong’, we pass a normative judgment. However, judging that something is good or bad, right or wrong, is not enough. We also have to provide reasons (that is, a justification) for why we think that something is the right or the wrong conduct.[^3]
Traditionally, ethics is concerned with proper conduct towards other human beings and towards non-human living beings. However, ethics nowadays also includes reflecting on right and wrong actions regarding the environment, and recently it has come to include reflection on how we should treat our robots. We will come back to our behavior towards our own creations in the last section. For now, let us turn to roboethics.
Now, despite their ability to shoot people and their occasionally intimidating looks, using military robots could have some beneficial consequences that may be taken as reasons to ethically justify their deployment. For example, military robots may reduce casualties because you need fewer humans to fight your war. Of course, this advantage only applies to the side that has military robots. Further, robots are not subject to psychological stress like human beings. Given that a lot of soldiers suffer from PTSD (posttraumatic stress disorder) after returning from the battlefield, it seems to be a good idea to reduce this kind of suffering by using robots instead of humans. Another advantage is that robots do not give in to emotion and rage and, unlike human soldiers, obey the commands given to them without hesitation.
Despite these (potential) advantages, there are some crucial ethical concerns that need to be addressed: One of the pressing issues is whether military robots should be given the authority to fire at humans without a human in the loop. This is particularly important because we need to make sure that robots are sufficiently able to distinguish between combatants and civilians. Further, the availability of military robots may lower the threshold for armed conflicts. After all, if you have a bunch of robots that can fight for you without human losses on your side (!), then the motivation to start an armed conflict may be higher. A related issue is that the potential ease of using robots may foster an attitude that takes military robots to be a ‘technical fix’ for problems, so that other, more peaceful, solutions drop out of sight. Also, there is the question of how responsibility is to be distributed, especially when a military robot harms people that it was not supposed to harm. How do we determine who is responsible for the behavior of military robots, particularly when they are autonomous? This issue is very complex because we have to take into account the multitude of players involved: the creators of the robot (including IT companies that provide the software, and other research institutions) and the military (for example, the people in the chain of command, like commanders and soldiers). Or maybe we can attribute responsibility to the robot itself? It is not surprising that philosophers have a lot to say about this issue. Some authors have argued that it is impossible to attribute responsibility to any of the players when it comes to military robots (Sparrow 2007), whereas others have suggested ways of attributing responsibility (e.g., Schulzke 2013).[^4]
## Companion robots
## Ethical treatment of robots?
Remember, at the beginning of this chapter we said that ethics not only deals with justifiable conduct towards other people and non-human animals but nowadays is also concerned with the right conduct towards artificial products. Consider this example: In October 2017, Saudi Arabia granted citizenship to the sophisticated humanoid robot called Sophia. [Sophia is the first robot in the world to receive citizenship](http://www.hansonrobotics.com/robot/sophia/). This incident suggests that we may want to start thinking about how we treat robots and what part they will play in our social world. Should we regard them as persons and grant them rights? After all, we regard companies as persons and grant them certain rights. Further, is it possible to treat robots in an unethical way (e.g., by harming them)? We will likely be confronted with these and similar questions in the future. Even more so because robots will likely reach a level of sophistication that will prompt us to rethink what distinguishes us from them. So, we had better get a head start in thinking about these issues instead of trying to catch up with the technical development.
[^1]: There are a lot of excellent introductions to the field of roboethics. The book 'Robot ethics: the ethical and social implications of robotics' (2012, Eds. Lin, Abney & Bekey) covers the central ethical issues of robotics. Recently, Spyros Tzafestas (2016) has published an introduction to robot ethics that covers a lot of relevant issues and is accessible to the general public.
[^2]: If you want to delve deeper into the history of automatons (what we today call robots), Kang in his book ‘Sublime Dreams of Living Machines: The Automaton in the European Imagination’ (2011) provides an intellectual history of mechanical beings.
The Czech author Karel Čapek was the first to introduce the term 'robot', in his play Rossum’s Universal Robots (1920). Interestingly, in this play the robots try to overpower their human masters. This is another example, like Maria in Fritz Lang's movie or the humanoid robot in Ex Machina, of both our obsession with and our fear of our own creations.
[^3]: If you want to know more about the distinction between morality and ethics, the BBC has a homepage devoted to the question ['What is ethics?'](http://www.bbc.co.uk/ethics/introduction/). If you want more in-depth material on ethics and ethical theories please visit the [Internet Encyclopedia of Philosophy on ethics](https://www.iep.utm.edu/ethics/).
[^4]: Because of the [risks and moral dilemmas involved in military robots](http://hir.harvard.edu/article/?a=14494), some people, including Stephen Hawking and Elon Musk, have called for a [ban of 'killer robots'](https://www.technologyreview.com/s/539876/military-robots-armed-but-how-dangerous/).
## Further readings