# From military robots to self-driving pizza delivery services
This chapter gives a brief introduction to roboethics and addresses some ethical aspects of the latest robot technology. The aim here is not to give definitive answers; rather, these ethical issues are introduced[^1].
Now, without further ado, here is what we will do in this chapter: (1) In the first section, we will take a brief look at the cultural history of our obsession with artificial creatures. (2) Then, we will turn to roboethics and what it is concerned about. (3) Next, we will address some ethical issues regarding current robotic technology: particularly military robots, companion robots, care robots, and self-driving cars.
## A little history to begin with
Humans have been obsessed with artificial creatures for a long time now. Just consider Talos from Greek mythology, a giant bronze automaton that is supposedly good at crushing your enemies (stories of which date back to around 400 BC). Then, of course, there is the Golem of the Jewish tradition, a creature made out of non-organic material, such as clay, that comes to life through magic. Another example that is closer to robots comes to us from Leonardo da Vinci, who devised a mechanical humanoid knight (around 1490). Our obsession with artificial creatures in general, and mechanical automatons in particular, is nowhere more evident than in movies and literature. To name just two historic examples: There is E.T.A. Hoffmann’s famous story Der Sandmann (1816), which features an artificial woman named Olimpia, and there is the classic movie Metropolis (1927) by Fritz Lang, in which the artificial creature Maria stirs unrest. Of course, we could continue this list of examples until we arrive at the latest instalments in pop culture, ranging from cute little robots like Wall-E (2008) to cunning murder machines like the one in the movie Ex Machina (2014). So, taking into account our obsession with artificial creatures, it may not come as a surprise that we are at a stage of technical development where vacuum robots like Roomba clean our apartments, self-driving cars are likely to hit the streets in the near future, and care robots are deployed in hospitals and retirement homes[^2].
## Roboethics
Before we come to roboethics, a quick word on the classification of robot technology. As one may expect, there are many ways to classify robot technology; one example is the classification given by Kopacek (2013).
Let us briefly summarize what we have addressed so far. First, we looked at our obsession with artificial creatures and robots. Then, we introduced a simple way of classifying robots. Most importantly, we addressed ethics and roboethics. In the next section, we will look at some ethical issues that arise in connection with particular robot technologies. Specifically, we concentrate on four types of robot technology: military robots, companion robots, assistive robots and, last but not least, autonomous vehicles.
## Robots and ethics
The most natural question on many people’s minds when it comes to robots is: How do we get robots to behave in a way that we deem appropriate? In his novels, the author Isaac Asimov presents an answer to this question. He puts forth the idea that robots may be programmed to behave according to moral rules or laws. So, for example, the robot could be programmed to do x, but not do y. The rules that he introduced have come to be known as ‘Asimov’s laws of robotics’ and they are as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Now, at first glance, the idea to program robots to behave according to a set of rules seems like a very reasonable thing to do. However, there are some well-known problems with this approach (for more on the shortcomings of Asimov’s laws and an alternative, see Murphy and Woods 2009). Asimov was well aware of these problems and used them as a device to propel the narrative of his science-fiction stories. One problem concerns the vagueness of the terms used in the laws. For example, it is not clear what the term ‘human’ means in the first law, or what ‘robot’ and ‘doing harm’ mean precisely. Further, there is the issue of a bloat of rules: the world is a messy place, so we would need a lot of rules, and rules for the exceptions to the rules, in order to address all the circumstances that a robot may find itself in. This, however, seems to be an impossible task. The most obvious problem, though, is that there are a lot of situations where one rule will conflict with another. Consider the well-known trolley scenario, where an out-of-control trolley runs along a track on which there are five people. The trolley can be diverted to another track; unfortunately, there is a person on this other track. So, a decision needs to be made between diverting the trolley to the track where it will run over, and presumably kill, the one person, or letting the trolley stay on track and run over the group of five. How should a robot in this situation behave, given that it is supposed to save human lives? (For that matter, how are humans supposed to act in such a situation?) The sketch below makes this rule-conflict problem concrete. Last but not least, another problem is that Asimov’s rules may not be feasible in some contexts: there may be contexts in which we expect the robot to harm a human being, for example. This brings us to the first robot technology that we will take a closer look at: military robots.
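Before we do, here is a minimal sketch of what rule-based control in the spirit of the First Law might look like, and of how the trolley scenario leaves such a rule set without an answer. The `Action` type and the harm counts are invented for illustration; this is not code from any actual robot.

```python
# A minimal, hypothetical sketch of rule-based robot control.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    humans_harmed: int  # how many humans this action would harm

def first_law_permits(action: Action) -> bool:
    """First Law: a robot may not injure a human being."""
    return action.humans_harmed == 0

def choose_action(actions: list[Action]) -> Action:
    permitted = [a for a in actions if first_law_permits(a)]
    if not permitted:
        # Rule conflict: every available action harms someone,
        # so the rule set gives no answer -- the trolley problem.
        raise ValueError("no action satisfies the First Law")
    return permitted[0]

# Trolley scenario: both options harm humans; the rule is silent.
divert = Action("divert to side track", humans_harmed=1)
stay = Action("stay on main track", humans_harmed=5)
choose_action([divert, stay])  # raises ValueError
```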
### Military robots
Not surprisingly, the military is at the forefront when it comes to robot technology. Military robots are here and they are here to stay. For example, in 2005 the New York Times reported plans of the Pentagon to replace soldiers with robots, and [only 5 countries backed a UN resolution to ban killer robots](https://www.theverge.com/2014/5/16/5724538/what-happened-at-the-un-killer-robot-debate). It is worth pointing out here that fully autonomous weapons already exist (recall that autonomous here means that the robot goes about its task without human intervention). [South Korea has an automatic machine gun that can identify and shoot targets without human commands](http://www.dailymail.co.uk/sciencetech/article-2756847/Who-goes-Samsung-reveals-robot-sentry-set-eye-North-Korea.html). Another example comes from Russia, [where the military uses autonomous tanks to patrol sensitive areas](https://www.newscientist.com/article/mg22229664-400-armed-russian-robocops-to-defend-missile-bases/).
Despite these (potential) advantages, there are some crucial ethical concerns that need to be addressed. One of the pressing issues is whether military robots should be given the authority to fire at humans without a human in the loop. This is particularly important because we need to make sure that robots are sufficiently able to distinguish between combatants and civilians. Further, the availability of military robots may lower the threshold for armed conflict. After all, if you have a bunch of robots that can fight for you without human losses on your side (!), then the motivation to start an armed conflict may be higher. A related issue is that the potential ease of using robots may foster an attitude that takes military robots to be a ‘technical fix’ to problems, so that other, more peaceful, solutions drop out of sight. Also, there is the question of how responsibility is to be distributed, especially when a military robot harms people that it was not supposed to harm. How do we determine who is responsible for the behavior of military robots, and how do we distribute that responsibility, particularly when the robots are autonomous? This issue is very complex because we have to take into account the multitude of players that are involved: the creators of the robot (including IT companies that provide the software, and other research institutions) and the military (for example, the people in the chain of command, like commanders and soldiers). Or maybe we can attribute responsibility to the robot itself? Now, it is not surprising that philosophers have a lot to say about this issue. Some authors have argued that it is impossible to attribute responsibility to any of the players when it comes to military robots (Sparrow 2007), whereas other authors have suggested ways of attributing responsibility (e.g., Schulzke 2013)[^4].
### Companion robots
After the rather bleak topic of killer machines, let us now turn to more uplifting machines: companion robots. Usually, these robots are set up to allow some kind of interaction, such as speech or gestures. In short, companion robots are robots that, as one would expect from the name, keep people company at home, at work, in hospitals and retirement homes. The classic example here is Paro, the fluffy robot seal that can be used in retirement homes to cognitively stimulate people with dementia or calm them down. Two more recent companion robots are Kuri and Buddy. These two are supposed to be all-round companions that can play music, remind people of tasks and duties, and, [thanks to a built-in camera, be sent to specific places in your house to check something out](https://www.wired.com/story/companion-robots-are-here/) (see also [this overview of personal robots](https://www.technologyreview.com/s/539356/personal-robots-artificial-friends-with-limited-benefits/)).
There are some things that speak in favor of having companion robots. There is some indication that companion robots increase interaction and communication in autistic children (Scassellati, Admoni & Matarić 2012). Companion robots may also ameliorate loneliness in some people, especially when they are elderly or socially isolated (Bemelmans et al. 2012). However, the cuteness and cuddliness of companion robots should not blind us to the ethical issues that need to be addressed. One of the problems concerns attachment and deception: Should we really create things that have a high potential for attachment on the part of the user, but where this attachment ultimately rests on a deception? After all, the robot pretends to be something that it is not: a friend or companion. In other words, do the benefits that a companion robot may bring outweigh the cost that said benefit is achieved by deceiving a human into thinking that he or she has a reciprocal relationship with it? (Sparrow & Sparrow 2006). Another ethically relevant issue is data security, because people interact and talk with these companion robots in intimate settings like their home. The information gathered in these interactions should be protected and stored securely, so as not to allow access by unauthorized third parties. Also, it is worthwhile to think about the ownership of the data that are gathered in these intimate contexts. Should the ownership of the data reside with the person that interacts with the companion robot, or is it legitimate that the company that produced the robot has ownership? (A similar concern can be raised regarding other technologies as well; think of devices and services like Amazon’s Alexa or Microsoft’s Cortana.) Another ethical issue concerns the level of authority and autonomy that we give to our companion robots. Should a companion robot that is ‘tasked’ with keeping a young child company be able to intervene when the child is about to do something that she is not supposed to do, eating candy for example? Some of these ethical issues also apply to assistive or care robots, to which we will turn next.
### Care robots
Care robots are robots that fulfill crucial tasks in the care of other people, primarily the elderly or bodily disabled. Such tasks may include grasping and lifting objects, or carrying and feeding people. An example of a state-of-the-art care robot is the Care-O-bot developed by the Fraunhofer Institute, which is equipped with a tray for bringing things and a tablet interface for displaying websites. [Further, the robot can remind its user to take medicine or call for help when the user has fallen and cannot get up](http://www.care-o-bot.de/en/care-o-bot-3.html).
However, we should not be so careless as to neglect some crucial ethical issues when it comes to care robots. One of the most pressing issues is the potential conflict between the values of autonomy and freedom of choice on the part of the user and the level of interference in the life of the elderly. For example, how persistent should the robot be if a person refuses to take their medicine? Another obvious issue concerns data security. Care robots are used in a sensitive environment and may also have access to medical and other personal data of their owner, so it needs to be ensured that these data are safe and do not get into the hands of people who would exploit them. Further, care robots may lead to a decrease in social contact on the part of the elderly, because relatives may choose to deploy a robot instead of a human caretaker, or visit less frequently because grandma has a robot companion. Also, people that are cared for by robots may feel objectified by being handled by a machine. Further, as with companion robots above, the issue of deception lurks: it may be argued that care robots create the illusion of a relationship because they ‘deceive’ the user or patient by pretending to be a companion or friend although in reality they do not care. Ultimately, when it comes to care robots, there are also some broader societal issues that we have to take into account. We should ask ourselves in what kind of society we want to live. Do we want to hand our most vulnerable members of society over into the care of robots, and if so, to what extent exactly? The answer to questions like this should concern everyone and should not be left exclusively to the people that drive technological development. Speaking of driving, the last robot technology that we will have a closer look at is self-driving cars.
### Autonomous vehicles
If you follow the media, you will be familiar with both Tesla’s and Google’s self-driving cars. However, given the price of a Tesla car, maybe a more relatable example is the self-driving pizza car [that is being tested in a collaboration between Ford and the pizza chain Domino’s](https://medium.com/self-driven/how-pizza-is-helping-us-design-our-self-driving-future-a78720818e99). This is how the self-driving pizza car is supposed to work: You order the pizza and an employee puts it into the self-driving delivery vehicle. Then, the car finds its way to your house autonomously. When the car with the pizza arrives at your place, you take out the pizza and the car drives off to the pizza place again. It is likely that we will actually see self-driving pizza cars in the not-too-distant future, because other companies have entered the race. [Recently, Pizza Hut has teamed up with Toyota to work on its own version of an autonomous pizza delivery vehicle](https://www.eater.com/2018/1/8/16865982/pizza-hut-toyota-self-driving-truck).
Nevertheless, despite the advantages of self-driving cars, some ethical issues need to be discussed. Similar to the military robot technology that we have addressed above, there is the issue of responsibility ascription and distribution. Who should we hold responsible when a self-driving car causes an accident? A related issue concerns what kind of decision capabilities we want in a self-driving car. Think about a critical traffic situation, for example a version of the trolley scenario that we looked at in the section on Asimov’s laws. Imagine there is a group of people ahead, and a choice needs to be made between running over the group, steering to the left and running over one person, or steering to the right and crashing into a wall, possibly injuring the people in the car. Here the question naturally arises of which criteria the autonomous car is supposed to base its decision on. One option is to give the car no decision power in these situations and leave it up to the driver. However, what if the driver is not attentive? Should the car then be allowed to decide on an option? (The sketch below illustrates this design question.) Ultimately, we have to ask ourselves what risk we want to take as a society and whether the benefits of having self-driving cars on the street outweigh the dangers and risks. Another crucial and not-to-be-neglected ethical issue is the potential loss of jobs that comes with self-driving cars. [According to the American Trucking Associations there are 3.5 million truck drivers in the US](http://www.trucking.org/News_and_Information_Reports_Industry_Data.aspx). You would not need them anymore if trucks could drive autonomously. The same goes for our self-driving pizza delivery vehicle, because it eliminates the human element in pizza delivery. In the concluding section, we will see that robots may not only come for our jobs but also for our rights.
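To make the decision-capability question concrete, here is a hypothetical sketch. The `Option` type, the policy of deferring to an attentive driver, and all the numbers are invented for illustration; real systems do not work from such tidy inputs.

```python
# A hypothetical sketch of the design question: should the car decide
# itself (e.g., by minimizing expected harm) or defer to the driver?
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_harm: float  # crude stand-in for expected injuries

def decide(options: list[Option], driver_attentive: bool) -> str:
    # One possible policy: defer to the human whenever possible.
    if driver_attentive:
        return "hand control to the driver"
    # Otherwise the car must pick -- and note that *this* criterion
    # (minimize expected harm) is itself a contested ethical choice.
    return min(options, key=lambda o: o.expected_harm).name

options = [
    Option("brake and stay in lane", expected_harm=5.0),
    Option("swerve left into one person", expected_harm=1.0),
    Option("swerve right into the wall", expected_harm=0.8),
]
print(decide(options, driver_attentive=False))  # swerve right into the wall
```

The point of the sketch is not the particular numbers but that someone has to choose the criterion and the fallback policy in advance, which is exactly the societal question raised above.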
### Ethical treatment of robots?
Remember, at the beginning of this chapter we said that ethics not only deals with justifiable conduct towards other people and non-human animals, but that ethics nowadays is also concerned with the right conduct towards artificial products. Consider this example: In October 2017, Saudi Arabia granted citizenship to the sophisticated humanoid robot called Sophia. [This makes Sophia the first robot in the world to receive citizenship](http://www.hansonrobotics.com/robot/sophia/). This incident suggests that we may want to start thinking about how we treat robots and what part they will play in our social world. Should we regard them as persons and grant them rights? After all, we regard companies as persons and grant them certain rights. Further, is it possible to treat robots in an unethical way (e.g., by harming them)? We will likely be confronted with these and similar questions in the future. Even more so because robots will likely reach a level of sophistication that will prompt us to rethink what it is that distinguishes us from them. So, we had better get a head start in thinking about these issues instead of trying to catch up with technical development later.