Ron Arkin.  Governing Lethal Behavior in Autonomous Robots (pp 29-36; 37-48; 62-67; 138-143).

Ron Arkin’s book is a good example of how a robotics technologist writes about a question as weighty as lethality and technology. You have significant excerpts to read from this text, and the questions below point you to specific passages.


Reading questions: 

Chapter 4: Related Philosophical Thought pp 37-48

1 There is an argument Arkin uses at times to dismiss certain considerations: he explains that those considerations apply not just to autonomous robots in war, but to many sorts of wartime technologies.  How should the fact that an issue is larger than one specific technological implementation color our consideration of its ethical implications?

Section 6.2 Ethical Behavior (pp 62-67)

2 This section summarizes the formulaic/architectural side of Arkin’s implementation.  His view of this architecture rests on two assumptions: first, that a robot can perceptually take in enough information to make good choices; and second, that a robot is deciding whether or not to take fairly discrete, atomic actions.  What happens to this architecture if machine perception continues to be a problem into the next decade, and if action is not atomic but a continuous stream of decisions about how to behave over time?
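To make the two assumptions concrete, here is a minimal sketch of the kind of constraint check an ethical-governor-style architecture performs before permitting a lethal action.  All names, fields, and the specific constraints are illustrative assumptions for discussion, not Arkin’s actual formalism; note how the design presumes both a reliable percept and a single, atomic go/no-go decision.

```python
# Hypothetical sketch of a governor-style gate on a proposed lethal action.
# The Percept fields and constraint checks are invented for illustration.

from dataclasses import dataclass

@dataclass
class Percept:
    target_identified_as_combatant: bool   # assumes perception solves discrimination
    noncombatants_in_blast_radius: int     # assumes perception can count bystanders

def governor_permits_lethal_action(p: Percept) -> bool:
    """Suppress a proposed lethal behavior unless every constraint is satisfied."""
    if not p.target_identified_as_combatant:    # discrimination constraint
        return False
    if p.noncombatants_in_blast_radius > 0:     # crude proportionality proxy
        return False
    return True

print(governor_permits_lethal_action(Percept(True, 0)))   # True
print(governor_permits_lethal_action(Percept(True, 2)))   # False
```

If perception remains unreliable, both fields of `Percept` become untrustworthy; if action is a stream of behavior over time rather than one atomic choice, a single boolean gate like this no longer fits the decision being made.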


Section 10.3 Ethical Adaptor (pp 138-143)

3 Arkin presents the guilt variable and guilt action threshold concepts for his architecture in this section.  Do you believe that the mechanism introduced here by Arkin helps a robot to act more ethically?  Explain your response in detail, drawing on our other reading as applicable.
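As an aid to discussion, here is a minimal sketch of the guilt mechanism as the section describes it: guilt accumulates when observed outcomes exceed expectations, never decreases during the mission, and crossing the guilt action threshold disables further lethal behavior.  Variable names, the damage-difference update rule, and the numbers are illustrative assumptions, not Arkin’s actual model.

```python
# Hypothetical sketch of a guilt-variable / guilt-threshold adaptor.
# Update rule and field names are invented for illustration.

class EthicalAdaptor:
    def __init__(self, guilt_threshold: float):
        self.guilt = 0.0                    # guilt is monotonically nondecreasing
        self.guilt_threshold = guilt_threshold
        self.lethality_enabled = True

    def after_engagement(self, expected_damage: float, observed_damage: float) -> None:
        # Guilt grows when observed damage exceeds what was expected.
        if observed_damage > expected_damage:
            self.guilt += observed_damage - expected_damage
        # Crossing the threshold deactivates lethal behavior for the mission.
        if self.guilt >= self.guilt_threshold:
            self.lethality_enabled = False

adaptor = EthicalAdaptor(guilt_threshold=5.0)
adaptor.after_engagement(expected_damage=1.0, observed_damage=4.0)
print(adaptor.lethality_enabled)  # True: guilt = 3.0, still below threshold
adaptor.after_engagement(expected_damage=1.0, observed_damage=4.0)
print(adaptor.lethality_enabled)  # False: guilt = 6.0, threshold crossed
```

When you answer the question, consider what this mechanism can and cannot capture: it only restricts future action after harm has already occurred, and it depends on the robot’s own assessments of expected and observed damage.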


P.W. Singer.  Wired for War (pp 19-41; 123-134; 205-236; 315-325; 382-412)

This set of excerpts provides broad background on the state of the art in military robotics as well as several glimpses into the near future.


4 These excerpts paint a picture of how robots are used in warfare today, how they may be used tomorrow, and how the public and policy-making bodies regard both remote-controlled and autonomous robots.  Imagine an axis marking levels of autonomy, from none to very high.  Identify at least four positions along this axis and describe, for each position, a war-fighting robot that already exists or is likely to exist.