Robot Futures 2

Assignment due Monday, 25 November

Questions for Chapter 4: Attention Dilution Disorder

1. Legal-ethical boundary setting

Imagine a few years from now, under the assumption that devices like Google Glass catch on, whether made by Google or by others. We have already spoken about the “no face recognition” rule that Google has unilaterally imposed to draw boundaries around functionality. Consider and propose three other rules that companies or legislators may impose on these devices as they become more widespread. Note that you don’t need to like or agree with the rules; I’m asking you to predict the future you think is likely, not a future that is desirable.

 

2. Another boundary question.

Now this one is more personal. Consider Google Now several years in the future. It can respond to your emails, continue phone conversations or start conversations for you, deal with social media requests, appointments, etc. It’s up to you to decide what you *do* want it to do for you, and when you want to be involved “manually.” For you, yourself, in a few years, assuming this technology is ready: what distribution of responsibilities would you strive for between yourself and your “AI” personal assistant? I am hoping you can name a few responsibilities in each category, or create some sort of rule for how you decide which responsibility belongs where.

 

 

Robot Futures 1

Assignment due Wednesday, 20 November

We will be reading only two chapters this semester. For Wednesday, please read Chapter 1: New Mediocracy. For the following Monday (25 November) we will be reading Chapter 4: Attention Dilution Disorder.

 

Question for Chapter 1:

The core argument in this chapter is essentially about free will: about how increasingly effective, customized marketing can undercut the concept of fair and free choice, whether in the arena of consumerism or even in the sphere of politics. For your reaction, I am curious whether you think that robotic sensing and action change this basic tension between “marketing advances” and free choice in any fundamental way, or whether this is nothing but an incremental change to the relationship between information, corporations and people. So, reflect on the chapter and explain whether you find this to be incremental or on the cusp of a major transformation in terms of power relationships, and why.

Robots, Law and Privacy

M. Ryan Calo
Robots and Privacy – draft book chapter

Professor Calo’s book chapter on robotics and privacy marks out important categories of privacy, drawn from both society and law, that may change disruptively as robots become even more common throughout society and home environs. This chapter touches on ethics, but more importantly it provides analysis across the privacy spectrum, and that analysis in turn informs consideration of the ethical ramifications of robotics when personal privacy and the related concept of personal freedom are at stake.

Questions:

1. pp 8-9: Prof. Calo recalls the work of Weizenbaum in considering the future impact of voice recognition technologies funded by the Office of Naval Research decades ago. His suggestion is that there are ethical considerations at play in how advancing robotic technologies chain us toward a possible future where robots have negative ethical impact. What can you suggest as concrete examples of such ethically worst-case future scenarios? Draw a line from today’s research (be as specific as possible) to your conjectured future, and describe the repercussions in an ethical framework.

 

2. pp 18-20: Calo states that people respond to social machines as if a person were present, and he goes on to describe how people avoid damaging machines and exhibit other signs of following anthropocentric social conventions even when interacting with such machines. How do you respond to these claims? Agree and justify, or disagree and explain. In either case, consider other interactive devices (e.g., mobile phones, computers) and see if you can provide evidence for your point of view by considering human behavior with other systems in today’s society.

 

Robot Servitude

Stephen Petersen. The Ethics of Robot Servitude.

This article applies both Kantian and Aristotelian ethical analyses to the question of whether it is appropriate for humans to create robots that serve us. Questions arise both from the manner in which Petersen anthropomorphizes the concept of programmed robots and from the way in which he applies ethical frameworks to the question he proposes to answer.

1. Petersen defines Engineered Robot Servitude on page 3. Critique the definition itself. Describe at least two potential problems you can find with the definition’s wording and propose an alternative wording that you believe more accurately captures the concept of ERS as you would interpret it.

2. Petersen also states on page 3 that post-identity modification is inherently wrong, not only in humans but also in robots. I could construct a counter-argument that robots, as programmable computers, have precisely the kind of identity that is designed for continual modification. Writing, compiling and executing new computer code is how we modify robots and computers all the time, and this does not strike me, in itself, as uncontroversially wrong. If you agree with me, then how does the concept of identity relate to a programmable robot as opposed to a human? If you disagree with me, then motivate your agreement with Petersen (or your subtle middle ground).

3. Petersen presents a Kantian and an Aristotelian set of arguments for why ERS may be different from EHS, or for why EHS and ERS may in fact be permissible. Choose one of our ethical frameworks and apply it yourself to Petersen’s ERS-specific question (ignore EHS) and, in this application, be sure to consider consequences and/or character for society as a whole (e.g., humanity, Gaia, civilization).

Military Robotics

Ron Arkin.  Governing Lethal Behavior in Autonomous Robots (pp 29-36; 37-48; 62-67; 138-143).

Ron Arkin’s book is a good example of how a robotics technologist would write about a question as weighty as lethality and technology. You have significant excerpts to read from this text, and so the questions index you into specific passages.

 

Reading questions: 

Chapter 4: Related Philosophical Thought pp 37-48

1. There is an argument Arkin uses at times to dismiss certain considerations by explaining that those considerations apply not just to autonomous robots in war, but to many sorts of technologies in war. How should the fact that an issue is larger than a specific technological implementation color our consideration of its ethical implications?

Section 6.2 Ethical Behavior (pp 62-67)

2. This section summarizes the formulaic/architectural side of Arkin’s implementation. His view of this architecture seems premised on two assumptions: one, that a robot can perceptually bring in enough information to make good choices; and two, that a robot is deciding whether or not to take fairly discrete, atomic actions. What happens to this architecture if machine perception continues to be a problem into the next decade, and if action is not atomic but rather a continuous stream of decisions about how to behave over time?
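To make the two assumptions in this question concrete, here is a minimal sketch of a discrete "go / no-go" gate on a single proposed action. This is purely illustrative and is not Arkin's implementation; the names (`action_permitted`, `no_protected_sites`, `protected_site_in_range`) are hypothetical choices for this example.

```python
# Illustrative sketch only (not Arkin's code): a discrete gate on one atomic action.
# All function names, state keys, and values below are hypothetical.

from typing import Callable, Dict, List

Constraint = Callable[[str, Dict[str, bool]], bool]

def action_permitted(action: str, world_state: Dict[str, bool],
                     constraints: List[Constraint]) -> bool:
    """Permit the proposed atomic action only if every constraint allows it.
    Both assumptions show up here: world_state must be perceived correctly,
    and the decision concerns one discrete action at a time."""
    return all(constraint(action, world_state) for constraint in constraints)

def no_protected_sites(action: str, world_state: Dict[str, bool]) -> bool:
    """Hypothetical constraint: never engage if a protected site is perceived nearby."""
    return not (action == "engage_target"
                and world_state.get("protected_site_in_range", False))

# The gate's answer is only as good as the perceived state it is handed.
perceived_state = {"protected_site_in_range": True}
print(action_permitted("engage_target", perceived_state, [no_protected_sites]))  # False
```

Notice that a sketch like this has no natural place for behavior that unfolds continuously over time, which is exactly the tension the question asks about.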

 

Section 10.3 Ethical Adaptor (pp 138-143)

3. Arkin presents the guilt variable and guilt action threshold concepts for his architecture in this section. Do you believe that the mechanism Arkin introduces here helps a robot to act more ethically? Explain your response in detail, drawing on our other readings as applicable.
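For reference, here is a minimal sketch of the kind of mechanism Arkin describes: a scalar guilt value that only accumulates and, once it crosses a threshold, withholds further lethal action. This is not Arkin's code; the class name, method names, and numeric values are assumptions made for illustration.

```python
# Minimal sketch (not Arkin's implementation): a guilt value that only grows and,
# once past a threshold, suppresses lethal action. Names and numbers are hypothetical.

class GuiltAdaptorSketch:
    def __init__(self, guilt_threshold: float = 1.0):
        self.guilt = 0.0                      # accumulated guilt, starts at zero
        self.guilt_threshold = guilt_threshold

    def record_violation(self, severity: float) -> None:
        """Raise guilt after a perceived ethical violation (e.g., collateral
        damage exceeding expectations). Guilt is never reduced autonomously."""
        self.guilt += max(0.0, severity)

    def lethal_action_permitted(self) -> bool:
        """Lethal actions are withheld once guilt reaches the threshold."""
        return self.guilt < self.guilt_threshold

adaptor = GuiltAdaptorSketch(guilt_threshold=1.0)
print(adaptor.lethal_action_permitted())   # True: no violations recorded yet
adaptor.record_violation(severity=1.2)     # a serious perceived violation
print(adaptor.lethal_action_permitted())   # False: lethality now suppressed
```

One thing the sketch makes visible: the mechanism only restricts behavior after a violation has already been registered, which may be worth weighing in your answer.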

 

P.W. Singer.  Wired for War (pp 19-41; 123-134; 205-236; 315-325; 382-412)

This set of excerpts provides broad background on the state of the art in military robotics as well as several glimpses into the near future.

Question:

4. These excerpts paint a picture of how robots are used today in warfare, how they may be used tomorrow, and how the public and policy-making bodies consider robots of both remote-controlled and autonomous kinds. Imagining an axis along which one demarcates levels of autonomy (from none to very high), identify at least four positions along this axis and describe, for each position, a war-fighting robot that already exists or is likely to exist.

 

Artificial Morality: Sullins and Sawyer

John Sullins.  When is a Robot a Moral Agent?

1. Can you identify non-robotic, non-human entities in our society that are moral agents by the definitions espoused in Sullins’ work? Explain.

2. Sullins’ definition of ‘autonomy’ on page 26 suggests a very low bar, enabling many computer-controlled systems to be labeled as autonomous. Autonomy may be more important than he suggests, so it is worth your time to think about defining autonomy more precisely for the consideration of moral agency in the case of autonomous robots specifically. So, give me a new, deeply thought-out definition of autonomy for this case. Please don’t invent a definition that can never be satisfied!

 

Robert Sawyer.  Robot Ethics.

3. Sawyer states that attempts to govern complex robotic behavior with coded strictures may be misguided. He uses some science fiction examples as data points supporting this hypothesis, but I am looking for a deeper analysis. We often govern complex behavior with coded strictures; examples include the inspection and testing regulations for automobiles and the annual service and maintenance requirements for airplanes. Take the position that robots are indeed different. Identify and justify three ways in which coded regulations will not work for governing robot behavior the way they do for extant, non-robotic technology products.

Artificial Morality: Wallach

Wendell Wallach.  Artificial Morality.

Questions:

1. Wallach envisions a future in which computers have comprehensive rationality; that is, they do not suffer from the bounded rationality characteristic of today’s humans and robots. I would argue forcefully that this is a seriously flawed premise, and that embodied robots will have a significant bounded rationality problem. First, agree or disagree with me and explain why in concrete terms. Second, if I am in fact right that this is a flawed perspective, explain the impact of this changed assumption on the key points of this article regarding artificial moral intelligence.

 

Wendell Wallach. Implementing Moral Decision Making Faculties in Computers and Robots.

Consider a home care robot for an elderly person. It helps with simple chores, like cleaning the floor, but it also keeps an eye on the occupant. In cases where the person seems to be having trouble (say, spending too long in the bathroom), it has the ability, and the option, to reach out to the occupant’s children by contacting them. It has to weigh privacy concerns against the maintenance of safety and health.

2. Consider the top-down strategy that Wallach describes on pages 466-467. In computer science and AI terms, what are the key breakthroughs we would need in order to implement such top-down moral reasoning for our home care robot? How many years away do you think this is? Explain your estimate.

3. Consider the bottom-up strategy that Wallach describes on page 467. Restate just how this robot becomes a moral reasoning system. Wallach suggests that this bottom-up approach has promise due to its embedded learning nature, so that morality is learned in a context of existence and action relevant to one’s own personal experience. Yet at the bottom of p. 467 Wallach suggests porting the resulting, learned system from one robot to another (in effect, a cloning of moral reasoning). What technical challenges do you foresee in this bottom-up approach? How many years away do you think this may be?

Rhetoric and Robotics

Bruno Latour.  Science in Action.

This very short section speaks about the rhetoric inherent in science practice.

 

Reading question:

Rhetoric, pp 30-33

1. In some of the robot visionaries’ articles (such as those by Bill Joy, Ron Arkin and Ray Kurzweil) there appears to be an appeal towards argument from authority in isolated texts, just as described by Latour. Yet these are opinion-setting pieces for systems that have not yet been built. Is there an alternative, more rhetorically balanced way of presenting these ideas so that readers have a fairer chance of correctly interpreting the opinions and concepts expressed?

 

Drew McDermott.  Artificial Intelligence meets Natural Stupidity.

McDermott’s paper certainly caused ripples when it was first published. It represents a state of mind and a snapshot in time that is of great value to revisit.

Reading question:

1. Do you think the inflationary spiral that McDermott described for AI in 1981 exists in robotics today? Describe its instantiation in our field if you believe it exists. If you believe we do not suffer from this problem, describe why you think we have managed to avoid it.

Rhetoric of Robotics is an unpublished manuscript by me. Please read it only as background; no questions are required on it.

Self-Replication

Cho. Making Machines That Make Others of Their Kind.

Answer only #2: By the end of this article, it ought to be clear that what self-replication means, and even who or what achieves it today, is utterly unclear. Come up with a new term and a precise, workable definition. The term I am looking for would name a characteristic of robotic systems such that: (1) no robot system today achieves this characteristic; and (2) if a robot in the future did achieve this characteristic, we could have a whole lot of similar robots quickly thereafter, without any explicit decisions by humans to manufacture or produce more copies. This new term, which I will call self-replication 2.0 until you invent a much better term and definition, should represent an exciting, disruptive point in the possible future trajectory of robotics.

 

Moshe Sipper and James Reggia. Go Forth and Replicate. Scientific American special issue, pp 49-57

Prognostication. How many years away do you believe we are from a primitive form of robot self-replication? Define the timespan and also the level of technical sophistication you are suggesting at that point.

 

Kenneth Chang. Scientists Report They Have Made Robot That Makes Its Own Robots.

This news article is useful for rhetorical analysis.

(3) Rewrite the core of this New York Times article in three paragraphs. Your job is to write the article so that the lay reader can actually form a considered opinion of the work, and so that the reader’s future thoughts about legislation, ethics and technology’s future can be informed and supported by the information you present.

Gray Goo and Self-Replication (1)

Bill Joy.  Why the Future Doesn’t Need Us.

This work sparked every possible emotional and technical response imaginable in its time. You should read it for yourself. Note that I will hand out official published copies of Bill Joy’s article, with pictures, in class!

(reduced) Reading Questions:

 

1. But while I was aware of the moral dilemmas surrounding technology’s consequences in fields like weapons research, I did not expect that I would confront such issues in my own field, or at least not so soon.

Joy’s statement above suggests a naiveté about the separation of ethics from technological progress and engineering in general. Can any engineering or science field be fully divorced from moral issues, or are moral issues always part and parcel of any field of technical inquiry? Justify your answer to this question in detail.

2. By 2030, we are likely to be able to build machines, in quantity, a million times as powerful as the personal computers of today – sufficient to implement the dreams of Kurzweil and Moravec.

First, do some research to see whether this prediction, in terms of computational power, is still roughly on track six years after Joy’s article was published. Explain what you find.

3. The evolution of Oppenheimer’s thinking is a fascinating study of a scientist’s response to a deeply disruptive scientific discovery. Joy describes Oppenheimer’s response to the atomic bomb project following V-E Day, following the military use of two atomic bombs, and then in the years that followed. Describe the evolution of Oppenheimer’s attitude over time, couching your description in, and engaging with, the tools you have been learning in our class.

 

Selmer Bringsjord.  Ethical robotics: the future can heed us.

1. You have now read Bill Joy’s original article, followed by Bringsjord’s response. On balance, what do you accept of what Bringsjord says, what do you reject, and where does this leave you with respect to Joy’s original article?