The Ethics of Character: Hinman (2)

Lawrence Hinman.  The Ethics of Character: Aristotle and Our Contemporaries.

This is the second of two excellent introductory ethics chapters we will be reading, courtesy of Hinman.

Reading question:

1 The Aristotelian approach to ethics, grounded in personal and societal character and virtue, is markedly different from consequentialism, which maps more naturally onto the way a robotics engineer reasons about future technological advances. Taking human flourishing as the litmus test of character ethics, evaluate the Roomba in terms of the positive ethical impact it is capable of having from an Aristotelian point of view. Identify at least three distinct ways in which such a home-cleaning robot can contribute to human flourishing; name and describe each.

Consequentialist Ethics

Larry Hinman, The Ethics of Consequences: Utilitarianism

NOTE: I am delaying this due date to Monday, 30 September. It is a fair amount of reading, too much for just a day and a half.

This book chapter by Prof. Lawrence Hinman is an excellent introduction to consequentialism, which has significant relevance to robotics and to future engineering and research decisions.

Questions:

1 Regarding Hinman's interview with Peter Railton: by the end it is clear that the almost black-and-white theory of consequentialism can take a weakened form in practice, when real life runs up against absolute ethical demands. Railton explains why it is alright not to toe the line fully, not to do everything possible for the good of the world when doing so would come at a cost to his children and family. If the practical application of consequentialism or utilitarianism is somewhat approximate, describe how it could nonetheless be of benefit as an ethical framework for the practice of robotics.

2 By page 154 you will have learned about act, rule, and practice utilitarianism, and several (not very real-world) examples should have convinced you that, within this framework, the picture of what is desirable is downright confusing. Hinman says on p. 154 that many suggest limits must be imposed on consequentialist calculation (for example, a hard and fast rule concerning human rights, or perhaps torture, not to be crossed in spite of the consequentialist calculation). If we were to implement a utilitarian approach to ethical guidelines for robotics, can you suggest one or more key limits that should be treated as hard limits beyond which consequentialist calculation would not be allowed to go? Defend your constraint or constraints.

The Singularity, chapter 2 of 2

David Pearce. The Biointelligence Explosion excerpt: pp. 1 – 16

I hope you enjoyed reading that; what a ride. Prepare for some fun in the classroom with this piece.

1. Pearce's trope is "modifying your own source code." Analyze this phrasing; it is intended to provoke a certain imagining on the part of the reader. First, is real-time genetic manipulation a modification of source code? Is "source code" an appropriate term for describing part of the human body's architecture? Finally, if we are modifying our source code and choosing to use open-source tools (human IDEs?) to do so, can you extrapolate from today's software engineering processes to imagine some of the issues and challenges we may face? Get dystopian.

2. Imagine that this article gained serious traction: it got printed in the Times and was read by everyone. Journalists start quoting it, pundits talk about this future. As a policymaker, how would you attempt to moderate the opinions that flow freely after non-experts read this article, so that people gain some perspective separating likely futures from exaggeration?

 

Joe Weizenbaum

Computer Power and Human Reason, pp. 1-16, 128-129, 226-227, 270-271

 

 

1. Weizenbaum states that there are some things computers ought not to do. What does he place in this category, and what is his reasoning?

 

2. What is your opinion of the answer to #1 above: do you agree or disagree with Weizenbaum? Justify your opinion.

 

 

The Singularity, chapter 1 of 2

This assignment is due Wednesday, 18 September.

 

P.W. Singer.  Wired for War (pp 94-108).

Singer does an outstanding job of describing the landscape of singularity proponents. Enjoy this and read it as background. If you have any comments, feel free to share them here. There are no required reading questions to answer.

 

Hans Moravec.  Rise of the Robots.

Ray Kurzweil. The Coming Merging of Mind and Machine.

[Note: these readings come from the large Scientific American collection available to you from the syllabus.]

These two well-respected thinkers write in the same Scientific American special issue about their visions of robotics' and humanity's future. Read both of these articles first. Hans Moravec and Ray Kurzweil are two of the deepest thinkers in the Singularity movement, which has won support from major corporations and has even founded a university in Silicon Valley.

Questions:

1 Assumptions.  Identify at least three major, unsupported assumptions made by each author that are foundational to his view of our singularity-driven future.  To be precise, this means you are identifying and discussing a total of six assumptions.

2 For each of Moravec and Kurzweil, answer the following sub-questions:

 a) How does the author deal with the ethical analysis of his imagined future in his article?  If you see an ethical discussion, identify it and evaluate it.

b) If you were trying to model how the author likely thinks about the ethical consequences of his work, what system or form of ethics do you believe he would represent?

3 Conduct your own ethical evaluation of the hypothesized singularity (the merger of human and robot forms), using either a consequentialist analysis or a character-based, Aristotelian analysis.

George’s News Submission

 
Title: Fox News Does Science 
 
Fair and Balanced? Perhaps. 
Grounded in Reality? Well, let's see.
 
Some students at UCSD built a Fire Surveillance Robot and produced a YouTube video demonstrating its ability to map its environment using stereoscopic cameras. The video attracted 20,000 views, won the grand prize at a Student Infrared Imaging Competition, then… got picked up by Fox News.
 
At the time the video became popular, I was working with Will Warren, one of the authors, who was surprised by the attention his research had attracted. 
 
We will discuss the actual work he did and Fox News’ take on it. 

Thomas Gieryn: Cultural Boundaries of Science (2)

This assignment is due 09 September.

The (Cold) Fusion of Science, Mass Media and Politics (pp 183-232)

Cold fusion is an outstanding cultural study of science gone awry: a reveal whose execution itself failed to illuminate, instead confusing all sorts of decision-makers and publics for quite some time. Create a cultural cartographic representation of the cold fusion story as you see it, a map that identifies the various concepts and parties relevant to this story. Take liberties and enjoy this one.

As for posting this, I don't know whether comments can have images attached in WordPress. If so, go for it! If not, post your image somewhere online and include a link in your WordPress comment that resolves to your image.

 

Bruno Latour: Science in Action (1)

This assignment is due 04 September.

Latour’s dense writing yields many insights. This selection speaks to the community of science and the loneliness of dissent.

Reading questions:

 Dissenters and Loneliness, pp 40-44

1 Dissent is lonely, and the question we can consider is this: if robotics behaves like a powerful steam locomotive, with strong work and strong articles charging forth an agenda that does not incorporate ethics into the practice of robotics, can the ethical roboticist be crowded out of the equation entirely because his or her ideas are too lonely to gain any traction at all? Is there a structural or personal way we could avoid this rut?

Diesel and credit assignment, pp 104-108

2 This example reinforces the [somewhat trite] point that science is done not by a single person but by a whole community acting over time. Does this diminish or amplify the importance of a single person's ethical decision-making regarding the robotics work that they do? Explain your answer.