Wendell Wallach.  Artificial Morality.

Questions:

1. Wallach envisions a future in which computers have comprehensive rationality; that is, they do not suffer from the bounded rationality characteristic of today’s humans and robots. I would argue forcefully that this is a seriously flawed assumption, and that embodied robots will face a significant bounded rationality problem of their own. First, agree or disagree with me and explain why in concrete terms. Second, if I am in fact right that this perspective is flawed, explain the impact of this changed assumption on the article’s key points regarding artificial moral intelligence.

 

Wendell Wallach. Implementing Moral Decision Making Faculties in Computers and Robots.

Consider a home care robot for an elderly person. It helps with simple chores, like cleaning the floor, but it also keeps an eye on the occupant. In cases where the person is having trouble (say, spending too long in the bathroom), it has the ability and the option to reach out to the occupant’s children by contacting them. It has to weigh privacy concerns against the maintenance of safety and health.
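To make the scenario concrete, here is a minimal sketch of how such a privacy-versus-safety trade-off might look as a hand-coded (top-down) rule. All names, thresholds, and parameters are hypothetical illustrations, not anything from Wallach's article:

```python
# Toy top-down rule for the home care robot: escalate to the family only
# when the safety signal clearly outweighs the default presumption of
# privacy. All thresholds are invented for illustration.

def should_alert_family(minutes_in_bathroom, occupant_responds, prior_incidents):
    """Return True if the robot should contact the occupant's children."""
    # Default: respect privacy; routine behavior is never reported.
    if minutes_in_bathroom < 30:
        return False
    # The occupant answered a check-in prompt, so privacy wins.
    if occupant_responds:
        return False
    # A long silence plus a history of incidents (or a very long silence
    # alone) tips the balance toward safety.
    if prior_incidents > 0 or minutes_in_bathroom >= 60:
        return True
    return False
```

Even this toy version shows why the trade-off is hard: every threshold encodes a contestable moral judgment about when safety overrides privacy.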

2. Regarding the top-down strategy that Wallach describes on pages 466-467: in computer science and AI terms, what are the key breakthroughs we would need in order to implement such top-down moral reasoning for our home care robot? How many years away do you think this is? Explain your estimate.

3. Regarding the bottom-up strategy that Wallach describes on page 467: restate just how this robot becomes a moral reasoning system. Wallach suggests that this bottom-up approach has promise because of its embedded learning nature: morality is learned in a context of existence and action relevant to the agent’s own experience. Yet at the bottom of p. 467 Wallach suggests porting the resulting learned system from one robot to another (in effect, cloning its moral reasoning). What technical challenges do you foresee in this bottom-up approach? How many years away do you think this may be?
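As a point of contrast with the top-down rule, here is a minimal sketch of what bottom-up learning and the porting (cloning) step might look like. The class, parameters, and update rule are invented assumptions for illustration only:

```python
# Toy bottom-up learner: the robot adjusts its alert threshold from
# caregiver feedback, and the learned parameter can then be "cloned" to
# another robot. All names and numbers are illustrative assumptions.

class BottomUpAlerter:
    def __init__(self, threshold_minutes=30.0):
        # The learned boundary between privacy and safety.
        self.threshold = threshold_minutes

    def decide(self, minutes_unresponsive):
        return minutes_unresponsive >= self.threshold

    def feedback(self, minutes_unresponsive, alert_was_right):
        """Nudge the threshold based on whether the last decision was right.

        A false alarm raises the threshold (more privacy-respecting);
        a missed emergency lowers it (more safety-conscious).
        """
        decided = self.decide(minutes_unresponsive)
        if decided and not alert_was_right:
            self.threshold += 5.0
        elif not decided and alert_was_right:
            self.threshold -= 5.0

    def clone_to(self, other):
        # The porting step Wallach suggests: the clone inherits a threshold
        # tuned to a *different* occupant's routines and circumstances.
        other.threshold = self.threshold
```

The cloning method makes the tension visible: the transferred parameter embodies one occupant's learned context, which may be exactly wrong for another household.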
