John Sullins. When Is a Robot a Moral Agent?

1. Can you identify any non-robotic, non-human entities in our society that are moral agents by the definitions espoused in Sullins’ work? Explain.

2. Sullins’ definition of ‘autonomy’ on page 26 sets a very low bar, allowing many computer-controlled systems to be labeled autonomous. Autonomy may be more important than he suggests, so it is worth your time to think about defining it more precisely for the consideration of moral agency in the case of autonomous robots specifically. So: give me a new, deeply thought-out definition of autonomy for this case. Please don’t invent a definition that can never be satisfied!

 

Robert Sawyer.  Robot Ethics.

3. Sawyer states that attempts to govern complex robotic behavior with coded strictures may be misguided. He uses some science fiction examples as data points supporting this hypothesis, but I am looking for a deeper analysis. We often govern complex behavior with coded strictures; examples include the inspection and testing regulations for automobiles and the annual service and maintenance requirements for airplanes. Take the position that robots are indeed different. Identify and justify three ways in which coded regulations will not work for governing robot behavior the way they do for extant, non-robotic technology products.
