Larry Hinman, The Ethics of Consequences: Utilitarianism
NOTE: I am going to delay this due date to Monday 30 September. It is a fair amount of reading, more than can reasonably be done in one and a half days.
This book chapter by Prof. Lawrence Hinman provides an excellent introduction to consequentialism, which is highly relevant to robotics and to future engineering and research considerations.
1 Regarding Hinman's interview with Peter Railton: by the end it is clear that the seemingly black-and-white theory of consequentialism can turn into a weakened form in practice, when real life runs up against absolute ethical demands. Railton explains why it is all right not to toe the line fully, that is, not to do everything possible for the good of the world when doing so would come at a cost to one's own children and family. If practical application of consequentialism or utilitarianism is thus somewhat approximate, describe how it could nonetheless be of benefit as an ethical framework for the practice of robotics.
2 By page 154 you have learned about act, rule, and practice utilitarianism, and several examples (admittedly not very realistic scenarios) should have convinced you that the picture of what is desirable within this framework is downright confusing. Hinman says on p. 154 that many suggest limits must be imposed on consequentialist calculation (for example, a hard-and-fast rule protecting human rights, or forbidding torture, that may not be crossed regardless of the consequentialist calculus). If we were to implement a utilitarian approach to ethical guidelines for robotics, can you suggest one or more key constraints that should be treated as hard limits that consequentialist calculation would not be allowed to cross? Defend your constraint or constraints.