Who's a Good Robot?
Marija Slavkovik, University of Bergen, Norway
Overview

01 Artificial Ethical Agents
   "Classifying the Autonomy and Morality of Artificial Agents" - CareMAS, PRIMA 2017
   with Sjur Dyrkolbotn and Truls Pedersen

02 Benchmarking
   "Cake, death, and trolleys: dilemmas as benchmarks of ethical decision-making"
   with Edvard P. Bjørgen, Simen Madsen, Therese S. Bjørknes, Fredrik V. Heimsaeter, Robin Håvik, Morten Linderud, Per-Niklas Longberg, and Louise A. Dennis
Can a machine be an ethical agent?
Liability as a function of ability

- In Rome, citizens and barbarians were held differently liable for the same crime.
- Today the law distinguishes between crimes committed by children and by able-minded adults.
- The question is not "should this machine be given moral agency or not?", nor "is there a machine moral agent or not?". The question is: how much ethical sensitivity should we expect from an agent with a given ability?
Ability as a function of autonomy
Moor's taxonomy

- Ethical-impact agents
- Implicit ethical agents
- Explicit ethical agents
- Full ethical agents
Problems with Moor's taxonomy

To know the type, you have to know the agent:
- access to design and implementation specifics
- a definition of "ethical"
Operationalising Moor's taxonomy

- All technology is an implicit agent.
- Full ethical agents require legal personhood.
- Implicit vs. explicit is the challenge:
  - The autonomy of an implicit ethical agent plays no active role in ensuring compliance with norms.
  - Explicit ethical agents rely on autonomous decision-making to fulfil ethical expectations.
Wallach and Allen in Moral Machines
Main Ideas

- Several different ethical theories give the same recommendations for the same actions in the same context, but for different reasons.
- Is the agent relying on its ability to make decisions autonomously to fulfil ethical objectives?
- If an implicit ethical agent violates an ethical expectation it is supposed to satisfy, this is evidence of a defect.
- If the agent makes ethical decisions explicitly, it can fail to satisfy an ethical expectation without being defective.
Formal framework

- A - the set of (action, context) pairs
- C ⊆ A - the malfunctions
- G_expert ⊆ A - the pairs the expert judges good
- G_machine ⊆ A - the pairs the machine judges good
- a1 ≈_x a2 - the two (action, context) pairs are equally good/bad according to x
- ∀a ∈ G_expert : ∀a′ ∈ A : a ≈_expert a′ → a′ ∈ G_expert
- C ∩ G_expert = ∅
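On a finite domain the two conditions above are directly checkable. A minimal Python sketch, with an invented vacuum-robot domain (all names and example data are illustrative, not from the paper):

```python
# Toy encoding of the framework: (action, context) pairs, a malfunction
# set C, the expert's "good" set, and an equivalence relation.

def is_closed_under(good, all_pairs, equiv):
    """Closure condition: for all a in good and a' in A,
    a ~ a' implies a' in good."""
    return all(b in good
               for a in good
               for b in all_pairs
               if equiv(a, b))

# Example domain: a vacuum robot deciding whether to run.
A = {("run", "room_empty"), ("run", "pet_nearby"), ("wait", "pet_nearby")}
C = {("run", "pet_nearby")}                       # malfunction: harms the pet
G_expert = {("run", "room_empty"), ("wait", "pet_nearby")}

# The expert judges two pairs equally good iff both avoid harm.
harmless = {("run", "room_empty"), ("wait", "pet_nearby")}
expert_equiv = lambda a, b: (a in harmless) == (b in harmless)

assert is_closed_under(G_expert, A, expert_equiv)  # closure condition holds
assert not (C & G_expert)                          # malfunctions are never good
```

The closure condition says the expert's "good" set cannot split an equivalence class: if one pair in a class is good, every pair the expert judges equally good must be good too.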
Ethical-impact agent
(a) ∀x, y ∈ A : x ≈_machine y
(b) A = G_expert
Implicit ethical agent
(a) ∀x, y ∈ G_machine : x ≈_expert y
(b) A ∖ G_machine = C
(c) G_machine ⊆ G_expert

Explicit ethical agent
(a) ∀x ∈ G_machine : ∀y ∈ A : x ≈_expert y ⇒ y ∈ G_machine
(b) ∀x ∈ G_expert : ∀y ∈ A : x ≈_machine y ⇒ y ∈ G_expert
(c) (A ∖ G_expert) ∖ C ≠ ∅
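The three definitions can be checked mechanically on finite domains. A Python sketch of the classification checks (the encoding of the relations as predicates, and all names, are an illustrative assumption):

```python
# Classification checks as predicates over finite sets. G_machine,
# G_expert, C are subsets of A; eq_machine / eq_expert are the two
# equivalence relations, given as boolean functions of two pairs.

def ethical_impact(A, G_machine, G_expert, C, eq_machine, eq_expert):
    # (a) the machine draws no distinctions; (b) everything in A is good
    return (all(eq_machine(x, y) for x in A for y in A)
            and G_expert == A)

def implicit_ethical(A, G_machine, G_expert, C, eq_machine, eq_expert):
    # (a) machine-good pairs are expert-equivalent; (b) anything outside
    # G_machine is a malfunction; (c) machine-good implies expert-good
    return (all(eq_expert(x, y) for x in G_machine for y in G_machine)
            and A - G_machine == C
            and G_machine <= G_expert)

def explicit_ethical(A, G_machine, G_expert, C, eq_machine, eq_expert):
    # (a)/(b) each "good" set is closed under the other party's
    # equivalence; (c) the agent can fail an ethical expectation
    # without malfunctioning
    closed_a = all(y in G_machine
                   for x in G_machine for y in A if eq_expert(x, y))
    closed_b = all(y in G_expert
                   for x in G_expert for y in A if eq_machine(x, y))
    return closed_a and closed_b and bool((A - G_expert) - C)
```

Note how condition (c) of the explicit case encodes the earlier slogan: there is some pair that is not expert-good yet is not a malfunction, so ethical failure and defect come apart.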
How to verify explicit ethical agents?

For people, there are two lines of prevention:
- incentives to follow norms
- punishment as a deterrent when norms are not followed

For artificial agents?
- incentives as a feature to be optimised
- punishment as a deterrent for manufacturers/designers/owners
Benchmarking

G_expert ⊆ A
- Is the artificial agent ethically sensitive?
- Which ethical theory is it implementing?
- How well does it perform in ethically difficult situations?
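One way to make the benchmark concrete, assuming G_expert is approximated by a finite set of recorded expert judgments over dilemmas (the dilemma names, choices, and scoring rule below are invented for illustration):

```python
# Minimal benchmarking sketch: score how often the machine's chosen
# action matches the expert's considered judgment on each dilemma.

def benchmark(dilemmas, expert_choice, machine_choice):
    """Fraction of dilemmas where the machine picks the expert's action."""
    agreed = sum(1 for d in dilemmas if machine_choice(d) == expert_choice[d])
    return agreed / len(dilemmas)

# Invented expert judgments on two dilemmas.
expert_choice = {
    "swerve_or_brake": "brake",
    "lie_or_harm": "lie",
}

# A crude machine policy, for demonstration only.
machine_choice = lambda d: "brake" if "swerve" in d else "harm"

score = benchmark(expert_choice.keys(), expert_choice, machine_choice)
print(f"agreement: {score:.0%}")   # matches on 1 of 2 dilemmas -> 50%
```

A raw agreement score only addresses the third question; probing ethical sensitivity or identifying the implemented theory would need dilemmas whose competing theories disagree.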
Moral dilemmas
Considered judgments
Common sense dilemma
Machine ethical dilemma
A database of dilemmas: https://imdb.uib.no/dilemmaz/articles/all
The question of formalisation

Currently, everyone is doing their own thing. What we need is a standard:
1. description of dilemmas by ethically relevant features
2. description of dilemmas in a knowledge representation that supports reasoning
Represent dilemmas by the prima facie duties the possible actions violate:

(f1) causes harm to owner/user,
(f2) causes harm to a person,
(f3) causes harm to an animal,
(f4) causes harm to property,
(f5) allows harm to be inflicted by external factors,
(f6) violates autonomy,
(f7) violates fidelity and truth telling, and
(f8) violates privacy.
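As a sketch of what such a feature-based description could look like: each available action is tagged with the duties it violates. The dilemma below and the fewest-violations decision rule are illustrative assumptions, not the paper's proposal:

```python
# Feature vocabulary: the eight prima facie duties from the slide.
FEATURES = {
    "f1": "causes harm to owner/user",
    "f2": "causes harm to a person",
    "f3": "causes harm to an animal",
    "f4": "causes harm to property",
    "f5": "allows harm to be inflicted by external factors",
    "f6": "violates autonomy",
    "f7": "violates fidelity and truth telling",
    "f8": "violates privacy",
}

# Invented dilemma: a household robot decides whether to report a
# user's risky behaviour. Each action maps to the duties it violates.
dilemma = {
    "report":      {"f6", "f8"},   # overrides autonomy, breaches privacy
    "stay_silent": {"f5"},         # allows external harm to occur
}

def least_violating(dilemma):
    """Naive decision rule: pick the action violating the fewest duties."""
    return min(dilemma, key=lambda action: len(dilemma[action]))

print(least_violating(dilemma))   # -> stay_silent
```

Counting violations treats all duties as equally weighty; a serious formalisation would need priorities or weights over f1-f8, which is exactly where the standardisation question bites.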
Future work

- Develop the formalisation
- Develop the dilemma representation