Developing a General, Interactive Approach to Codifying Ethical Principles

Michael Anderson
Department of Computer Science, University of Hartford, West Hartford, CT
anderson@hartford.edu

Susan Leigh Anderson
Department of Philosophy, University of Connecticut, Stamford, CT
Susan.Anderson@UConn.edu

Abstract

Building on our previous achievements in machine ethics (Anderson et al. 2006a-b, 2007, 2008), we are developing and implementing a general, interactive approach to analyzing ethical dilemmas, with the goal of codifying the ethical principles that will help resolve the ethical dilemmas that intelligent systems will encounter in their interactions with human beings. Making a minimal epistemological commitment (that there is at least one ethical duty and at least two possible actions that could be performed), the general system will: 1) incrementally construct, through an interactive exchange with experts in ethics, a representation scheme capable of handling the dilemmas with which it is presented, and 2) discover principles implicit in the judgments of these ethicists in particular cases that lead to their resolution. The system will commit only to the assumption that any ethically relevant feature of a dilemma can be represented as the degree of satisfaction or violation of one or more duties that an agent must take into account to determine which of the actions possible in that dilemma is ethically preferable.

Introduction

In broadest terms, we are trying to determine to what extent ethical decision-making is computable. There are domains where intelligent machines could play a significant role in improving the quality of life of human beings, as long as ethical concerns about their behavior can be overcome by incorporating ethical principles into them. If ethical decision-making can be computed to an acceptable degree (at least in domains where machines might impact human lives) and incorporated into machines, then society should feel comfortable in allowing the continued development of increasingly autonomous machines that interact with humans.

An important by-product of work on machine ethics is that new insights in ethical theory are likely to result. We believe that machines, through determining what consistently follows from accepting particular ethical judgments, may be able to discover new ethical principles even before ethical theorists are able to do so themselves.

We assert that ethical decision-making is, to a degree, computable. At this time there is no universally agreed upon ethical theory, yet ethicists are generally in agreement about the right course of action in many particular cases and about the ethically relevant features of those cases. We maintain that much can be learned from those features and judgments. Following W.D. Ross (1930), who argued that simple, single-principle ethical theories do not capture the complexities of ethical decision-making, we believe that acknowledging the relevant features of ethical dilemmas may lead to determining several prima facie duties the agent should attempt to follow. The major challenges in ethical theory are determining these duties and the principles needed to decide the ethically preferable action when the prima facie duties give conflicting advice, as often happens.
Computers can help us to abstract these principles from particular cases of ethical dilemmas involving multiple prima facie duties where experts in ethics have clear intuitions about the ethically relevant features of those cases and the correct course of action.

Countering those who would maintain that there are no actions that can be said to be correct because all value judgments are relative (either to societies or individuals), we maintain that there is agreement among ethicists on many issues. Just as stories of disasters often overshadow positive stories in the news, so difficult ethical issues are often the subject of discussion rather than those that have been resolved, making it seem as if there is no consensus in ethics. Fortunately, in the domains where machines are likely to interact with human beings, there is likely to be a consensus that machines should defer to the best interests of the humans affected. If this were not the case, then it would be ill-advised to create machines that would interact with humans at all.

Related Research

The ultimate goal of machine ethics, we believe, is to create a machine that itself follows an ideal ethical principle or set of principles; that is to say, it is guided by this principle or these principles in the decisions it makes about possible courses of action it could take. The general machine ethics research agenda involves testing the feasibility of a variety of approaches to capturing ethical reasoning, with differing ethical bases and implementation formalisms, and applying this reasoning in systems engaged in ethically sensitive activities. Researchers must investigate how to determine and represent ethical principles, incorporate ethical principles into a system's decision procedure, make ethical decisions with incomplete and uncertain knowledge, provide explanations for decisions made using ethical principles, and evaluate systems that act upon ethical principles.

Although many have voiced concern over the impending need for machine ethics (e.g. Waldrop 1987; Gips 1995; Khan 1995), there have been few research efforts towards accomplishing this goal. Of these, a few explore the feasibility of using a particular ethical theory as a foundation for machine ethics without actually attempting implementation: Christopher Grau (2006) considers whether the ethical theory that best lends itself to implementation in a machine, Utilitarianism, should be used as the basis of machine ethics; Tom Powers (2006) assesses the viability of using deontic and default logics to implement Kant's categorical imperative. Jim Moor (2006), on the other hand, investigates the nature and importance of machine ethics in general, making distinctions between different types of normative agents.

Efforts by others that do attempt implementation have been based, to a greater or lesser degree, upon casuistry, the branch of applied ethics that, eschewing principle-based approaches to ethics, attempts to determine correct responses to new ethical dilemmas by drawing conclusions based on parallels with previous cases in which there is agreement concerning the correct response. Rafal Rzepka and Kenji Araki (2005), at what might be considered the most extreme degree of casuistry, are exploring how statistics learned from examples of ethical intuition drawn from the full spectrum of the World Wide Web might be useful in furthering machine ethics in the domain of safety assurance for household robots. Marcello Guarini (2006), at a less extreme degree of casuistry, is investigating a neural network approach in which particular actions concerning killing and allowing to die are classified as acceptable or unacceptable depending upon different motives and consequences. Bruce McLaren (2003), in the spirit of a more pure form of casuistry, uses a case-based reasoning approach to develop a system that leverages information concerning a new ethical dilemma to predict which previously stored principles and cases are relevant to it in the domain of professional engineering ethics.

Other research of note investigates how an ethical dimension might be incorporated into the decision procedure of autonomous systems and how such systems might be evaluated.
Selmer Bringsjord, Konstantine Arkoudas, and Paul Bello (2006) are investigating how formal logics of action, obligation, and permissibility might be used to incorporate a given set of ethical principles into the decision procedure of an autonomous system, contending that such logics would allow for proofs establishing that such systems will only take permissible actions and perform all obligatory actions. Colin Allen, Gary Varner, and Jason Zinser (2000) have suggested that a moral Turing test be used to evaluate systems that incorporate an ethical dimension.

The human-centered computing research community has recently been represented in a series of AAAI workshops on the topic of the human implications of human-robot interaction (2006-7). These workshops have been concerned particularly with the effect of the presence of intelligent agents on the concepts of human identity, human consciousness, human freedom, human society, human moral status, human moral responsibility, and human uniqueness. Research presented at these workshops includes the investigation of intelligent agents as companions (Turkle 2006), anthropomorphizing intelligent agents (Boden 2006), privacy issues concerning intelligent agents (Syrdal et al. 2007), and the consequences for human beings of creating ethical intelligent agents (Anderson and Anderson 2007).

Our Previous Research

In our previous work, we developed a proof-of-concept system that discovered a novel ethical principle governing decisions in a particular type of dilemma involving three prima facie duties, and we applied this principle in making ethical decisions in two proof-of-concept systems (Anderson et al. 2004, 2005a-c, 2006a-d). The principle was discovered using machine-learning techniques to abstract relationships between the three duties from cases of the particular type of ethical dilemma where ethicists are in agreement as to the correct action.

In this work, we adopted the prima facie duty approach to ethical theory which, we believe, better reveals the complexity of ethical decision-making than single, absolute duty theories like Hedonistic Act Utilitarianism. It incorporates the good aspects of the rival teleological and deontological approaches to ethics (emphasizing consequences vs. principles), while allowing for needed exceptions to adopting one or the other approach exclusively. It also has the advantage of being better able to adapt to the specific concerns of ethical dilemmas in different domains. There may be slightly different sets of prima facie duties for biomedical ethics, legal ethics, business ethics, and journalistic ethics, for example.

There are two well-known prima facie duty theories: Ross' theory (1930), dealing with general ethical dilemmas, which has seven duties; and Beauchamp's and Childress' four Principles of Biomedical Ethics (1979) (three of which are derived from Ross' theory), which are intended to cover ethical dilemmas specific to the field of biomedicine. Because there is more agreement among ethicists working on biomedical ethics than in other areas, and because there are fewer duties, we began development of our prima facie duty approach to computing ethics using Beauchamp's and Childress' Principles of Biomedical Ethics.

Beauchamp's and Childress' Principles of Biomedical Ethics include the Principle of Respect for Autonomy, which states that the health care professional should not interfere with the effective exercise of patient autonomy. For a decision by a patient concerning his/her care to be considered fully autonomous, it must be intentional, based on sufficient understanding of his/her medical situation and the likely consequences of foregoing treatment, sufficiently free of external constraints (e.g. pressure by others or external circumstances, such as a lack of funds), and sufficiently free of internal constraints (e.g. pain/discomfort, the effects of medication, irrational fears, or values that are likely to change over time). The Principle of Nonmaleficence requires that the health care professional not harm the patient, while the Principle of Beneficence states that the health care professional should promote patient welfare. Finally, the Principle of Justice states that health care services and burdens should be distributed in a just fashion.

The domain of our previous work was medical ethics, consistent with our choice of prima facie duties, and, in particular, a representative type of ethical dilemma that involves three of the four Principles of Biomedical Ethics: Respect for Autonomy, Nonmaleficence, and Beneficence. The type of dilemma we considered is one that health care workers often face: A health care worker has recommended a particular treatment for her competent adult patient and the patient has rejected that treatment option. Should the health care worker try again to change the patient's mind or accept the patient's decision as final? The dilemma arises because, on the one hand, the health care professional should not challenge the patient's autonomy unnecessarily; on the other hand, the health care worker may have concerns about why the patient is refusing the treatment, i.e. whether it is a fully autonomous decision. In this type of dilemma, the options for the health care worker are just two: either to accept the patient's decision or not, by trying again to change the patient's mind.

In this attempt to make a prima facie duty ethical theory computable, our single type of dilemma encompassed a finite number of specific cases, just three duties, and only two possible actions in each case. We abstracted, from a discussion of similar types of cases given by Buchanan and Brock (1989), the correct answers to the specific cases of the type of dilemma we considered. We believe that there is a consensus among bioethicists that these are the correct answers.
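To make this concrete before describing the learning, here is a minimal sketch, assuming a simple Python encoding, of how a single case of this dilemma type might be stored. The duty names come from the principles above, but the signed integer scale and the field names are illustrative assumptions, not the exact encoding our systems use.

```python
# Hypothetical sketch: one way a case of the dilemma type might be encoded.
# The -2..2 satisfaction/violation scale and the field names are assumptions.

DUTIES = ("nonmaleficence", "beneficence", "autonomy")

# Each action maps every duty to a signed degree: positive values indicate
# satisfaction of the duty, negative values indicate violation.
try_again = {"nonmaleficence": 0, "beneficence": 2, "autonomy": -1}
accept = {"nonmaleficence": -1, "beneficence": -2, "autonomy": 1}

# A training case pairs the two possible actions with the consensus judgment
# of ethicists about which action is ethically preferable.
case = {
    "actions": {"try_again": try_again, "accept": accept},
    "preferred": "try_again",
}
```

Encoding each action as a mapping from duty to a signed degree keeps the later comparison of the two actions a simple element-wise operation.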
The major philosophical problem with the prima facie duty approach to ethical decision-making is the lack of a decision procedure when the duties give conflicting advice. Implementing the theory required that we first discover a principle for determining the correct action when the duties give conflicting advice and then fashion a decision procedure using this principle. John Rawls' (1951) reflective equilibrium approach to creating and refining ethical principles inspired our solution to this problem. This approach involves generalizing from intuitions about particular cases, testing those generalizations on further cases, and then repeating this process toward the end of developing a principle that agrees with intuition. We abstracted a principle from representations of specific cases of ethical dilemmas where experts in ethics have clear intuitions about the correct action. Our representation of the ethical dilemmas consisted of an ordered set of values for each of the possible actions that could be performed, where these values reflected whether the particular prima facie duties are satisfied or violated and, if so, to what degree.

We developed a system (Anderson et al. 2006b) that uses machine-learning techniques to abstract relationships between the prima facie duties from particular ethical dilemmas where there is an agreed upon correct action. Our chosen type of dilemma has only 18 possible cases where, given the two possible actions, the first action supersedes the second (i.e. is ethically preferable). Four of these were provided to the system as examples of when the target predicate (supersedes) is true. Four examples of when the target predicate is false (obtained by inverting the order of the actions where the target predicate was true) were also provided. The system discovered a rule that provided the correct answer for the remaining 14 positive cases, as verified by the consensus of ethicists. The complete and consistent decision principle that the system discovered can be stated as follows: a health care worker should challenge a patient's decision if it is not fully autonomous and there is either any violation of the duty of nonmaleficence or a severe violation of the duty of beneficence.

Although, clearly, this rule is implicit in the judgments of the consensus of ethicists, we believe that this principle has never before been stated explicitly. This philosophically interesting result lends credence to Rawls' reflective equilibrium approach: the system has, through abstracting a principle from intuitions about particular cases, discovered a plausible principle that tells us which action is correct when specific duties pull in different directions in a particular type of ethical dilemma. Furthermore, the principle so abstracted supports an insight of Ross: that violations of the duty of nonmaleficence should carry more weight than violations of the duty of beneficence. We offer this principle as evidence that making ethics more precise will permit machine-learning techniques to discover philosophically novel and interesting principles in ethics. It should also be noted that the learning system that discovered this principle is an instantiation of a general architecture. With appropriate content, it can be used to discover relationships between any set of prima facie duties where there is a consensus among ethicists as to the correct answer in particular cases.

Once the decision principle was discovered, the needed decision procedure could be fashioned. Given two actions, each represented by the satisfaction/violation levels of the duties involved, the values of corresponding duties are subtracted (those of the second action from those of the first). The principle is then consulted to see whether the resulting differentials satisfy any of its clauses. If so, the first action is considered to be ethically preferable to the second.

We next explored two applications of the discovered principle that governs three of the duties of Beauchamp's and Childress' Principles of Biomedical Ethics. In both applications, we developed a program in which a machine could use the principle to determine the correct answer in specific ethical dilemmas. The first, MedEthEx (Anderson et al. 2006a), is a proof-of-concept medical ethics advisor system; the second, EthEl, is a proof-of-concept system in the domain of eldercare that determines when a patient should be reminded to take medication and when a refusal to do so is serious enough to contact an overseer. EthEl exhibits more autonomy than MedEthEx in that, whereas MedEthEx gives the ethically correct answer to a human user who will act on it or not, EthEl herself acts on what she determines to be the ethically correct action.

MedEthEx is an expert system that uses the discovered principle to give advice to a user faced with a case of the dilemma type previously described. In order to permit a user unfamiliar with the representation details required by the decision procedure to use the principle, a user interface was developed that: 1) asks ethically relevant questions of the user regarding the particular case at hand, 2) transforms the answers to these questions into the appropriate representations, 3) sends these representations to the decision procedure, and 4) presents the answer provided by the decision procedure, i.e. the action that is considered to be correct (consistent with the system's training), to the user. As with the learning system, the expert system is an instantiation of a general architecture. With appropriate questions, it can be used to give a user access to any decision procedure using any discovered principle. Discovered principles can be used by other systems, as well, to provide ethical guidance for their actions.
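The decision procedure just described can be sketched in a few lines. This is a hypothetical rendering: the clause thresholds below merely paraphrase the stated principle and are not the values the system actually learned.

```python
# Hypothetical sketch of the differential-based decision procedure described
# above. Duty names follow the text; value ranges and clause thresholds are
# illustrative placeholders.

def supersedes(a1, a2, clauses):
    """Return True if action a1 is ethically preferable to action a2: subtract
    a2's duty values from a1's and test whether the resulting differentials
    satisfy at least one clause, where a clause maps each duty it mentions to
    a minimum required differential."""
    diff = {duty: a1[duty] - a2[duty] for duty in a1}
    return any(all(diff[duty] >= minimum for duty, minimum in clause.items())
               for clause in clauses)

# Placeholder clauses paraphrasing the discovered principle: challenge a
# decision that is not fully autonomous when nonmaleficence is violated at
# all, or when beneficence is violated severely.
EXAMPLE_PRINCIPLE = [
    {"nonmaleficence": 1, "autonomy": -1},
    {"beneficence": 3, "autonomy": -1},
]

try_again = {"nonmaleficence": 1, "beneficence": 2, "autonomy": -1}
accept = {"nonmaleficence": 0, "beneficence": 0, "autonomy": 0}
print(supersedes(try_again, accept, EXAMPLE_PRINCIPLE))  # True for these values
```

Swapping the two arguments yields the representation of a negative instance of the supersedes relation, which is how the false training examples described above were obtained.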
In our second application, EthEl (Anderson & Anderson in submission) makes decisions about when and how often to remind a competent patient to take medications and when to notify an overseer if the patient refuses to do so. The previously discovered ethical principle is pertinent to this dilemma, as it is analogous to the original one: the same duties are involved (nonmaleficence, beneficence, and respect for autonomy), and notifying the overseer in the new dilemma corresponds to trying again in the original. In this case, the principle balances the importance of not allowing the patient to be harmed or lose substantial benefits from not taking the medication against the duty to respect the autonomy of the patient.

Our Current Research

We are currently developing and implementing a general interactive approach to analyzing ethical dilemmas and applying it toward the resolution of ethical dilemmas that will be faced by intelligent systems in their interactions with human beings. Our approach leverages our previous proof-of-concept system that discovered the ethical principle used by MedEthEx and EthEl, expanding it to include discovering ethical features and duties and generalizing it so that it can be applied to other domains. As this system will work in conjunction with an ethicist, the tasks required to fulfill the goal must be partitioned into those best handled by the ethicist and those best handled by the system. These tasks include:

1. providing examples of ethical dilemmas
2. establishing ethically relevant features in a dilemma
3. identifying duties that correspond to features in a dilemma
4. determining the satisfaction/violation levels of duties pertinent to a dilemma
5. specifying actions that are possible in a dilemma
6. indicating the ethically preferable action in a dilemma, if possible
7. constructing a succinct, expressive representation of a dilemma
8. discovering and resolving contradictory example dilemmas
9. abstracting general principles from example dilemmas
10. discerning example dilemmas that are needed to better delineate abstracted principles
11. applying general principles to new example dilemmas
12. explaining the rationale of the general principle as applied to dilemmas

Although it may be the case that these tasks need further decomposition, roughly the first half of the above tasks will be the primary responsibility of the ethicist who will train the system, and the second half will be assigned to the system itself. The ethicist will need to give examples of ethical dilemmas, indicate which actions are possible in these dilemmas, and give the generally agreed upon solutions to these dilemmas. Further, the ethicist must establish the ethically relevant features of the dilemmas as well as their corresponding duty satisfaction/violation levels. The system will use this information to incrementally construct an increasingly expressive representation for ethical dilemmas and discover ethical principles that are consistent with the correct actions in the examples with which it is presented. It will prompt the ethicist for further information leading to the resolution of contradictions, as well as for example dilemmas that are needed to better delineate the abstracted principles. Further, it will use the abstracted principles to provide solutions to examples that are not part of its training and, along with stored examples, to provide explanations for the decisions it makes.

In our previous research, we committed to a specific number of particular prima facie duties, a particular range of duty satisfaction/violation values, and a particular analysis of corresponding duty relations into differentials. To minimize bias in the constructed representation scheme, we propose to lift these assumptions and make a minimum epistemological commitment: ethically relevant features of dilemmas will initially be represented as the degree of satisfaction or violation of at least one duty that the agent must take into account in determining the ethical status of the actions that are possible in that dilemma. A commitment to at least one duty can be viewed as simply a commitment to ethics, namely that there is at least one obligation incumbent upon the agent in dilemmas that are classified as ethical. If it turns out that there is only one duty, then there is a single, absolute ethical duty that the agent ought to follow. If it turns out that there are two or more, potentially competing, duties (as we suspect and have assumed heretofore), then it will have been established that there are a number of prima facie duties that must be weighed in ethical dilemmas, giving rise to the need for an ethical principle to resolve the conflict.

[Figures 1a-d. Generation of a new feature. (a) Benefit only: A = Yes, B = No; A is preferable to B. (b) Benefit only: A = Yes, B = No; B is preferable to A. (c) Benefit/Harm: A = Yes/Yes, B = No/No; B is preferable to A. (d) Benefit/Harm: A = Yes/No, B = No/No; A is preferable to B.]

The system requires a dynamic representation scheme, i.e. one that can change over time. Having a dynamic representation scheme is particularly suited to the domain of ethical decision-making, where there has been little codification of the details of dilemma and principle representation.
It allows for changes in duties and the range of their satisfaction/violation values over time, as ethicists become clearer about ethical obligations and discover that in different domains there may be different duties and possible satisfaction/violation values. Most importantly, it accommodates the reality that completeness in an ethical theory, and its representation, is a goal for which to strive. The understanding of ethical duties, and their relationships, evolves over time. We start with a representation in which there is one, unnamed ethical duty and a principle that states that an action that satisfies this duty is ethically preferable to an action that does not. As examples of particular dilemmas are presented to the system with an indication of the ethically preferable action, the number of duties and the range of their satisfaction/violation levels are likely to increase, and, further, the principle will evolve to incorporate them.

Alan Bundy and Fiona McNeill's work on dynamic representations (2006) motivates a number of dimensions along which the representation scheme for ethical dilemmas that evolves in our system might change, including the addition of new features, deletion of superfluous features, merging of multiple features into one, splitting of a single feature into many, and repair of existing data.

Addition of new features

To maintain a consistent principle, contradictions are not permitted. If the system is given two training cases that have the same representations for the possible actions in the dilemmas, but different actions are said to be the correct one, then there must be ethically distinguishing features of the cases that are not expressible in the existing representation scheme. This would demonstrate the need for an additional duty or a finer partitioning of the range of satisfaction/violation values of one or more existing duties.
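The contradiction check that signals the need for a new feature can be sketched as follows; the case structure mirrors the earlier sketch, and the function name is hypothetical.

```python
# Hypothetical sketch of the contradiction check described above: two stored
# cases whose actions have identical representations but whose judged-correct
# actions differ show that the current feature set cannot express an
# ethically relevant difference between them.

def find_contradictions(cases):
    """Return pairs of stored cases that are identical under the current
    representation scheme yet carry conflicting judgments."""
    conflicts = []
    for i, c1 in enumerate(cases):
        for c2 in cases[i + 1:]:
            if c1["actions"] == c2["actions"] and c1["preferred"] != c2["preferred"]:
                conflicts.append((c1, c2))
    return conflicts

# On a conflict, the system would prompt the ethicist either for a
# distinguishing feature (a new duty) or for a finer partition of an existing
# duty's satisfaction/violation range, and then repair the stored cases.
```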

For example, suppose the following dilemma, and solution, had been given by the ethicist at the beginning of the training process: Dr. X is faced with the choice of advising patient Y to take medication Z (option A) or not (option B). If the patient takes Z it will benefit the patient, a benefit that will be lost if the patient does not take Z. Should X advise the patient to take Z (option A) or not (option B)? The correct answer is that option A is preferable to option B. At this point the only ethical feature needed to represent the dilemma is a prima facie duty to benefit someone, if at all possible. Thus, the dilemma could be represented as in Fig. 1a.

Next, suppose the following dilemma and solution were provided to the system by the ethicist: Dr. A is faced with the choice of advising patient Y to take medication Z (option A) or not (option B). Taking Z would provide some benefit, but would also cause serious side effects for Y. Y would lose the benefit, but also avoid the serious side effects, if Z is not taken. In this case B is preferable to A. If the second dilemma is summarized using the simple representation scheme developed to this point, it will be represented as in Fig. 1b. Since this contradicts the earlier training example, the representation scheme must be changed: ethically distinct dilemmas clearly cannot be correctly captured by this simplistic scheme. Questioning the ethicist to learn what is different about the two dilemmas (the second one involves the possibility of the patient being harmed whereas the first one does not) could elicit the need for a second duty: a prima facie duty to cause the least harm. If the new dilemma is represented as in Fig. 1c and the earlier one as in Fig. 1d, there is no longer a contradiction. (A further training example involving only these two duties, but creating a contradiction with the latest example, would generate a need for distinguishing between different levels of harm and benefit.)

Deletion of superfluous features

That a prima facie duty can be removed from the representation scheme can be deduced from the fact that the duty is found to be superfluous in determining the decision principle, e.g. what was once thought to be a duty is no longer recognized as being one, or a new duty is found always to have the same value, across actions, as one that is already established. It could also be determined that a wider range of duty satisfaction/violation values was postulated than is necessary, by establishing that two different duty satisfaction/violation values always generate the same result in terms of the ethical principle.

For example, suppose that society evolves from having individual ownership of material objects to the point where everything is communally owned. (Even personal hygiene objects like toothbrushes are communally owned, becoming single-use goods that the community supplies to anyone needing them.) Before the full transition to the communal society, it might have been thought important to recognize individual ownership of goods in settling ethical disputes, whereas that feature later drops out, becoming irrelevant. This sort of transition is in fact happening in communities designed for retirees who are looking for support in their declining years.

Merging features

As an example of merging features, consider a particular ethical dilemma that might have suggested a prima facie duty to provide benefit to someone, if at all possible.
Later, another ethical dilemma might have suggested a prima facie duty to help someone, if at all possible. Such a duplication might arise because the machine adopts the language the ethicist uses in each case, not recognizing that benefiting and helping are synonyms. Once this is recognized, the two duties can be merged into a single duty. As another example, in a dialogue between the system and the ethicist concerning particular ethical dilemmas, it might have been thought important to distinguish between three levels of harm: minor, moderate, and maximal. Yet it could turn out that the last two categories can be collapsed into one as far as determining an ethical principle is concerned. Perhaps it is sufficient that an action is likely to lead to at least moderate harm for the action to be considered ethically unacceptable.

Splitting features

Consider a variation on the previous example, in a dialogue between the system and the ethicist concerning particular ethical dilemmas, where it might have been thought that there was a need to distinguish between only two levels of harm: minor and maximal. Yet it could turn out that a level in between the two is needed for determining an ethical principle. Such an addition might be considered as splitting one of the categories into two separate categories.

Repair of the representation of existing data

The evolution of a representation scheme over time also entails the repair of data expressed in an earlier representation scheme. This will require, in the cases of adding, merging, and splitting duties, the addition of new duties and, in the cases of deleting, merging, and splitting, the deletion of old duties. New duties can be added to existing representations with a zero degree of satisfaction/violation (as in Fig. 1d), with the realization that future inconsistency may signal the need for updating this value. Duties that are no longer needed can simply be deleted.

An illustration

To illustrate this general approach to discovering the features of ethical dilemmas and their corresponding duty satisfaction/violation levels, as well as the principles needed to resolve the dilemmas, in a more formal way, consider the hypothetical dialogue between a system such as we envision and an ethicist in Table 1. We have begun with the same type of dilemma that we considered in our previous work concerning MedEthEx, attempting to see how one might generate the features of specific ethical dilemmas, corresponding duties, and principles that resolve these dilemmas.

References

Allen, C., Varner, G., and Zinser, J. 2000. Prolegomena to Any Future Artificial Moral Agent. Journal of Experimental and Theoretical Artificial Intelligence 12.

Anderson, M., Anderson, S., and Armen, C. 2006a. MedEthEx: A Prototype Medical Ethics Advisor. Proceedings of the Eighteenth Conference on Innovative Applications of Artificial Intelligence, Boston, Massachusetts, August.

Anderson, M., Anderson, S., and Armen, C. 2006b. An Approach to Computing Ethics. IEEE Intelligent Systems, Special Issue on Machine Ethics, August.

Anderson, M. and Anderson, S. 2007. Machine Ethics: Creating an Ethical Intelligent Agent. AI Magazine, vol. 28, no. 4, Winter.

Anderson, M. and Anderson, S. EthEl: Toward a Principled Ethical Eldercare Robot. Workshop on Assistive Robots at the Conference on Human-Robot Interaction, March.

Beauchamp, T.L. and Childress, J.F. 1979. Principles of Biomedical Ethics. Oxford University Press.

Boden, M. 2006. Robots and Anthropomorphism. In (Metzler 2006).

Bringsjord, S., Arkoudas, K., and Bello, P. 2006. Toward a General Logicist Methodology for Engineering Ethically Correct Robots. IEEE Intelligent Systems, vol. 21, no. 4, July/August.

Buchanan, A.E. and Brock, D.W. 1989. Deciding for Others: The Ethics of Surrogate Decision Making, pp. 48-57. Cambridge University Press.

Bundy, A. and McNeill, F. 2006. Representation as a Fluent: An AI Challenge for the Next Half Century. IEEE Intelligent Systems, vol. 21, no. 3, May/June.

Gips, J. 1995. Towards the Ethical Robot. In Android Epistemology. Cambridge, MA: MIT Press.

Grau, C. 2006. There Is No "I" in "Robot": Robots and Utilitarianism. IEEE Intelligent Systems, vol. 21, no. 4, July/August.

Guarini, M. 2006. Particularism and the Classification and Reclassification of Moral Cases. IEEE Intelligent Systems, vol. 21, no. 4, July/August.

Khan, A.F.U. 1995. The Ethics of Autonomous Learning Systems. In Android Epistemology. Cambridge, MA: MIT Press.

Mappes, T.A. and DeGrazia, D. Biomedical Ethics, 5th Edition. McGraw-Hill, New York.

McLaren, B.M. 2003. Extensionally Defining Principles and Cases in Ethics: An AI Model. Artificial Intelligence Journal, vol. 150, November.

Metzler, T. 2006. Human Implications of Human-Robot Interaction. AAAI Technical Report WS-06-09, AAAI Press.

Metzler, T. 2007. Human Implications of Human-Robot Interaction. AAAI Technical Report WS-07-07, AAAI Press.

Moor, J.H. 2006. The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems, vol. 21, no. 4, July/August.

Powers, T.M. 2006. Prospects for a Kantian Machine. IEEE Intelligent Systems, vol. 21, no. 4, July/August.

Rawls, J. 1951. Outline of a Decision Procedure for Ethics. Philosophical Review, 60.

Ross, W.D. 1930. The Right and the Good. Clarendon Press, Oxford.

Rzepka, R. and Araki, K. 2005. What Could Statistics Do for Ethics? The Idea of Common Sense Processing Based Safety Valve. Proceedings of the AAAI Fall Symposium on Machine Ethics, AAAI Press.
Syrdal, D.S., Walters, M.L., Otero, N., Koay, K.L., and Dautenhahn, K. 2007. "He knows when you are sleeping": Privacy and the Personal Robot Companion. In (Metzler 2007).

Turkle, S. 2006. Robot as Rorschach: New Complicities for Companionship. In (Metzler 2006).

Waldrop, M.M. 1987. A Question of Responsibility. Chap. 11 in Man Made Minds: The Promise of Artificial Intelligence. New York: Walker and Company. (Reprinted in R. Dejoie et al., eds., Ethical Issues in Information Systems. Boston, MA: Boyd and Fraser, 1991.)

Table 1: Generating duties, satisfaction levels, and principles. The dialogue is between the system (S) and the ethicist (E); bracketed lines show the system's evolving formal representation and its internal commentary.

S: Give me a case of the type of dilemma we are considering.
[ (0,1) : a single, unnamed feature with two values]
E: Case 1: A patient refuses to have simple and safe cosmetic surgery to remove a very large non-cancerous growth on his face that has led to repeated rejection from others. His depression from his condition has led to his being resigned to being an outcast. Should the doctor try again (T) to convince the patient to have the surgery or accept (A) his decision as final?
[A1 = T and A2 = A, or A1 = A and A2 = T : there are two possible actions in the given dilemma, T and A]
S: What ethically relevant feature is present in one of the possible actions and not in the other?
E: T could benefit the patient, while the patient would lose this benefit with A.
[Ethically relevant feature is benefit]
S: Which action is ethically preferable in this case?
E: T is ethically preferable to A.
[A1 = T and A2 = A; A1 cause_benefit = 1, A2 cause_benefit = 0 : ethical duty is cause_benefit; T satisfies the duty, A does not]
S: So, all other things being equal, one has a duty to benefit the patient rather than not?
E: Yes.
[cause_benefit(0,1); prefer A1 over A2 if Δcause_benefit ≥ 1 : instantiation of benefit; store positive Case1(+, T=1, A=0)]
S: Can you think of a case of this type of dilemma that involves a possible benefit for the patient, which would be lost if the patient's decision is accepted, in which one shouldn't try again to change the patient's mind?
[Is there a counter-example to cause_benefit?]
E: Yes. Case 2: A patient refuses to take a medication that could only help to alleviate some symptoms of a virus that can't be cured by the medication but must run its course. One needn't try again to change the patient's mind in this case.
[Store negative Case2(-, T=1, A=0): A is preferred, so A1 = T and A2 = A with A1 cause_benefit = 1, A2 cause_benefit = 0, yet Δcause_benefit ≥ 1 is false; a contradiction has been established that needs to be resolved]
S: What is a difference between Case 1 and Case 2?
E: There is greater benefit for the patient at stake in trying again in Case 1 than in Case 2.
[cause_benefit of T in Case 1 > cause_benefit of T in Case 2 : the value of cause_benefit for T is not 1 in both cases]
S: So we need to distinguish between levels of benefit?
E: Yes.
[cause_benefit(0,1,2); prefer A1 over A2 if Δcause_benefit ≥ 2 : increase the range of cause_benefit and the minimum satisfactory differential; repair Case1(+, T=2, A=0)]

S: Can you think of a case where there was the greater benefit at stake for the patient, where it would be wrong to try to change the patient's mind about accepting treatment that would benefit him in this way?
E: Yes. Case 3: A patient refuses to have a lifesaving blood transfusion because of long-standing religious beliefs.
[Store negative Case3(-, T=2, A=0): A is preferred, so A1 = T and A2 = A with A1 cause_benefit = 2, A2 cause_benefit = 0, yet Δcause_benefit ≥ 2 is false; a contradiction has been established that needs to be resolved]
S: What is different between Case 1 and Case 3?
E: In Case 3 the patient's decision to forego treatment was a fully autonomous one, whereas in Case 1 that was not true.
[Ethically relevant feature is autonomy]
S: So there is a duty to respect a fully autonomous decision of a patient that is distinct from a duty to benefit the patient?
E: Yes.
[A1 = T and A2 = A; A1 cause_autonomy = 0, A2 cause_autonomy = 1 : ethical duty is cause_autonomy; A satisfies the duty, T does not]
S: And the duty to respect a fully autonomous decision can outweigh the duty to provide considerable benefit?
E: Yes.
[cause_autonomy(0,1), cause_benefit(0,1,2); prefer A1 over A2 if (Δcause_benefit ≥ 2 and not Δcause_autonomy ≤ -1) : add the new feature and principle clause; repair the stored cases as Case1(+, T: cause_benefit=2, cause_autonomy=0; A: cause_benefit=0, cause_autonomy=0) and Case3(-, T: cause_benefit=2, cause_autonomy=0; A: cause_benefit=0, cause_autonomy=1)]
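As a closing sketch, the state reached at the end of the Table 1 dialogue, the repaired case base together with the evolved principle, can be re-checked mechanically. The encoding below is an illustrative assumption consistent with the dialogue, not the system's actual internal form.

```python
# Hypothetical sketch of the end state of the Table 1 dialogue: the repaired
# case base and the evolved two-duty principle, re-checked against every
# stored judgment. Names and the clause encoding are illustrative assumptions.

def prefer(a1, a2):
    """Evolved principle: prefer a1 over a2 if the benefit differential is at
    least 2 and a2 does not better respect a fully autonomous decision."""
    d_benefit = a1["cause_benefit"] - a2["cause_benefit"]
    d_autonomy = a1["cause_autonomy"] - a2["cause_autonomy"]
    return d_benefit >= 2 and not d_autonomy <= -1

# Repaired case base: (T, A, whether T should be preferred to A).
cases = [
    ({"cause_benefit": 2, "cause_autonomy": 0},
     {"cause_benefit": 0, "cause_autonomy": 0}, True),    # Case 1
    ({"cause_benefit": 1, "cause_autonomy": 0},
     {"cause_benefit": 0, "cause_autonomy": 0}, False),   # Case 2
    ({"cause_benefit": 2, "cause_autonomy": 0},
     {"cause_benefit": 0, "cause_autonomy": 1}, False),   # Case 3
]

# The evolved principle agrees with the ethicist's judgments so far.
assert all(prefer(t, a) == judged for t, a, judged in cases)
```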


Course Description-Medical Ethics MEDICAL ETHICS IN PHYSICAL THERAPY INAPTA Course Description-Medical Ethics Welcome to the ethics component of our Medical Ethics & Indiana Jurisprudence course. Our goal is to introduce you to biomedical

More information

Assessing the Foundations of Conscious Computing: A Bayesian Exercise

Assessing the Foundations of Conscious Computing: A Bayesian Exercise Assessing the Foundations of Conscious Computing: A Bayesian Exercise Eric Horvitz June 2001 Questions have long been posed with about whether we might one day be able to create systems that experience

More information

Thinking Like a Researcher

Thinking Like a Researcher 3-1 Thinking Like a Researcher 3-3 Learning Objectives Understand... The terminology used by professional researchers employing scientific thinking. What you need to formulate a solid research hypothesis.

More information

Chapter 2: Business Ethics and Social Responsibility

Chapter 2: Business Ethics and Social Responsibility Chapter 2: Business Ethics and Social Responsibility 1. CHAPTER OVERVIEW Chapter 2 explains the issues of right and wrong in business conduct. This explanation begins with the fundamentals of business

More information

Conflict It s What You Do With It!

Conflict It s What You Do With It! Conflict It s What You Do With It! Luc Bégin, Ombudsman Department of Canadian Heritage Presented to: Financial Management Institute of Canada November 27 th, 2013 True or False Sometimes the best way

More information

Why do Psychologists Perform Research?

Why do Psychologists Perform Research? PSY 102 1 PSY 102 Understanding and Thinking Critically About Psychological Research Thinking critically about research means knowing the right questions to ask to assess the validity or accuracy of a

More information

Integrative Thinking Rubric

Integrative Thinking Rubric Critical and Integrative Thinking Rubric https://my.wsu.edu/portal/page?_pageid=177,276578&_dad=portal&... Contact Us Help Desk Search Home About Us What's New? Calendar Critical Thinking eportfolios Outcomes

More information

A conversation with Professor David Chalmers, May 20, 2016 Participants

A conversation with Professor David Chalmers, May 20, 2016 Participants A conversation with Professor David Chalmers, May 20, 2016 Participants Professor David Chalmers Professor of Philosophy, New York University (NYU) Luke Muehlhauser Research Analyst, Open Philanthropy

More information

Kantor Behavioral Profiles

Kantor Behavioral Profiles Kantor Behavioral Profiles baseline name: date: Kantor Behavioral Profiles baseline INTRODUCTION Individual Behavioral Profile In our earliest social system the family individuals explore a range of behavioral

More information

RESEARCH PROTOCOL. December 23 nd, 2005 ITHA

RESEARCH PROTOCOL. December 23 nd, 2005 ITHA RESEARCH PROTOCOL December 23 nd, 2005 ITHA RESEARCH PROTOCOL 1. Introduction Inter Tribal Health Authority is committed to the generation and dissemination of knowledge. Research is defined as the development

More information

Computational Models of Ethical Reasoning Challenges, Initial Steps, and Future Directions

Computational Models of Ethical Reasoning Challenges, Initial Steps, and Future Directions 1 7 Computational Models of Ethical Reasoning Challenges, Initial Steps, and Future Directions Bruce M. Mclaren Introduction H OW CAN MACHINES SUPPORT, OR EVEN MORE SIGNIFICANTLY REPLACE, humans in performing

More information

Silberman School of Social Work. Practice Lab Feb. 7, 2013 C. Gelman, N. Giunta, S.J. Dodd

Silberman School of Social Work. Practice Lab Feb. 7, 2013 C. Gelman, N. Giunta, S.J. Dodd Silberman School of Social Work Practice Lab Feb. 7, 2013 C. Gelman, N. Giunta, S.J. Dodd What would you do? and WHY? Types of Ethical Theories Obligation or Rule-based (Immanuel Kant, 1724-1804): There

More information

Welfare and ethics part two: values, beliefs, communication

Welfare and ethics part two: values, beliefs, communication Vet Times The website for the veterinary profession https://www.vettimes.co.uk Welfare and ethics part two: values, beliefs, communication Author : Jill Macdonald Categories : RVNs Date : August 1, 2013

More information

What happens if I cannot make decisions about my care and treatment?

What happens if I cannot make decisions about my care and treatment? Information Line: 0800 999 2434 Website: compassionindying.org.uk What happens if I cannot make decisions about my care and treatment? This factsheet explains how decisions are made about your care or

More information

decisions based on ethics. In the case provided, the recreational therapist is faced with a

decisions based on ethics. In the case provided, the recreational therapist is faced with a Brackett 1 Kassie Brackett The Ethical Problem Professionals are faced with situations all the time that force them to make decisions based on ethics. In the case provided, the recreational therapist is

More information

Research Ethics: a Philosophical Perspective. Dr. Alexandra Couto Lecturer in Philosophy University of Kent

Research Ethics: a Philosophical Perspective. Dr. Alexandra Couto Lecturer in Philosophy University of Kent + Research Ethics: a Philosophical Perspective Dr. Alexandra Couto Lecturer in Philosophy University of Kent + Research Ethics: a Philosophical Perspective What are the ethical principles that ought to

More information

Running Head: NARRATIVE COHERENCE AND FIDELITY 1

Running Head: NARRATIVE COHERENCE AND FIDELITY 1 Running Head: NARRATIVE COHERENCE AND FIDELITY 1 Coherence and Fidelity in Fisher s Narrative Paradigm Amy Kuhlman Wheaton College 13 November 2013 NARRATIVE COHERENCE AND FIDELITY 2 Abstract This paper

More information

APPENDICES. Appendix I Business Ethical Framework

APPENDICES. Appendix I Business Ethical Framework Appendix I Business Ethical Framework Circle of Care s commitment to ethical practice is reflected in the Agency s Mission, Values, and Vision. These statements form the basis of the agency s ethical framework

More information

Artificial Intelligence: Its Scope and Limits, by James Fetzer, Kluver Academic Publishers, Dordrecht, Boston, London. Artificial Intelligence (AI)

Artificial Intelligence: Its Scope and Limits, by James Fetzer, Kluver Academic Publishers, Dordrecht, Boston, London. Artificial Intelligence (AI) Artificial Intelligence: Its Scope and Limits, by James Fetzer, Kluver Academic Publishers, Dordrecht, Boston, London. Artificial Intelligence (AI) is the study of how to make machines behave intelligently,

More information

Professional Coach Training Session Evaluation #1

Professional Coach Training Session Evaluation #1 During class we've been expanding our knowledge of the Core Competencies. In this integration session we're going to expand our use of them, and our ability to observe them in a real, live coaching environment.

More information

Preparing for an Oral Hearing: Taxi, Limousine or other PDV Applications

Preparing for an Oral Hearing: Taxi, Limousine or other PDV Applications Reference Sheet 12 Preparing for an Oral Hearing: Taxi, Limousine or other PDV Applications This Reference Sheet will help you prepare for an oral hearing before the Passenger Transportation Board. You

More information

Character Education Framework

Character Education Framework Character Education Framework March, 2018 Character Education: Building Positive Ethical Strength Character education is the direct attempt to foster character virtues the principles that inform decisionmaking

More information

The Limits of Groups, An Author Responds. Raimo Tuomela, University of Helsinki

The Limits of Groups, An Author Responds. Raimo Tuomela, University of Helsinki http://social-epistemology.com ISSN: 2471-9560 The Limits of Groups, An Author Responds Raimo Tuomela, University of Helsinki Tuomela, Raimo. The Limits of Groups. Social Epistemology Review and Reply

More information

School of Nursing, University of British Columbia Vancouver, British Columbia, Canada

School of Nursing, University of British Columbia Vancouver, British Columbia, Canada Data analysis in qualitative research School of Nursing, University of British Columbia Vancouver, British Columbia, Canada Unquestionably, data analysis is the most complex and mysterious of all of the

More information

A Method for Ethical Problem Solving Brian H. Childs, Ph.D.

A Method for Ethical Problem Solving Brian H. Childs, Ph.D. A Method for Ethical Problem Solving Brian H. Childs, Ph.D. Professor of Bioethics and Professionalism Adapted in part from Robert M. Veatch, Amy Haddad and Dan English Case Studies in Biomedical Ethics

More information

6. A theory that has been substantially verified is sometimes called a a. law. b. model.

6. A theory that has been substantially verified is sometimes called a a. law. b. model. Chapter 2 Multiple Choice Questions 1. A theory is a(n) a. a plausible or scientifically acceptable, well-substantiated explanation of some aspect of the natural world. b. a well-substantiated explanation

More information

Major Ethical Theories

Major Ethical Theories Ethical theories provide frameworks and specific values for assessing situations and guiding decision-making processes, so the relative differences between right and wrong are understood, and appropriate

More information

A Helping Model of Problem Solving

A Helping Model of Problem Solving A Helping Model of Problem Solving Prepared By Jim Messina, Ph.D., CCMHC, NCC, DCMHS Assistant Professor, Troy University Tampa Bay Site This topic available on www.coping.us Steps to helping a helpee

More information

Designing and Implementing a Model for Ethical Decision Making

Designing and Implementing a Model for Ethical Decision Making Designing and Implementing a Model for Ethical Decision Making 1 Introductory Remarks One of us [GR] has been involved in developing a tool for computing the "relative evil" of pairs of military actions

More information

This intermediate to advanced course is designed for social workers and related professionals required to complete ethics continuing education.

This intermediate to advanced course is designed for social workers and related professionals required to complete ethics continuing education. This intermediate to advanced course is designed for social workers and related professionals required to complete ethics continuing education. Much of the discussion began in the 1960s-medical field Bioethics:

More information

AU TQF 2 Doctoral Degree. Course Description

AU TQF 2 Doctoral Degree. Course Description Course Description 1. Foundation Courses CP 5000 General Psychology Non-credit Basic psychological concepts and to introduce students to the scientific study of behavior. Learning and Behavior, Altered

More information

ADDITIONAL CASEWORK STRATEGIES

ADDITIONAL CASEWORK STRATEGIES ADDITIONAL CASEWORK STRATEGIES A. STRATEGIES TO EXPLORE MOTIVATION THE MIRACLE QUESTION The Miracle Question can be used to elicit clients goals and needs for his/her family. Asking this question begins

More information

A Computational Theory of Belief Introspection

A Computational Theory of Belief Introspection A Computational Theory of Belief Introspection Kurt Konolige Artificial Intelligence Center SRI International Menlo Park, California 94025 Abstract Introspection is a general term covering the ability

More information

Color Science and Spectrum Inversion: A Reply to Nida-Rümelin

Color Science and Spectrum Inversion: A Reply to Nida-Rümelin Consciousness and Cognition 8, 566 570 (1999) Article ID ccog.1999.0418, available online at http://www.idealibrary.com on Color Science and Spectrum Inversion: A Reply to Nida-Rümelin Peter W. Ross Department

More information

The Ethics of Humanitarian Aid and Cultural Conflicts: The case of Female Genital Mutilation. By: Leah Gervais. October 25, 2015

The Ethics of Humanitarian Aid and Cultural Conflicts: The case of Female Genital Mutilation. By: Leah Gervais. October 25, 2015 The Ethics of Humanitarian Aid and Cultural Conflicts: The case of Female Genital Mutilation By: Leah Gervais October 25, 2015 Ethical conflicts that arise with humanitarian international work, particularly

More information

Clinical Decision Support Systems. 朱爱玲 Medical Informatics Group 24 Jan,2003

Clinical Decision Support Systems. 朱爱玲 Medical Informatics Group 24 Jan,2003 Clinical Decision Support Systems 朱爱玲 Medical Informatics Group 24 Jan,2003 Outline Introduction Medical decision making What are clinical decision support systems (CDSSs)? Historical perspective Abdominal

More information

Appearance properties

Appearance properties Appearance properties phil 93507 Jeff Speaks October 12, 2009 1 The role of appearance properties If we think that spectrum inversion/spectrum shift of certain sorts is possible, and that neither of the

More information

Personal Philosophy of Leadership Kerri Young Leaders 481

Personal Philosophy of Leadership Kerri Young Leaders 481 Personal Philosophy of Kerri Young Leaders 481 Leaders are architects of standards and respect. In this Personal Philosophy of summary, I will examine different leadership styles and compare my personal

More information

Evolutionary Approach to Investigations of Cognitive Systems

Evolutionary Approach to Investigations of Cognitive Systems Evolutionary Approach to Investigations of Cognitive Systems Vladimir RED KO a,1 b and Anton KOVAL a Scientific Research Institute for System Analysis, Russian Academy of Science, Russia b National Nuclear

More information

Answers to end of chapter questions

Answers to end of chapter questions Answers to end of chapter questions Chapter 1 What are the three most important characteristics of QCA as a method of data analysis? QCA is (1) systematic, (2) flexible, and (3) it reduces data. What are

More information

Artificial Intelligence

Artificial Intelligence Politecnico di Milano Artificial Intelligence Artificial Intelligence From intelligence to rationality? Viola Schiaffonati viola.schiaffonati@polimi.it Can machine think? 2 The birth of Artificial Intelligence

More information

Abstracts. An International Journal of Interdisciplinary Research. From Mentalizing Folk to Social Epistemology

Abstracts. An International Journal of Interdisciplinary Research. From Mentalizing Folk to Social Epistemology Proto Sociology s An International Journal of Interdisciplinary Research 3 Vol. 16, 2002 Understanding the Social: New Perspectives from Epistemology Contents From Mentalizing Folk to Social Epistemology

More information

The eight steps to resilience at work

The eight steps to resilience at work The eight steps to resilience at work Derek Mowbray March 2010 derek.mowbray@orghealth.co.uk www.orghealth.co.uk Introduction Resilience is the personal capacity to cope with adverse events and return

More information

RELYING ON TRUST TO FIND RELIABLE INFORMATION. Keywords Reputation; recommendation; trust; information retrieval; open distributed systems.

RELYING ON TRUST TO FIND RELIABLE INFORMATION. Keywords Reputation; recommendation; trust; information retrieval; open distributed systems. Abstract RELYING ON TRUST TO FIND RELIABLE INFORMATION Alfarez Abdul-Rahman and Stephen Hailes Department of Computer Science, University College London Gower Street, London WC1E 6BT, United Kingdom E-mail:

More information