Christiane Bieber · Knut G. Müller · Jennifer Nicolai · Mechthild Hartmann · Wolfgang Eich


J Clin Psychol Med Settings (2010) 17:125-136
DOI 10.1007/s10880-010-9189-0

How Does Your Doctor Talk with You? Preliminary Validation of a Brief Patient Self-Report Questionnaire on the Quality of Physician-Patient Interaction

Christiane Bieber · Knut G. Müller · Jennifer Nicolai · Mechthild Hartmann · Wolfgang Eich

Published online: 10 March 2010
© Springer Science+Business Media, LLC 2010

Abstract  The quality of physician-patient interaction is increasingly being recognized as an essential component of effective treatment. The present article reports on the development and validation of a brief patient self-report questionnaire (QQPPI) that assesses the quality of physician-patient interactions. Data were gathered from 147 patients and 19 physicians immediately after consultations in a tertiary care outpatient setting. The QQPPI displayed good psychometric properties, with high internal consistency and good item characteristics. The QQPPI total score showed variability between different physicians and was independent of patients' gender, age, and education. The QQPPI featured high correlations with other quality-related measures and was not influenced by social desirability or by patients' clinical characteristics. The QQPPI is a brief patient self-report questionnaire that allows assessment of the quality of physician-patient interactions during routine ambulatory care. It can also be used to evaluate physician communication training programs or for educational purposes.

Keywords  Quality of physician-patient interaction · Validation of questionnaire · Quality of health care · Patient-centered care · Patient involvement · Shared decision-making

C. Bieber (corresponding author) · K. G. Müller · J. Nicolai · M. Hartmann · W. Eich
Department of Psychosomatic and General Internal Medicine, Centre for Psychosocial Medicine, University of Heidelberg, Thibautstraße 2, 69115 Heidelberg, Germany
e-mail: christiane.bieber@med.uni-heidelberg.de

Abbreviations
QQPPI  Questionnaire on the Quality of Physician Patient Interaction
QHC    Quality of health care
PSHC   Patient satisfaction with health care
PPS    Presumed patient satisfaction

In recent decades, physician-patient interactions in outpatient settings have become a focus of scientific interest (Barr, 2004; Detmar, Muller, Wever, Schornagel, & Aaronson, 2001; Ford, Schofield, & Hope, 2006; Frankel, 2004; Gericke, Schiffhorst, Busse, & Haussler, 2004; Kaplan, Greenfield, & Ware, 1989; Kjeldmand, Holmstrom, & Rosenqvist, 2006; Langewitz, Keller, Denz, Wössmer-Buntschu, & Kiss, 1995; Ong, de Haes, Hoos, & Lammes, 1995; Rimal, 2001; Roter et al., 1997; Safran et al., 2006). There is general agreement that high-quality physician-patient interactions should be considered an asset in themselves (Goldman Sher et al., 1997; Mead & Bower, 2000b, 2002) and that they also appear to be a precondition for effective treatment (Di Blasi, Harkness, Ernst, Georgiou, & Kleijnen, 2001; Swenson et al., 2004). Positive physician-patient interactions have favorable effects on patient satisfaction (Rosenberg, Lussier, & Beaudoin, 1997; Williams, Weinman, & Dale, 1998), treatment adherence (Anstiss, 2009; Schneider, Kaplan, Greenfield, Li, & Wilson, 2004), and medical outcomes (Safran et al., 1998; Stewart et al., 2000). Communication training programs for physicians aim at improving the quality of the physician-patient interaction to facilitate information exchange, patient participation, shared decision-making, and the development of a reliable physician-patient relationship (Bieber et al., 2006, 2008; Tiernan, 2003; Towle, Godolphin, Grams, & Lamarre, 2006; Weiner,

Barnet, Cheng, & Daaleman, 2005). To monitor the success of such communication training programs and to allow for individual feedback to physicians, an instrument assessing the quality of physician-patient interaction is desirable. In addition, such an instrument should ideally be brief enough to allow its use in routine care. However, no brief patient self-report questionnaire with adequate psychometric properties for measuring the quality of physician-patient interaction is currently available.

Several extensive patient self-report instruments focused on the general quality of care are currently at hand, which tap into the quality of physician-patient interactions with one of their several subscales (Bitzer, Dierks, Dörning, & Schwartz, 1999; Gericke et al., 2004; Grol et al., 1999; Safran et al., 2006). However, these instruments are rather extensive and therefore not practical for routine quality assessment or as add-on questionnaires in clinical trials. Furthermore, it is doubtful whether the specific subscales of interest can be used independently of the whole instrument. Additionally, because these instruments were developed in the context of large survey research projects, there is currently no published evidence for their validity or reliability in small study samples.

When looking for an instrument to assess the quality of physician-patient interaction, it is important to recognize other quality of care measures that assess related concepts (Baker, 1990; Barr, 2004; Gericke et al., 2004; Gremigni, Sommaruga, & Peltenburg, 2008; Grogan, Conner, Norman, Willits, & Porter, 2000; Hendriks, Vrielink, van Es, De Haes, & Smets, 2004; Mead & Bower, 2002; Mercer, Maxwell, Heaney, & Watt, 2004; Nicolai, Demmel, & Hagen, 2007; Rimal, 2001; Ross, Steward, & Sinacore, 1995; Safran et al., 2006; Sixma, Kerssens, Campen, & Peters, 1998; Wolf, Putnam, James, & Stiles, 1978).
Patient satisfaction is the most frequently assessed surrogate parameter in quality of care assessment (Baker, 1990; Gericke et al., 2004; Grogan et al., 2000; Langewitz et al., 1995; Ross et al., 1995; Wolf et al., 1978; Zandbelt, Smets, Oort, Godfried, & de Haes, 2004), but numerous related parameters exist, including satisfaction with decisions reached (Holmes-Rovner et al., 1996), patient-centeredness (Kjeldmand et al., 2006; Mead & Bower, 2000a), empathy (Mercer et al., 2004; Nicolai et al., 2007), active listening (Fassaert, van Dulmen, Schellevis, & Bensing, 2007), patient empowerment, patient involvement (Lerman et al., 1990), shared decision-making (Edwards et al., 2003; Elwyn et al., 2003; Simon et al., 2006), patients' general experience with care (Jenkinson, Coulter, & Bruster, 2002), as well as issues pertaining to the organization of the consultation (Grogan et al., 2000; Safran et al., 2006).

There are, however, several reservations regarding the use of these surrogate parameters to assess the quality of physician-patient interaction. Arguments against the use of patient satisfaction questionnaires state that they rarely assess observable physician behavior, often show a narrow range of scores with high ceiling effects (Garratt, Bjaertnes, Krogstad, & Gulbrandsen, 2005; Ross et al., 1995), barely detect any quality improvement, and are often tailored for an exclusive, disease-specific use (e.g., Barron & Kotak, 2006; Flood et al., 2006; Hagedoorn et al., 2003; Nordyke et al., 2006). Patient self-report instruments assessing shared decision behavior, another possible surrogate parameter for interaction quality (Edwards et al., 2003; Simon et al., 2006), are narrowly and exclusively focused on this specific aspect of physician-patient interaction and therefore also seem inappropriate for evaluating comprehensive physician communication training programs.
Another aspect to consider when assessing the quality of the physician-patient interaction is the question of perspective. It is possible to assess the patient's perspective (Bitzer et al., 1999; Gericke et al., 2004; Grogan et al., 2000; Langewitz et al., 1995; Safran et al., 2006), the physician's perspective (Hahn, 2001; Kjeldmand et al., 2006), or to use observation-based process measures in which independent raters evaluate recorded consultations (Cox, Smith, Brown, & Fitzpatrick, 2008; Elwyn et al., 2003; Mead & Bower, 2000a). The latter are considered the gold standard because they are the most objective; however, they tend to be too complex and time-consuming to implement in routine care (Mead & Bower, 2000a). Physician self-report scales are prone to bias because they reflect physicians' subjective perceptions of their own performance (Kjeldmand et al., 2006). Patient self-report measures therefore constitute a good compromise between reliability and feasibility, and they are increasingly being used for quality assessment and improvement of care.

The above considerations led us to conclude that there is a need to develop and validate a brief new instrument that directly assesses the quality of physician-patient interactions from the patient's perspective. Such an instrument should meet several requirements. First, it should transcend satisfaction ratings, which have been criticized as too superficial because they tend to overestimate the quality of care and often do not correspond with objective and observable characteristics of the consultation (Langewitz et al., 1995; Ross et al., 1995; Williams, Coyle, & Healy, 1998). Ideally, such an instrument should instead focus on observable physician behaviors that are taught in physician communication training programs and that are associated with high-quality care, such as relationship building, patient involvement, and sharing of decisions. As a further requirement, the instrument should not be biased by patients' social desirability tendencies or by socio-demographic or clinical variables. Brevity is another necessary condition, both to guarantee acceptability and to allow for potential implementation in routine care.

The aim of the present paper is to describe the development and preliminary validation of such a brief, 14-item patient self-report instrument, the Questionnaire on the Quality of Physician Patient Interaction (QQPPI). This questionnaire is practicable and efficient for use in routine care, and it can also be used to evaluate physician communication training programs. During the validation process, special attention was paid to controlling for variables known to affect the internal validity of rating scales, such as patients' tendencies toward social desirability or the instrument's susceptibility to bias by mental co-morbidities (Hahn et al., 1996; Ross et al., 1995).

Method

Study Sample and Procedures

The present study was carried out in four outpatient clinics of the Medical University Hospital of Heidelberg (outpatient clinics for rheumatology, pain, general internal medicine, and diabetes) between July 2003 and March 2004. Study approval was obtained from the ethical review board of the University of Heidelberg. On pre-selected days, all patients scheduled for a consultation were asked to participate in the study. Patients were eligible if they were between 18 and 75 years of age, had sufficient knowledge of the German language, and had no cognitive or visual impairments. Patients were approached in the waiting rooms of the four outpatient clinics by study personnel and asked to participate. Participation was voluntary and anonymous, and all participants gave written informed consent. Participants completed a set of questionnaires directly before the consultation (T0) and immediately after the consultation (T1), while still in the waiting room area. Three weeks after the consultation (T2), retest questionnaires were mailed to the participants, completed at home, and returned by mail in a prepaid envelope.
Additionally, physicians working in the outpatient clinics completed a short set of questionnaires immediately after the consultations (T1).

Measures

The present study investigated the validity of the newly developed Questionnaire on the Quality of Physician Patient Interaction (QQPPI; "Fragebogen zur Arzt-Patient-Interaktion", FAPI) by examining its performance when administered concurrently with other accepted quality-associated measures. Furthermore, several instruments were used to control for the influence of social desirability and of clinical characteristics (e.g., mental co-morbidity, functional capacity) and to allow for a comparison between patients' and physicians' perspectives. Table 1 gives an overview of all instruments and assessment points used.

Patients' Questionnaires

The QQPPI (see Appendix) was developed to directly assess the quality of the physician-patient interaction during a consultation. It places special emphasis on several aspects important to building a good physician-patient relationship, such as information exchange, patient involvement, and the sharing of decisions. It deliberately does not address organizational aspects surrounding the consultation that may matter for mere satisfaction ratings, such as waiting time, the premises of the clinic, or interaction with non-medical staff.

Item generation. In a preliminary survey, exploratory in-depth interviews were conducted with 20 patients with gastroenterological, cardiological, endocrinological, or rheumatological diagnoses who were attending an outpatient clinic for general internal medicine. Patients were asked to describe their expectations with regard to an adequate physician-patient relationship. This information allowed an expert panel to generate possible questionnaire items. In addition, existing English and German questionnaires were screened for appropriate items. All items were scrutinized for ambiguity and repetition. Overall, nine items from the in-depth interviews, two items from the German version of the Patient Satisfaction Questionnaire (PSQ) (Langewitz et al., 1995), and three items from the Grogan Patient Satisfaction Questionnaire (Grogan et al., 2000) were included in the final version of the questionnaire. The selected items were considered to cover the main aspects of the physician-patient interaction. All 14 items were rephrased to ensure that they were worded positively, avoided double negations, and did not confuse participants with a change of meaning in the response categories. Items are rated on a 5-point Likert scale from "I do not agree" to "I fully agree" (see Appendix). Content validity was addressed by using patient-generated issues from the initial interviews and by having an expert panel of physicians and patients review the instrument to determine whether each item captured the intended domain.

Assessment of Further Quality-Associated Measures

Several other quality-associated measures were used to assess the convergent validity of the QQPPI. This was necessary to confirm that the QQPPI measures a related, yet not identical, construct. Consequently, we expected moderate to substantial correlations (.3 to .7) between the QQPPI and these other quality-associated measures.

Table 1  Overview of instruments used in the present study at defined assessment points (T0 = before consultation; T1 = immediately after consultation; T2 = retest after 3 weeks)

Patients

Quality-associated measures
- Quality of physician-patient interaction: Questionnaire on the Quality of Physician Patient Interaction (QQPPI)
- Quality of health care: single item (QHC)
- Patient satisfaction with health care: single item (PSHC)
- Involvement in health care: Perceived Involvement in Care Scale (PICS)
- Satisfaction with decision: Satisfaction with Decision Scale (SWD)
- Social desirability: Balanced Inventory of Desirable Responding (BIDR)

Clinical characteristics
- Sociodemographic variables: basic documentation taken from the Psy-BADO
- Functional capacity: single item
- State of health: single item
- Quality of life: Short Form Scale (SF-12)
- Depression and somatization: Patient Health Questionnaire, German version (PHQ-D)
- Anxiety: anxiety subscale of the Hospital Anxiety and Depression Scale (HADS)

Physicians
- Diagnosis and presenting complaint: free text field
- Presumed patient satisfaction: single item (PPS)
- Physician-patient interaction: Difficult Doctor Patient Relationship Questionnaire (DDPRQ-10)

Psy-BADO stands for a standardized psychosocial basic documentation that is commonly used in German studies.

Patients' global assessment of quality of health care (QHC) and patient satisfaction with health care (PSHC) were each assessed with a single item on a 5-point Likert scale. Involvement in health care decisions was measured with two subscales of the Perceived Involvement in Care Scale (PICS) (Lerman et al., 1990): a scale assessing doctor facilitation of patient involvement (doctor facilitation scale; PICS-A) and a scale assessing the level of information exchange (patient information scale; PICS-B).
Satisfaction with the treatment decision was measured with the German version of the Satisfaction with Decision scale (SWD) (Holmes-Rovner et al., 1996). Social desirability is a possible threat to the validity of a measure; it was therefore assessed with the impression management subscale of the German version of the Balanced Inventory of Desirable Responding (BIDR) (Musch, Brockhaus, & Bröder, 2002).

Assessment of clinical characteristics. Patients' mental problems and other clinical characteristics were assessed in the present study because they can bias quality evaluations (Hahn et al., 1996). Patients' functional capacity and state of health were globally assessed with single-item measures using 5-point Likert scales. Quality of life was assessed with the 12-item version of the Short Form Scale (SF-12) (Bullinger & Kirchberger, 1998). The German version of the Patient Health Questionnaire (PHQ-D) (Löwe et al., 2002; Spitzer, Kroenke, & Williams, 1999) was used to screen participants for the existence and degree of mental problems; it scans for diagnoses according to DSM-IV criteria and assesses, among others, the severity of somatization and depression (Löwe et al., 2003, 2004). The seven-item anxiety subscale of the Hospital Anxiety and Depression Scale (HADS) (Zigmond & Snaith, 1983) was used to assess each patient's level of anxiety.

Physicians' Questionnaires

The physicians' set of questionnaires asked for the patients' diagnoses and the reasons for consultation. It globally assessed the presumed patient satisfaction (PPS) with a single item (5-point Likert scale). The Difficult Doctor Patient Relationship Questionnaire (DDPRQ) (Hahn

et al., 1996) was used to assess the physicians' view of the difficulty of interacting with the patient.

Statistical Analysis

Descriptive statistics were used to characterize the study sample. To obtain a QQPPI total score, the mean of all QQPPI item scores was calculated. A maximum of three missing values was considered acceptable for the QQPPI; missing values were estimated by means of two-way imputation (Sijtsma & van der Ark, 2003; van Ginkel & van der Ark, 2005). To assess the item and scale characteristics of the QQPPI, item difficulties and item-total correlations were calculated. To investigate the underlying structure of the QQPPI, the items were subjected to factor analysis using maximum likelihood factor extraction with oblique rotation (Promax). Three criteria were used to determine the number of factors to extract: the scree plot, the Kaiser-Guttman rule, and the interpretability of the solution. Internal consistency and test-retest reliability were used as indicators of the QQPPI's reliability; internal consistency was assessed by means of Cronbach's alpha, and test-retest reliability by means of Pearson correlation coefficients. Analyses of variance (ANOVAs) were calculated to detect differences in QQPPI total scores between the four outpatient clinics and between the 19 participating physicians. Convergent validity of the QQPPI was assessed via Spearman correlations of the QQPPI total score with global ratings of QHC, PSHC, SWD, and PICS. To identify possible confounders, correlations between the QQPPI total score and clinical characteristics (PHQ-D, SF-12, etc.) were also calculated. The influence of social desirability was assessed by calculating Spearman correlation coefficients between the BIDR score and the QQPPI total score, and between the BIDR score and the QQPPI single-item scores.
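The missing-value step can be sketched in a few lines. This is a minimal illustration of two-way imputation (person mean + item mean - overall mean, after Sijtsma & van der Ark, 2003), not the authors' actual analysis code; the tiny response matrix is invented for demonstration.

```python
def two_way_impute(matrix):
    """Replace missing Likert responses (None) with
    person mean + item mean - overall mean (two-way imputation)."""
    observed = [(i, j, v) for i, row in enumerate(matrix)
                for j, v in enumerate(row) if v is not None]
    overall = sum(v for _, _, v in observed) / len(observed)
    n_persons, n_items = len(matrix), len(matrix[0])
    person_mean = [sum(v for i, _, v in observed if i == p) /
                   sum(1 for i, _, _ in observed if i == p)
                   for p in range(n_persons)]
    item_mean = [sum(v for _, j, v in observed if j == q) /
                 sum(1 for _, j, _ in observed if j == q)
                 for q in range(n_items)]
    filled = [row[:] for row in matrix]
    for p in range(n_persons):
        for q in range(n_items):
            if filled[p][q] is None:
                filled[p][q] = person_mean[p] + item_mean[q] - overall
    return filled

def total_score(responses):
    """Scale total score: mean of the (imputed) item scores."""
    return sum(responses) / len(responses)

# Hypothetical 1-5 Likert responses; patient 2 skipped item 2.
ratings = [[4, 4], [2, None]]
imputed = two_way_impute(ratings)
```

After imputation, each patient's total score is simply the mean of that patient's 14 item scores.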
To assess whether physicians were able to estimate their patients' levels of satisfaction and how their own ratings corresponded, correlations between physicians' ratings (DDPRQ, PPS) and patients' ratings (QQPPI, PSHC, QHC) were calculated. We also analyzed how often physicians estimated their patients' satisfaction levels correctly. All statistical analyses were performed with SPSS (Version 15.0) and SAS (Release 8.2).

Results

Sample Characteristics

One hundred and fifty-four volunteers from four outpatient clinics participated in the study. Seven individuals were excluded due to incomplete QQPPI ratings. The final sample consisted of 147 outpatients (mean age 48.8 years; SD = 14.7; see Table 2) treated by 19 physicians (32% female). The test-retest sample after three weeks included 122 respondents (17% non-respondents). Compared with respondents, non-respondents were more likely to be unmarried, χ²(1, N = 147) = 6.7, p = .01. No other significant differences between respondents and non-respondents were found. The demographic and clinical characteristics of the participants are shown in Table 2; participant characteristics did not differ significantly between the four outpatient clinics.

Descriptive Item Characteristics and Scale Characteristics of the QQPPI

Item difficulties of the QQPPI ranged from .55 to .77, indicating good item characteristics. The mean QQPPI total score was 3.61 (SD = .92; median = 3.64; for means and SDs of single QQPPI items, see Table 3). Skewness was -.33 and kurtosis was -.49; the Kolmogorov-Smirnov test indicated that the QQPPI total score did not deviate significantly from a normal distribution (D = .067; p = .10). The QQPPI total score did not differ as a function of patients' gender, age, or level of education. The QQPPI total scores obtained by the 19 participating physicians ranged from 2.0 to 5.0 (SDs = .40 to 1.10).
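For a 1-to-5 Likert item, item difficulty is commonly computed by rescaling the item mean to the 0-1 range. The helper below is an illustrative sketch of that convention, not the authors' code; the example means are of the kind reported in Table 3.

```python
def item_difficulty(item_mean, low=1, high=5):
    """Rescale a Likert item mean to 0-1; mid-range values
    (neither floor nor ceiling effects) indicate good items."""
    return (item_mean - low) / (high - low)

# Illustrative item means similar to those in Table 3:
difficulties = [item_difficulty(m) for m in (3.17, 3.61, 4.05)]
```

Under this convention, item means between roughly 3.2 and 4.1 on a 5-point scale correspond to difficulties of about .55 to .77, the range reported above.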
The QQPPI total scores obtained in the four different outpatient clinics varied from 3.46 to 3.85 (SDs = .86 to 1.01), F(3, 143) = .80, ns. QQPPI total scores relating to consultations with female physicians (M = 3.76, SD = .27) did not differ significantly from those with male physicians (M = 3.73, SD = .89), F(1, 17) = .01, ns.

Factor Structure of the QQPPI

To examine the factors underlying the QQPPI, an exploratory factor analysis was conducted using the maximum-likelihood method of factor extraction. The Kaiser-Meyer-Olkin measure of sampling adequacy was .94, indicating that the correlation matrix was appropriate for factor analysis. Bartlett's test of sphericity yielded an approximate χ² of 3883.85, p < .001, indicating that the correlation matrix was not an identity matrix and that the data were therefore acceptable for factor analysis. One factor with an eigenvalue greater than 1 emerged, accounting for 60.11% of the total variance. Factor loadings ranged from .64 to .84, and communalities ranged from .40 to .70 (see Table 3).
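Under a one-factor solution, each item's communality h² is simply the squared loading, and the proportion of variance explained is the mean squared loading. The sketch below is illustrative only: it uses the 13 loadings legible in Table 3 (item 14 is truncated in the source), so the result can only approximate the reported 60.11%.

```python
# Loadings for items 1-13 as printed in Table 3 (item 14 truncated in source).
loadings = [.72, .84, .68, .79, .80, .83, .76, .82, .72, .66, .76, .76, .64]

# One-factor model: communality h^2 = squared loading.
communalities = [l * l for l in loadings]

# Proportion of total variance explained = mean squared loading.
variance_explained = sum(communalities) / len(communalities)
```

With these 13 loadings the mean squared loading comes out near .57, consistent (given rounding and the missing item) with the ~60% of variance reported for the full 14-item scale.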

Table 2  Demographic and clinical characteristics of study participants

Data are n (%) or mean (SD); study sample (N = 147) / retest sample (N = 122).

Age in years, mean (SD): 48.82 (14.65) / 49.14 (14.62)
Gender, female: 81 (55.10) / 68 (55.74)
Marital status, married, spouse present: 87 (59.18) / 78 (68.93)
Education level, ≥10 years of education: 76 (51.70) / 64 (52.46)
Employment status
  Full- or part-time working: 56 (38.90) / 47 (39.50)
  Unemployed: 17 (11.81) / 15 (12.61)
  Homemaker: 10 (6.94) / 6 (5.04)
  Retired due to age: 38 (26.39) / 33 (27.73)
  Retired due to disease: 14 (9.72) / 11 (9.24)
  In education: 9 (6.25) / 7 (5.88)
Functional capacity, mean (SD): 3.09 (1.10) / 3.08 (1.11)
State of health, mean (SD): 3.09 (0.94) / 3.07 (0.93)
PHQ-D, level of somatization
  Minimal: 49 (37.41) / 42 (38.89)
  Low: 30 (22.90) / 25 (23.15)
  Medium: 32 (24.43) / 23 (21.30)
  High: 20 (15.27) / 18 (16.67)
PHQ-D, level of depression severity
  Minimal: 65 (47.10) / 55 (47.83)
  Mild: 39 (28.26) / 32 (27.83)
  Moderate: 17 (12.32) / 12 (10.44)
  Moderately severe: 14 (10.15) / 14 (12.17)
  Severe: 3 (2.17) / 2 (1.74)
HADS-D, level of anxiety, mean (SD): 8.18 (2.43) / 8.25 (2.49)
SF-12, physical health status, mean (SD): 40.23 (12.35) / 40.53 (12.40)
SF-12, mental health status, mean (SD): 47.27 (12.36) / 48.36 (12.15)

Note: Functional capacity and state of health: range = 1-5. SF-12 (12-Item Short Form Health Survey): range = 0-100, with higher scores indicating more favorable status. PHQ-D (German version of the Patient Health Questionnaire): level of somatization, range = 0-30; level of depression severity, range = 0-27. HADS (German version of the Hospital Anxiety and Depression Scale): range = 0-21, with higher scores indicating greater severity.

Reliability Analysis of the QQPPI

Cronbach's alpha for the overall scale was .95.
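Cronbach's alpha follows directly from the item-level variance decomposition: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The sketch below is a generic illustration with invented toy data, not the study's analysis code.

```python
def cronbach_alpha(items):
    """items: one list of scores per questionnaire item
    (the columns of the person x item matrix)."""
    k, n = len(items), len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Per-person total scores across all items.
    totals = [sum(col[p] for col in items) for p in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Two toy items rated by four respondents:
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 2, 4, 4]])
```

Perfectly parallel items drive alpha to 1; the .95 reported above indicates very high internal consistency for the 14 QQPPI items.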
The test-retest reliability of the QQPPI over a 3-week retest period was r = .59, suggesting that the QQPPI score was relatively stable over time. To check whether the retest values of the QQPPI were biased by a time effect, all calculations were repeated with the retest scores of the QQPPI. No significant differences were found.

Convergent Validity of the QQPPI

To analyze the convergent validity of the QQPPI, we calculated correlations of the QQPPI with other quality-associated measures (see Table 4). As expected, all correlations were positive and significant. Most importantly, the QQPPI total score correlated substantially with the PICS-A and SWD scores (r = .64 and .59, respectively). There were also substantial but somewhat lower correlations with the global QHC assessment and the PICS-B scores (r = .54 and .52). The correlation between the QQPPI total score and the PSHC was moderate (r = .38). These results indicate that the QQPPI and most of the other quality-associated measures tap highly related constructs, whereas patient satisfaction as measured with the PSHC seems to be a somewhat different construct.

Social Desirability

The Spearman correlation coefficient between the QQPPI total score and social desirability, as measured with the German version of the BIDR, was r = -.05. The correlations between single QQPPI items and the BIDR score ranged from r = -.11 to r = .05. This indicates that the QQPPI is not severely biased by social desirability considerations.
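The convergent-validity coefficients above are Pearson product-moment correlations between scale totals (a Spearman coefficient is used only for the social desirability check). A minimal sketch of the Pearson computation, with hypothetical paired scale totals rather than the study's data:

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical QQPPI and PICS-A totals for five patients (illustration only):
qqppi = [3.2, 4.1, 2.8, 4.6, 3.9]
pics_a = [2.9, 3.8, 3.0, 4.4, 3.5]
print(round(pearson_r(qqppi, pics_a), 2))
```

A value near .6, like the reported r = .64 between QQPPI and PICS-A, means the two scales share roughly a third of their variance (r² ≈ .4) — related, but far from interchangeable.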

Table 3 Descriptive statistics, factor loadings, communalities, and percent of variance for maximum-likelihood extraction on the 14-item QQPPI (N = 147)

Item                                                                                   Mean (SD)    Loading  h2
1. The physician seemed to be genuinely interested in my problems                      4.05 (1.02)  .72      .52
2. The physician gave me detailed information about the available treatment options    3.89 (1.06)  .84      .70
3. I felt I could have trusted the physician with my private problems                  3.23 (1.33)  .68      .47
4. The physician and I made all treatment decisions together                           3.82 (1.13)  .79      .62
5. The physician's explanations were easy to understand                                3.93 (1.03)  .80      .64
6. The physician spent sufficient time on my consultation                              3.95 (1.18)  .83      .68
7. The physician spoke to me in detail about the risks and side-effects of the
   proposed treatment                                                                  3.54 (1.19)  .76      .57
8. The physician understood my needs and problems, and took them seriously             3.84 (1.08)  .82      .68
9. The physician did all he/she could to put me at ease                                3.48 (1.18)  .72      .52
10. The doctor asked about how my illness affects my everyday life                     3.17 (1.40)  .66      .44
11. The doctor gave me enough chance to talk about all my problems                     3.59 (1.20)  .76      .58
12. The physician respects the fact that I may have a different opinion regarding
    treatment                                                                          3.31 (1.17)  .76      .57
13. The physician gave me a thorough examination                                       3.20 (1.41)  .64      .40
14. The physician gave me detailed information about my illness                        3.50 (1.27)  .78      .61
Total score                                                                            3.61 (.92)
Eigenvalue                                                                             8.42
Percent of variance                                                                    60.11%

Note: QQPPI: Questionnaire on the Quality of Physician-Patient Interaction, range = 1-5, with higher scores indicating more favorable ratings.

Influence of Clinical Characteristics on Quality Assessment

To identify systematic bias caused by patients' health condition, we analyzed correlations of the quality-associated measures (QQPPI, PSHC, and QHC) with clinical characteristics (functional capacity, state of health, mental co-morbidity, and quality of life) (see Table 4). Whereas QQPPI total scores showed no significant correlation with any of these clinical characteristics, the PSHC and QHC showed low correlations with functional capacity and state of health. The single QQPPI items showed moderate correlations with functional capacity and quality of life, especially quality of life related to mental health status (SF-12). These results indicate that the QQPPI is less prone to systematic bias caused by patients' health condition than the other two quality-associated measures.

Comparison of Physicians' and Patients' Perspectives

Physicians' assessments of the difficulty of the physician-patient interaction (DDPRQ) did not correlate significantly with patients' QQPPI scores or with patients' QHC or PSHC ratings (see Table 4). This means that patients who are considered difficult can still be satisfied with the quality of the physician-patient interaction. Similarly, physicians' PPS assessments did not correlate significantly with QQPPI scores or PSHC ratings. Physicians estimated their patient's satisfaction correctly in only 27.3% of consultations, and underestimated their patients' satisfaction with health care in 56.7% of consultations.
Discussion

The aim of the present study was to develop and validate a brief patient self-report instrument on the quality of the physician-patient interaction, intended as an add-on questionnaire for routine use in ambulatory quality assessment. It had to go beyond the satisfaction ratings that are often used as surrogate parameters for the quality of the physician-patient interaction, because these are known to have shortcomings (Garratt et al., 2005; Ross et al., 1995; Sitzia, 1999). Besides its use in routine care, the instrument is also intended for evaluating physician communication training programs with a focus on relationship building, including patient involvement, information exchange, and shared decision-making. It therefore focuses on the physician-patient interaction as central to the consultation and disregards organizational and service issues surrounding the consultation.

Table 4 Correlations of patients' QQPPI total scores and other quality-associated measures with clinical characteristics and physicians' ratings at T1

Patients' ratings                                                    QQPPI     PSHC      QHC
Quality-associated measures
  Patient satisfaction with health care (PSHC)                       .38***              .58***
  Quality of health care (QHC)                                       .54***    .58***
  Satisfaction with decision (SWD)                                   .59***    .48***    .46***
  Perceived involvement in care, Doctor facilitation scale (PICS-A)  .64***    .49***    .51***
  Perceived involvement in care, Patient information scale (PICS-B)  .52***    .31***    .41***
Clinical characteristics
  Functional capacity (single item)                                  .06       .21*      .23*
  State of health (single item)                                      .13       .25**     .24**
  Physical health status (SF-12)                                     .03      -.10       .00
  Mental health status (SF-12)                                       .04       .08       .16
  Somatization (PHQ-D)                                              -.03      -.21*     -.06
  Depression (PHQ-D)                                                -.08      -.23**    -.13
  Anxiety (HADS-D)                                                  -.09      -.21*     -.08
Physicians' ratings
  Difficulty in relationship (DDPRQ-10)                             -.10      -.15      -.17
  Presumed patient satisfaction (PPS)                                .10       .17*      .28**

* p < .05; ** p < .01; *** p < .001

The main results of our study can be summarized as follows. First, the QQPPI showed good psychometric properties, with good item and scale characteristics. In contrast to other measures, the QQPPI total score was independent of patients' gender, age, and educational level (Garratt et al., 2005; Ross et al., 1995). Second, the results of the factor analysis suggest that the quality of the physician-patient interaction, as measured by the QQPPI, is a distinct, unidimensional construct. Third, reliability analysis revealed that the QQPPI is internally consistent (Cronbach's alpha = .95); its internal consistency is even slightly superior to that of established patient satisfaction scales (Gericke et al., 2004; Grogan et al., 2000; Langewitz et al., 1995). Fourth, evidence was obtained for convergent validity of the QQPPI with various other quality-associated questionnaires. Substantial associations were found between the QQPPI and patients' perceived involvement in care (PICS-A) (r = .64), satisfaction with decision (SWD) (r = .59), and the quality of health care (QHC) (r = .54). A moderate correlation was found with patient satisfaction with health care (PSHC) (r = .38). This indicates that the QQPPI assesses a related but distinct construct that is closer to quality than to satisfaction ratings.

During the validation process, we paid special attention to controlling for variables known to affect the internal validity of rating scales, namely patients' tendencies toward social desirability and the instrument's susceptibility to bias from patients' health status. Previous research has shown that patients with mental health problems have more difficult interactions with their physicians (Hahn, Thompson, Wills, Stern, & Budner, 1994) and may therefore be more critical in their evaluations. Quality assessment may also be biased by other clinical characteristics, such as functional capacity, state of health, and quality of life. In our study, we were able to show that the QQPPI is influenced by neither social desirability considerations nor patients' health status. This makes the QQPPI superior to simple quality or satisfaction assessments, which showed low correlations with patients' health status. The present study suggests that physicians cannot correctly predict their patients' satisfaction and quality ratings. They seem to be more critical of their own performance, because more than half of the physicians underestimated their patients' satisfaction and quality evaluations. This corresponds to the findings of other studies (Dobkin et al., 2003; Zandbelt et al., 2004) and highlights the importance of assessing the quality of

physician-patient interactions from multiple perspectives. It also underscores the importance of asking patients directly, because a physician is unlikely to guess his or her patient's appraisal correctly. It has been suggested that this discordance might be explained by social desirability considerations on the part of physicians and by patients' dependency considerations (Zandbelt et al., 2004), which may also apply to our sample. There was no systematic difference in QQPPI total scores among the four outpatient clinics. Interestingly, QQPPI total scores differed significantly at the level of the 19 physicians involved in the consultations, indicating that patients perceived considerable variation in their physicians' interaction qualities. However, it is beyond the scope of the present study to analyze the QQPPI's ability to differentiate between better and poorer communicators among physicians. Further studies concurrently assessing strictly objective measures of the physician-patient interaction are needed to show whether the QQPPI might be used to identify physicians who particularly require a communication training program. The present study shows the usefulness and applicability of the QQPPI for assessing the quality of physician-patient interactions during routine ambulatory care. In addition, the QQPPI is able to discriminate between the communication performances of individual physicians. Future research should also address the responsiveness of the instrument to change, e.g., does it measure improvements in the quality of the physician-patient interaction after communication skills training? In the meantime, there are reports of QQPPI implementation in a randomized controlled trial evaluating a comprehensive physician communication training program in the context of chronic pain (Bieber et al., 2006, 2008).
Its results support the validity of using the QQPPI as an outcome measure of communication training programs and further demonstrate that it can differentiate between the interaction skills of individual physicians. There are other potential applications for the QQPPI. Note that the instrument's items were derived from in-depth interviews with patients who described their expectations of an adequate physician-patient relationship; consequently, it captures the essence of patients' wishes and expectations well. One might consider using it as a teaching tool in medical schools to impart desirable physician behavior step by step. Instead of basing standardized patients' evaluations of the student doctor on subjective comments, the QQPPI might be used as a more objective adjunct to the evaluation. It might also be worth considering expanding the QQPPI to evaluate physicians and linking it to incentives or disincentives with respect to yearly goals. Depending on the type of physician practice, the QQPPI could be adapted and complemented in the future.

Limitations

Some limitations should be considered when interpreting the results of the present study. First, the QQPPI was developed and evaluated in a university outpatient setting, which is not necessarily representative of routine ambulatory care provided by office-based physicians. However, we expect only small setting effects on the direct quality of physician-patient interactions, in contrast to broader quality assessments that take into account organizational factors such as waiting time, facilities, access, or the competence of medical assistants (Gericke et al., 2004; Grogan et al., 2000). A second limitation is the reliance on physicians' and patients' self-reports.
Because self-report data may be influenced by global impressions unrelated to the actual quality of the physician-patient interaction, a comparison of QQPPI ratings with objective observer-based ratings of the physician-patient interaction will be necessary. We are currently tackling this challenge in another study. In summary, the QQPPI is a brief, valid, and reliable patient self-report instrument that allows efficient assessment of the quality of the physician-patient interaction in outpatient settings. It can be used in routine care and for evaluating physician communication training programs. Because QQPPI scores are independent of patient characteristics (i.e., age, gender, education) and are not confounded by social desirability or health status, the instrument is apt to identify genuine determinants of physician-patient interactions.

Acknowledgements We would like to thank all participating patients and physicians.

Appendix

Questionnaire on the Quality of Physician-Patient Interaction (QQPPI)

The following is a series of statements concerning today's consultation, including decisions and results. Please indicate to what extent you agree or disagree with these statements. There are 5 possible responses:

1 = I DO NOT AGREE
2 = I PARTLY AGREE
3 = I AGREE
4 = I STRONGLY AGREE
5 = I FULLY AGREE

Beside each question, please indicate the answer that best reflects the experiences you had in the course of today's consultation.

Appendix continued

1. The physician seemed to be genuinely interested in my problems. (a)
2. The physician gave me detailed information about the available treatment options.
3. I felt I could have trusted the physician with my private problems. (b)
4. The physician and I made all treatment decisions together.
5. The physician's explanations were easy to understand.
6. The physician spent sufficient time on my consultation.
7. The physician spoke to me in detail about the risks and side effects of the proposed treatment.
8. The physician understood my needs and problems and took them seriously.
9. The physician did all he/she could to put me at ease. (b)
10. The doctor asked about how my illness affects my everyday life. (a)
11. The doctor gave me enough time to talk about all my problems. (a)
12. The physician respects the fact that I may have a different opinion regarding treatment.
13. The physician gave me a thorough examination.
14. The physician gave me detailed information about my illness.

(a) These items were taken from the Grogan Patient Satisfaction Questionnaire (Grogan et al., 2000).
(b) These items were taken from the German version of the Patient Satisfaction Questionnaire (Langewitz et al., 1995).

References

Anstiss, T. (2009). Motivational interviewing in primary care. Journal of Clinical Psychology in Medical Settings, 16, 87-93.
Baker, R. (1990). Development of a questionnaire to assess patients' satisfaction with consultations in general practice. British Journal of General Practice, 40, 487-490.
Barr, D. A. (2004). Race/ethnicity and patient satisfaction: Using the appropriate method to test for perceived differences in care. Journal of General Internal Medicine, 19, 937-943.
Barron, R., & Kotak, A. (2006). Development of a patient satisfaction with treatment questionnaire for benign prostatic hyperplasia (BPH-PSTQ). Value in Health, 9, A55.
Bieber, C., Müller, K.
G., Blumenstiel, K., Hochlehnert, A., Wilke, S., Hartmann, M., et al. (2008). A shared decision-making communication training program for physicians treating fibromyalgia patients: Effects of a randomized controlled trial. Journal of Psychosomatic Research, 64, 13-20.
Bieber, C., Müller, K. G., Blumenstiel, K., Schneider, A., Richter, A., Wilke, S., et al. (2006). Long-term effects of a shared decision-making intervention on physician-patient interaction and outcome in fibromyalgia: A qualitative and quantitative one-year follow-up of a randomized controlled trial. Patient Education and Counseling, 63, 357-366.
Bitzer, E., Dierks, M., Dörning, H., & Schwartz, F. (1999). Zufriedenheit in der Arztpraxis aus Patientenperspektive - Psychometrische Prüfung eines standardisierten Erhebungsinstrumentes. Zeitschrift für Gesundheitswissenschaften, 7, 196-209.
Bullinger, M., & Kirchberger, I. (1998). SF-36. Fragebogen zum Gesundheitszustand. Göttingen: Hogrefe.
Cox, E. D., Smith, M. A., Brown, R. L., & Fitzpatrick, M. A. (2008). Assessment of the Physician-Caregiver Relationship Scales (PCRS). Patient Education and Counseling, 70, 69-78.
Detmar, S. B., Muller, M. J., Wever, L. D., Schornagel, J. H., & Aaronson, N. K. (2001). The patient-physician relationship. Patient-physician communication during outpatient palliative treatment visits: An observational study. JAMA, 285, 1351-1357.
Di Blasi, Z., Harkness, E., Ernst, E., Georgiou, A., & Kleijnen, J. (2001). Influence of context effects on health outcomes: A systematic review. Lancet, 357, 757-762.
Dobkin, P. L., De Civita, M., Abrahamowicz, M., Bernatsky, S., Schulz, J., Sewitch, M., et al. (2003). Patient-physician discordance in fibromyalgia. Journal of Rheumatology, 30, 1326-1334.
Edwards, A., Elwyn, G., Hood, K., Robling, M., Atwell, C., Holmes-Rovner, M., et al. (2003).
The development of COMRADE: A patient-based outcome measure to evaluate the effectiveness of risk communication and treatment decision making in consultations. Patient Education and Counseling, 50, 311-322.
Elwyn, G., Edwards, A., Wensing, M., Hood, K., Atwell, C., & Grol, R. (2003). Shared decision making: Developing the OPTION scale for measuring patient involvement. Quality and Safety in Health Care, 12, 93-99.
Fassaert, T., van Dulmen, S., Schellevis, F., & Bensing, J. (2007). Active listening in medical consultations: Development of the Active Listening Observation Scale (ALOS-global). Patient Education and Counseling, 68, 258-264.
Flood, E. M., Beusterien, K. M., Green, H., Shikiar, R., Baran, R. W., Amonkar, M. M., et al. (2006). Psychometric evaluation of the Osteoporosis Patient Treatment Satisfaction Questionnaire (OPSAT-Q), a novel measure to assess satisfaction with bisphosphonate treatment in postmenopausal women. Health and Quality of Life Outcomes, 4, 42.
Ford, S., Schofield, T., & Hope, T. (2006). Observing decision-making in the general practice consultation: Who makes which decisions? Health Expectations, 9, 130-137.
Frankel, R. M. (2004). Relationship-centered care and the patient-physician relationship. Journal of General Internal Medicine, 19, 1163-1165.
Garratt, A. M., Bjaertnes, O. A., Krogstad, U., & Gulbrandsen, P. (2005). The OutPatient Experiences Questionnaire (OPEQ): Data quality, reliability, and validity in patients attending 52 Norwegian hospitals. Quality and Safety in Health Care, 14, 433-437.
Gericke, C. A., Schiffhorst, G., Busse, R., & Haussler, B. (2004). Ein valides Instrument zur Messung der Patientenzufriedenheit in ambulanter haus- und fachärztlicher Behandlung: Das Qualiskope-A. Gesundheitswesen, 66, 723-731.
Goldman Sher, T., Cella, D., Leslie, W. T., Bonomi, P., Taylor, S. G., IV, & Serafian, B. (1997). Communication differences between physicians and their patients in an oncology setting.
Journal of Clinical Psychology in Medical Settings, 4, 281-293.
Gremigni, P., Sommaruga, M., & Peltenburg, M. (2008). Validation of the Health Care Communication Questionnaire (HCCQ) to measure outpatients' experience of communication with hospital staff. Patient Education and Counseling, 71, 57-64.
Grogan, S., Conner, M., Norman, P., Willits, D., & Porter, I. (2000). Validation of a questionnaire measuring patient satisfaction with general practitioner services. Quality in Health Care, 9, 210-215.
Grol, R., Wensing, M., Mainz, J., Ferreira, P., Hearnshaw, H., Hjortdahl, P., et al. (1999). Patients' priorities with respect to general practice care: An international comparison. European Task Force on Patient Evaluations of General Practice (EUROPEP). Family Practice, 16, 4-11.

Hagedoorn, M., Uijl, S. G., Van Sonderen, E., Ranchor, A. V., Grol, B. M., Otter, R., et al. (2003). Structure and reliability of Ware's Patient Satisfaction Questionnaire III: Patients' satisfaction with oncological care in the Netherlands. Medical Care, 41, 254-263.
Hahn, S. R. (2001). Physical symptoms and physician-experienced difficulty in the physician-patient relationship. Annals of Internal Medicine, 134, 897-904.
Hahn, S. R., Kroenke, K., Spitzer, R. L., Brody, D., Williams, J. B. W., Linzer, M., et al. (1996). The difficult patient: Prevalence, psychopathology, and functional impairment. Journal of General Internal Medicine, 11, 1-8.
Hahn, S. R., Thompson, K. S., Wills, T. A., Stern, V., & Budner, N. S. (1994). The difficult doctor-patient relationship: Somatization, personality and psychopathology. Journal of Clinical Epidemiology, 47, 647-657.
Hendriks, A. A., Vrielink, M. R., van Es, S. Q., De Haes, H. J., & Smets, E. M. (2004). Assessing inpatients' satisfaction with hospital care: Should we prefer evaluation or satisfaction ratings? Patient Education and Counseling, 55, 142-146.
Holmes-Rovner, M., Kroll, J., Schmitt, N., Rovner, D. R., Breer, M. L., Rothert, M. L., et al. (1996). Patient satisfaction with health care decisions: The satisfaction with decision scale. Medical Decision Making, 16, 58-64.
Jenkinson, C., Coulter, A., & Bruster, S. (2002). The Picker Patient Experience Questionnaire: Development and validation using data from in-patient surveys in five countries. International Journal for Quality in Health Care, 14, 353-358.
Kaplan, S. H., Greenfield, S., & Ware, J. E., Jr. (1989). Assessing the effects of physician-patient interactions on the outcomes of chronic disease. Medical Care, 27, 110-127.
Kjeldmand, D., Holmstrom, I., & Rosenqvist, U. (2006). How patient-centred am I? A new method to measure physicians' patient-centredness. Patient Education and Counseling, 62, 31-37.
Langewitz, W., Keller, A., Denz, M., Wössmer-Buntschu, B., & Kiss, A. (1995). Patientenzufriedenheits-Fragebogen (PZF): Ein taugliches Mittel zur Qualitätskontrolle der Arzt-Patient-Beziehung? Psychotherapie, Psychosomatik, Medizinische Psychologie, 45, 351-357.
Lerman, C. E., Brody, D. S., Caputo, G. C., Smith, D. G., Lazaro, C. G., & Wolfson, H. G. (1990). Patients' Perceived Involvement in Care Scale: Relationship to attitudes about illness and medical care. Journal of General Internal Medicine, 5, 29-33.
Löwe, B., Gräfe, K., Quenter, A., Buchholz, C., Zipfel, S., & Herzog, W. (2002). Screening psychischer Störungen in der Primärmedizin: Validierung des Gesundheitsfragebogens für Patienten (PHQ-D). Psychotherapie, Psychosomatik, Medizinische Psychologie, 52, 104-105.
Löwe, B., Gräfe, K., Zipfel, S., Spitzer, R. L., Herrmann-Lingen, C., Witte, S., et al. (2003). Detecting panic disorder in medical and psychosomatic outpatients: Comparative validation of the Hospital Anxiety and Depression Scale, the Patient Health Questionnaire, a screening question, and physicians' diagnosis. Journal of Psychosomatic Research, 55, 515-519.
Löwe, B., Spitzer, R. L., Gräfe, K., Kroenke, K., Quenter, A., Zipfel, S., et al. (2004). Comparative validity of three screening questionnaires for DSM-IV depressive disorders and physicians' diagnoses. Journal of Affective Disorders, 78, 131-140.
Mead, N., & Bower, P. (2000a). Patient-centredness: A conceptual framework and review of the empirical literature. Social Science and Medicine, 51, 1087-1110.
Mead, N., & Bower, P. (2000b). Measuring patient-centredness: A comparison of three observation-based instruments. Patient Education and Counseling, 39, 71-80.
Mead, N., & Bower, P. (2002). Patient-centred consultations and outcomes in primary care: A review of the literature. Patient Education and Counseling, 48, 51-61.
Mercer, S. W., Maxwell, M., Heaney, D., & Watt, G. C. (2004).
The consultation and relational empathy (CARE) measure: Development and preliminary validation and reliability of an empathy-based consultation process measure. Family Practice, 21, 699-705.
Musch, J., Brockhaus, R., & Bröder, A. (2002). Ein Inventar zur Erfassung von zwei Faktoren sozialer Erwünschtheit. Diagnostica, 48, 121-129.
Nicolai, J., Demmel, R., & Hagen, J. (2007). Rating Scales for the Assessment of Empathic Communication in Medical Interviews (REM): Scale development, reliability, and validity. Journal of Clinical Psychology in Medical Settings, 14, 367-375.
Nordyke, R. J., Chang, C. H., Chiou, C. F., Wallace, J. F., Yao, B., & Schwartzberg, L. S. (2006). Validation of a patient satisfaction questionnaire for anemia treatment, the PSQ-An. Health and Quality of Life Outcomes, 4, 28.
Ong, L. M., de Haes, J. C., Hoos, A. M., & Lammes, F. B. (1995). Doctor-patient communication: A review of the literature. Social Science and Medicine, 40, 903-918.
Rimal, R. N. (2001). Analyzing the physician-patient interaction: An overview of six methods and future research directions. Health Communication, 13, 89-99.
Rosenberg, E. E., Lussier, M. T., & Beaudoin, C. (1997). Lessons for clinicians from physician-patient communication literature. Archives of Family Medicine, 6, 279-283.
Ross, C. K., Steward, C. A., & Sinacore, J. M. (1995). A comparative study of seven measures of patient satisfaction. Medical Care, 33, 392-406.
Roter, D. L., Stewart, M., Putnam, S. M., Lipkin, M., Jr., Stiles, W., & Inui, T. S. (1997). Communication patterns of primary care physicians. JAMA, 277, 350-356.
Safran, D. G., Karp, M., Coltin, K., Chang, H., Li, A., Ogren, J., et al. (2006). Measuring patients' experiences with individual primary care physicians: Results of a statewide demonstration project. Journal of General Internal Medicine, 21, 13-21.
Safran, D. G., Taira, D. A., Rogers, W. H., Kosinski, M., Ware, J. E., & Tarlov, A. R. (1998).
Linking primary care performance to outcomes of care. Journal of Family Practice, 47, 213-220.
Schneider, J., Kaplan, S. H., Greenfield, S., Li, W., & Wilson, I. B. (2004). Better physician-patient relationships are associated with higher reported adherence to antiretroviral therapy in patients with HIV infection. Journal of General Internal Medicine, 19, 1096-1103.
Sijtsma, K., & van der Ark, L. A. (2003). Investigation and treatment of missing item scores in test and questionnaire data. Multivariate Behavioral Research, 38, 505-528.
Simon, D., Schorr, G., Wirtz, M., Vodermaier, A., Caspari, C., Neuner, B., et al. (2006). Development and first validation of the Shared Decision-Making Questionnaire (SDM-Q). Patient Education and Counseling, 63, 319-327.
Sitzia, J. (1999). How valid and reliable are patient satisfaction data? An analysis of 195 studies. International Journal for Quality in Health Care, 11, 319-328.
Sixma, H. J., Kerssens, J. J., Campen, C. V., & Peters, L. (1998). Quality of care from the patients' perspective: From theoretical concept to a new measuring instrument. Health Expectations, 1, 82-95.
Spitzer, R. L., Kroenke, K., & Williams, J. B. (1999). Validation and utility of a self-report version of PRIME-MD: The PHQ primary care study. Primary Care Evaluation of Mental Disorders. Patient Health Questionnaire. JAMA, 282, 1737-1744.
Stewart, M., Brown, J. B., Donner, A., McWhinney, I. R., Oates, J., Weston, W. W., et al. (2000). The impact of patient-centered care on outcomes. Journal of Family Practice, 49, 796-804.
Swenson, S. L., Buell, S., Zettler, P., White, M., Ruston, D. C., & Lo, B. (2004). Patient-centered communication: Do patients really prefer it? Journal of General Internal Medicine, 19, 1069-1079.