What Respondents Learn from Questionnaires: The Survey Interview and the Logic of Conversation


Survey Methodology Program Working Paper Series

What Respondents Learn from Questionnaires: The Survey Interview and the Logic of Conversation

Norbert Schwarz
Survey Methodology Program, Institute for Social Research, University of Michigan, Ann Arbor, MI

What Respondents Learn from Questionnaires: The Survey Interview and the Logic of Conversation

Norbert Schwarz
Institute for Social Research
University of Michigan
Ann Arbor, MI
Norbert.Schwarz@um.cc.umich.edu

January 1995

The 1993 Morris Hansen Lecture, Washington Statistical Society. To appear in International Statistical Review.

A previous version of this article was delivered as the 1993 Morris Hansen Lecture at a meeting of the Washington Statistical Society, Washington, D.C., November 4, 1993. The reported research was supported by grant SWF from the Bundesminister für Forschung und Technologie of the Federal Republic of Germany to the author; by grants Schw 278/2 and Str 264/2 from the Deutsche Forschungsgemeinschaft to the author and Fritz Strack; and by grant Schw 278/5 from the Deutsche Forschungsgemeinschaft to the author, Herbert Bless, and Gerd Bohner.

Although Morris Hansen is probably best known for his work on sampling theory, he has also worked extensively on non-sampling error in surveys. In his seminal paper, "Response Errors in Surveys", published in the Journal of the American Statistical Association in 1951 (Hansen, Hurwitz, Marks, & Mauldin, 1951), he observed, "Response errors may be due to the questionnaire design, the interviewing approach, the characteristics, attitudes, or knowledge of the respondent, or a great many other causes. Regardless of the source, any systematic attempt to control or measure response errors must be based on a clear formulation of the way they arise" (Hansen et al., 1951, p. 147). Providing this "clear formulation," however, has proven difficult. As a result, survey methodology has long been characterized by rigorous theories of sampling on the one hand, and the so-called "art of asking questions" on the other hand. It has only been recently that we have seen the development of conceptual frameworks that allow us to specify the processes that underlie at least some of the response effects in survey measurement (e.g., Feldman & Lynch, 1988; Schwarz, 1993; Schwarz & Bless, 1992; Strack & Martin, 1987; Tourangeau & Rasinski, 1988).

These frameworks grew out of the recent collaboration of cognitive and social psychologists and survey methodologists. This collaboration was initiated by two conferences, one held under the auspices of the National Academy in 1983 (see Jabine et al., 1984) and the other held at ZUMA, a German social science center, in 1984 (see Hippler, Schwarz, & Sudman, 1987). In the ten years since these initial conferences, work on cognitive aspects of survey measurement has developed at a rapid pace, several edited volumes have been published (e.g., Jobe & Loftus, 1991; Schwarz & Sudman, 1992, 1993; Tanur, 1992), and a first textbook is about to go into press (Sudman, Bradburn, & Schwarz, in press). Moreover, several major survey centers, in the U.S. as well as in Europe, have established cognitive laboratories to help with questionnaire development.

Drawing on psychological theories of language comprehension, memory, and judgment, researchers have begun to formulate explicit models of the question answering process and have tested these models in tightly controlled laboratory experiments and split-ballot surveys. This work links survey methodologists' expertise in the "art of asking questions" to recent developments in cognitive science, thus providing a useful theoretical and empirical basis for the explanation and control of response effects in survey measurement. To date, much of this work has focused on the impact of features of the questionnaire, including question wording, question order, and the choice of response alternatives. Although this focus on questionnaire variables neglects other sources of response variance in survey measurement, it is well justified on empirical grounds. As Sudman and Bradburn (1974) observed in their seminal review of response effects, questionnaire variables account for a much larger chunk of nonsampling error than either interviewer or respondent variables. In line with this focus on questionnaire variables, I will explore how features of the questionnaire structure respondents' understanding of the question and the judgmental processes that underlie their answers.

As researchers, we typically think of questionnaires as instruments that we use to elicit information from respondents. However, our questionnaires are also instruments with which we convey information to respondents. Unfortunately, we are often not fully aware of what it is that we do convey. As a result, we are surprised by the strong impact of apparently minor, and seemingly unimportant, variations in our questionnaires. In response to these surprises, survey methodologists have concluded over and over again that respondents engage in superficial processing and are apparently happy to provide meaningless answers. In the present paper, however, I want to suggest the opposite. Far from providing superficial answers, our respondents work hard at making sense of the questions we ask. In doing so, they draw extensively on the information that we provide in our questionnaires. But given that we are not fully aware of what the information is that we provide, we are often surprised by the answers and blame the respondents, rather than the questionnaire. So, let us first consider respondents' tasks.

Respondents' Tasks

From a cognitive perspective, answering a survey question requires that respondents solve several tasks, as Roger Tourangeau and many others have noted (see Strack & Martin, 1987; Tourangeau, 1984; Tourangeau & Rasinski, 1988). As a first step, respondents have to interpret the question to understand what is meant. If the question is an opinion question, they may either retrieve a previously formed opinion from memory, or they may "compute" an opinion on the spot. To do so, they need to retrieve relevant information from memory to form a mental representation of the attitude object that they are to evaluate. If the question is a behavioral question, they need to determine which behavior they are supposed to report. Moreover, they have to retrieve or reconstruct relevant instances from memory and may need to determine if these instances fall within the reference period. Once respondents have formed an opinion or an estimate of their behavior, they have to communicate it to the researcher. To do so, they may need to format their judgment to fit the response alternatives provided as part of the question. Moreover, respondents may wish to edit their response before they communicate it, reflecting a desire to present themselves in a positive light. Accordingly, interpreting the question, generating an opinion or a representation of the relevant behavior, formatting the response, and editing the answer are the main psychological components of a process that starts with respondents' exposure to a survey question and ends with their overt report (see Strack & Martin, 1987; Tourangeau & Rasinski, 1988).

Although these tasks have been primarily investigated for household surveys, they hold as well for establishment surveys. From a cognitive perspective, the key difference is that respondents in an establishment survey are more likely to rely on records than are respondents in a household survey, which may decrease reporting errors. On the other hand, establishment surveys are more likely to use self-administered questionnaires, which may compound problems at the question comprehension stage, due to a lack of potential clarification from the interviewer. Here, I will mainly address issues of question comprehension, which arise in all research situations, be they household surveys, establishment surveys, or psychological experiments. Moreover, some of my examples will be drawn from the domain of attitude and opinion measurement, which has not been a key aspect of government surveys in the past. However, President Clinton's recent executive order regarding the measurement of customer satisfaction is likely to change this, requiring agencies to address issues of attitude measurement. Hence, I will deliberately draw on examples pertaining to satisfaction measurement where appropriate.

Question Comprehension

When one looks at textbooks of survey methodology, one comes away with the impression that question comprehension is mainly an issue of choosing the right words. We are typically advised to avoid unfamiliar terms, to use simple wordings, and so on. Whereas all of this is true, the focus on words misses an essential point: Language comprehension is not about words per se, it is about speaker meaning (see Clark & Schober, 1992). For example, when I ask you, "What have you done today?", you have no difficulty understanding the words. Yet, do you know which information you are to provide? Should you report, for example, that you had a cup of coffee, that you took a shower, or what else? To determine which information you are to provide, you have to make inferences about the questioner's intentions. What is it that the questioner wants to know? To make these inferences, respondents draw on the tacit assumptions that govern the conduct of conversation in daily life. These tacit assumptions have been articulated by Paul Grice (1975), a philosopher of language.

According to Grice's analysis, conversations proceed according to a co-operativeness principle. This principle can be expressed in the form of four deceptively simple maxims. There is a maxim of quality that enjoins speakers not to say anything they believe to be false or lack adequate evidence for, and a maxim of relation that asks speakers to make their contribution relevant to the aims of the ongoing conversation. In addition, a maxim of quantity requires speakers to make their contribution as informative as is required, but not more informative than is required, while a maxim of manner holds that the contribution should be clear rather than obscure, ambiguous, or wordy. In other words, speakers should try to be informative, truthful, relevant, and clear. As a result, "communicated information comes with a guarantee of relevance", as Sperber and Wilson (1986, p. vi) noted, and listeners interpret speakers' utterances "on the assumption that they are trying to live up to these ideals" (Clark & Clark, 1977, p. 122). And if the speaker does not live up to these ideals, listeners may either ask for clarification or may use the context of the speaker's utterance to determine the intended meaning.

In the survey interview, asking for clarification is often not a very helpful thing to do. Many survey organizations explicitly instruct their interviewers to reiterate the question without changing its wording. And in the domain of attitude surveys, the interviewer may even be instructed to respond, "Whatever it means to you". In such cases, respondents have to resort to the context of the utterance to make sense of it. In the survey interview, this context includes preceding questions as well as formal features of the questionnaire, such as the response scales provided to respondents. That respondents draw on the context of a question to determine its meaning is anything but a deplorable artifact. It is what we all do in each and every conversation -- but what we want respondents to avoid in that particular type of conversation called "survey interview".

Making Sense of the Question Asked

Question Context and Fictitious Issues

To begin with an extreme case, consider research in which respondents are asked to report their opinion about a highly obscure -- or even completely fictitious -- issue, such as the "Agricultural Trade Act of 1978" (e.g., Bishop, Tuchfarber, & Oldendick, 1986; Schuman & Presser, 1981). Questions of this type reflect public opinion researchers' concern that the "fear of appearing uninformed" may induce "many respondents to conjure up opinions even when they had not given the particular issue any thought prior to the interview" (Erikson, Luttbeg, & Tedin, 1988, p. 44). To explore how meaningful respondents' answers are, survey researchers introduced questions about issues that do not exist. Presumably, respondents' willingness to report an opinion on a fictitious issue casts some doubt on the reports provided in survey interviews in general. In fact, about 30% to 50% of respondents do typically provide an answer to issues that are invented by the researcher. This has been interpreted as evidence for the operation of social pressure that induces respondents to give meaningless answers in the absence of any knowledge.

From a conversational point of view, however, these responses may be more meaningful than has typically been assumed. From this point of view, the sheer fact that a question about some issue is asked presupposes that this issue exists -- or else asking a question about it would violate each and every norm of conversational conduct. Respondents have no reason to assume that the researcher would ask meaningless questions and will hence try to make sense of the question. If the question is highly ambiguous, and the interviewer does not provide additional clarification, respondents are likely to turn to the context of the ambiguous question to determine its meaning, much as they would be expected to do in any other conversation. Once respondents have assigned a particular meaning to the issue, thus transforming the fictitious issue into a better defined issue that makes sense in the context of the interview, they may have no difficulty in reporting a subjectively meaningful opinion. Even if they have not given the particular issue much thought, they may easily identify the broader set of issues to which this particular one apparently belongs. If so, they can use their general attitude toward the broader set of issues to determine their attitude toward this particular one.

An experimental survey on educational policies may illustrate this point (Strack, Schwarz, & Wänke, 1991, Experiment 1). In this study, we asked a sample of German college students to report their attitude toward the German government's alleged plan to introduce an "educational contribution". For some subjects, this target question was preceded by a question that asked them to estimate the average tuition fees that students have to pay at U.S. universities (in contrast to Germany, where university education is free). Others had to estimate the amount of money that the Swedish government pays every student as financial support. As expected, students' attitude toward an "educational contribution" was more favorable when the preceding question referred to money that students receive from the government than when it referred to tuition fees that students have to pay. Subsequently, respondents were asked what the "educational contribution" implied. Content analyses of respondents' definitions of the fictitious issue clearly demonstrated that respondents used the context of the "educational contribution" question to determine its meaning. Thus, respondents turned to the content of related questions to determine the meaning of an ambiguous one. In doing so, they interpreted the ambiguous question in a way that made sense of it, and subsequently provided a subjectively meaningful response to their definition of the question.

This finding stands in stark contrast to the assumption that responses to ill-defined terms are largely random in nature, representing something like a mental flip of a coin, as Converse (1964) and other early researchers hypothesized. As our results indicate, the assumption of random responding does not capture what is going on. What is at the heart of reported opinions about fictitious issues is not that respondents are willing to give subjectively meaningless answers by flipping a coin, but that researchers violate conversational rules by asking meaningless questions in a context that suggests otherwise. Our respondents, however, have no reason to suspect this may be the case and work hard at making sense of the question asked. To do so, they draw on the context of the question, much as they would be expected to do in any other conversation. In the survey interview, this context includes preceding questions, as we have seen in the present example, as well as the response alternatives offered by the researcher, to which I turn next.

Response Alternatives

Rating Scales

With regard to response alternatives, let me again begin with an extreme case, namely the numeric values provided on a rating scale. According to textbook knowledge, an 11-point rating scale is an 11-point rating scale, independent of how the eleven points are graphically represented in the layout of the questionnaire. Hence, the scales shown in Figure 1 are presumably all the same.

Figure 1 about here

What we typically care about is the wording of the question and the nature of the verbal labels used to anchor the endpoints of the scale (see Dawes & Smith, 1985, for a review). Empirically, however, the specific numerical values used may strongly affect respondents' interpretation of the question asked. Suppose that I ask you, "How successful would you say you have been in life?", accompanied by a rating scale ranging from "not at all successful" to "extremely successful". What is the meaning of these endpoint labels? Does "not at all successful" refer to the absence of outstanding accomplishments or to the presence of explicit failure? In several studies, we observed that respondents referred to the numeric values presented as part of the rating scale to determine the meaning of the question (Schwarz, Knäuper, Hippler, Noelle-Neumann, & Clark, 1991).

In one of our studies, conducted with a representative sample of German adults, the rating scale ranged either from "not at all successful" = -5 to "extremely successful" = +5, or from "not at all successful" = 0 to "extremely successful" = 10. Table 1 shows the results.

Table 1 about here

Whereas 34 percent of the respondents endorsed a value below the midpoint of the 0 to 10 scale, only 13 percent endorsed one of the formally equivalent values on the -5 to +5 scale. In addition, an inspection of the distributions along both scales indicated that the responses were displaced towards the high end of the -5 to +5 scale, as compared to the 0 to 10 scale. This is also reflected in markedly different standard deviations for both scales. Subsequent experiments (Schwarz et al., 1991) indicated that this impact of numeric values is due to differential interpretations of the ambiguous endpoint label "not at all successful". When this label is combined with the numeric value "0", respondents interpret it to refer to the absence of noteworthy success. However, when the same label is combined with the numeric value "-5", they interpret it to refer to the presence of explicit failure. These interpretations reflect that a minus-to-plus format emphasizes the bipolar nature of the dimension that the researcher has in mind, implying that one endpoint label refers to the opposite of the other. Hence, "not at all successful" is interpreted as reflecting the opposite of success, that is, failure. In contrast, a rating scale format that presents only positive values suggests that the researcher has a unipolar dimension in mind. In that case, the scale values reflect different degrees of the presence of the crucial feature. Hence, "not at all successful" is now interpreted as reflecting the mere absence of noteworthy success, rather than the presence of failure.

This differential interpretation of the same term is also reflected in the inferences that judges draw on the basis of a report given along the different scales. For example, in a follow-up experiment (Schwarz et al., 1991, Experiment 3), a fictitious student reported his academic success along one of the described scales, checking either a "2" or a formally equivalent "-3". When we asked subjects to estimate how often this student had failed an exam, they assumed that he had failed twice as often when he checked a "-3" than when he checked a "2", although both values are formally equivalent along the rating scales used.

In combination, these findings illustrate that "even the most unambiguous words show a range of meaning, or a degree of 'semantic flexibility', (...) that is constrained by the particular context in which these words occur" (Woll, Weeks, Fraps, Pendergrass, & Vanderplas, 1980, p. 60). Assuming that all contributions to an ongoing conversation are relevant, respondents turn to the context of a word to disambiguate its meaning, much as they would be expected to do in daily life. In a research situation, however, the contributions of the researcher include apparently formal features of questionnaire design, rendering them an important source of information of which respondents make systematic use (see Schwarz, in press; Schwarz & Hippler, 1991, for more detailed discussions). Far from demonstrating superficial and meaningless responding, findings of this type indicate that respondents systematically exploit the information available to them in an attempt to understand their task and to provide a meaningful answer.
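
For readers who want to probe such numeric-format effects in their own data, the comparison behind Table 1 can be sketched in a few lines of code. The sketch below is a minimal illustration with invented response counts, not the data of Schwarz et al. (1991); the variable names and numbers are ours.

    # Minimal sketch (hypothetical counts, not the Schwarz et al., 1991 data):
    # how many respondents fall below the scale midpoint when the same 11-point
    # scale is numbered 0 to 10 versus -5 to +5?
    from collections import Counter

    # Invented frequency distributions over the eleven scale points.
    zero_to_ten = Counter({0: 2, 1: 3, 2: 5, 3: 9, 4: 11, 5: 20, 6: 19, 7: 15, 8: 10, 9: 4, 10: 2})
    minus_five_to_plus_five = Counter({-5: 1, -4: 1, -3: 2, -2: 3, -1: 5, 0: 15, 1: 20, 2: 23, 3: 17, 4: 8, 5: 5})

    def percent_below_midpoint(counts, midpoint):
        """Share of respondents endorsing a value below the scale midpoint."""
        total = sum(counts.values())
        below = sum(n for value, n in counts.items() if value < midpoint)
        return 100.0 * below / total

    print(percent_below_midpoint(zero_to_ten, 5))              # midpoint of the 0 to 10 format is 5
    print(percent_below_midpoint(minus_five_to_plus_five, 0))  # the formally equivalent midpoint of -5 to +5 is 0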

Frequency Scales

Question interpretation. The same theme is reiterated by research into behavioral frequency reports. Suppose, for example, that I ask you to report how frequently you were "really irritated" recently. Before you can give an answer, you must figure out what I mean by "really irritated". Does this refer to major irritations such as fights with your spouse, or does it refer to minor irritations such as having to wait for service in a restaurant? In one of our studies (Schwarz, Strack, Müller, & Chassein, 1988), we asked respondents how frequently they felt really irritated and provided them with frequency alternatives that ranged either from "less than once a year" to "more than once every 3 months", or from "less than twice a week" to "several times a day". Subsequently, we asked respondents to describe a typical example of their irritating experiences. As expected, respondents described less extreme examples of annoying experiences when presented with the high rather than the low frequency response alternatives. Given that major annoyances are unlikely to occur "several times a day", respondents who were given the high frequency response alternatives inferred that the researcher must have less severe experiences in mind than respondents who were given the low frequency response alternatives, which obviously referred to rare events. Thus, the response alternatives changed the meaning of the question stem. Accordingly, the same question stem combined with different frequency scales is likely to assess different experiences.

Frequency estimates. However, the impact of frequency scales is not limited to respondents' interpretation of the question. Rather, these scales provide a rich source of information that respondents use in making a variety of different judgments. Specifically, respondents assume that we as researchers constructed a meaningful scale that reflects our knowledge about the distribution of the behavior addressed in the question. When presented with a set of frequency alternatives of the type shown in Table 2, respondents assume that values in the middle range of the scale reflect the "average" or "typical" behavior, whereas the extremes of the scale correspond to the extremes of the distribution. These assumptions do not only influence respondents' interpretation of the question, but also their behavioral reports and related judgments.

Table 2 about here

In one of our studies (Schwarz, Hippler, Deutsch, & Strack, 1985, Experiment 1), we asked German respondents to report their TV consumption along one of the scales shown in Table 2. Answers along these scales can be compared by computing the percentage of respondents who endorsed values of more or less than 2.5h a day. As expected, the results showed a pronounced impact of the response alternatives. Whereas 37.5 percent of our respondents reported watching TV for 2.5h or more a day when presented with the high frequency response alternatives, only 16.2 percent reported doing so when presented with the low frequency response alternatives (see Table 2). In subsequent studies, we observed that the frequency range of the response alternatives influenced the reported frequency of doctor visits, headaches, alcohol consumption, and a host of other behaviors (see Schwarz & Scheuring, 1991; Schwarz, 1990).

These findings reflect that mundane behaviors of a high frequency, such as watching TV, seeing a doctor, or having a drink, are not represented in memory as distinct episodes (see Bradburn, Rips, & Shevell, 1987; Schwarz, 1990, for reviews). Rather, the highly similar episodes blend together in a generic representation of the behavior that lacks temporal markers. Accordingly, respondents cannot recall the episodes to determine the frequency of the behavior but have to rely on an estimation strategy (see Menon, in press, for a more detailed discussion). In doing so, they use the range of the scale presented to them as a frame of reference. This strategy results in higher frequency estimates along scales that present high rather than low frequency response alternatives. Not surprisingly, respondents' reliance on the frame of reference suggested by the response alternatives increases as their knowledge about relevant episodes decreases (Schwarz & Bienias, 1990), or as the complexity of the judgmental task increases (Bless, Bohner, Hild, & Schwarz, 1992). More importantly, however, the impact of response alternatives is completely eliminated when the informational value of the response alternatives is called into question. For example, telling respondents that they are participating in a pretest designed to explore the adequacy of the response alternatives, or informing student subjects that the scale was taken from a survey of the elderly, wiped out the otherwise obtained impact of response alternatives (Schwarz & Hippler, unpublished data). Again, these findings illustrate that respondents assume the researcher to be a cooperative communicator, whose contributions are relevant to the ongoing conversation, unless the implicit guarantee of relevance is called into question.
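
The split-ballot tabulation described above can be sketched in the same spirit. The category boundaries and counts below are invented for illustration and are not the scales or data of Schwarz et al. (1985); only the 2.5-hour cut-off comes from the text.

    # Sketch of the split-ballot comparison (invented categories and counts).
    # Each tuple: (lower bound of the precoded category in hours of TV a day,
    # number of respondents choosing it).
    low_frequency_form = [(0.0, 12), (0.5, 28), (1.0, 25), (1.5, 14), (2.0, 11), (2.5, 10)]
    high_frequency_form = [(0.0, 35), (2.5, 25), (3.0, 15), (3.5, 12), (4.0, 8), (4.5, 5)]

    def percent_reporting_at_least(form, hours):
        """Share of respondents whose chosen category starts at or above the cut-off."""
        total = sum(n for _, n in form)
        at_or_above = sum(n for lower, n in form if lower >= hours)
        return 100.0 * at_or_above / total

    # Compare both forms at the common cut-off of 2.5 hours a day.
    print(percent_reporting_at_least(low_frequency_form, 2.5))
    print(percent_reporting_at_least(high_frequency_form, 2.5))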

Comparative judgments. Finally, we also observed that the frequency range of the response alternatives influences subsequent comparative judgments. Given the assumption that the scale reflects the distribution of the behavior, checking a response alternative is the same as locating one's own position in the distribution. For example, checking 2h on the low frequency scale shown in Table 2 implies that a respondent's TV consumption is above average, whereas checking the same value on the high frequency scale implies that his or her TV consumption is below average. As a result, our respondents reported that TV plays a more important role in their leisure time (Schwarz et al., 1985, Experiment 1) when they had to report their TV consumption on the low rather than on the high frequency scale -- despite the fact that they had just reported a lower absolute TV consumption to begin with. And in a related experiment, respondents described themselves as less satisfied with the variety of things they do in their leisure time (Schwarz et al., 1985, Experiment 2) when the scale suggested that they watch more TV than most other people (see also Schwarz & Scheuring, 1988).

The comparison information provided by the response scale is particularly relevant in the domain of satisfaction measurement, an issue that is likely to plague many of you in the mandated assessment of customer satisfaction. In a study that bears directly on customer satisfaction (Schwarz & Kraft, unpublished), we asked students to report their satisfaction with the interlibrary loan service provided by the University of Heidelberg. In assessing customers' satisfaction with some service, it seems a good idea to have them review their own experience with the respective service first. Hence, we asked respondents how many days they had to wait for the book they ordered, which is a key determinant of satisfaction with the library service. However, we provided our respondents either with a scale that ranged from "1 day" to "5 days and more", or with a scale that ranged from "less than 5 days" to "9 days and more". Whereas the first scale suggests that waiting periods of more than 5 days are rare, the second scale suggests that anything less than 5 days is rare. As expected, the student customers reported higher satisfaction with the service that they personally received when their waiting period seemed low rather than high in the context of the respective scale. Thus, the response scale used to assess their actual service experience provided a frame of reference that respondents used in forming a satisfaction judgment.

Finally, frame of reference effects of this type are not limited to respondents themselves, but influence the users of their reports as well. For example, we observed in a related study (Schwarz, Bless, Bohner, Harlacher, & Kellenbenz, 1991, Experiment 2) that experienced medical doctors considered having the same physical symptom twice a week to reflect a more severe medical condition when "twice a week" was a high rather than a low response alternative on the symptom checklist presented to them.
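
The frame-of-reference logic can also be made explicit in code: the same absolute answer occupies a very different relative position on the two forms, and it is this relative position that feeds into subsequent comparative judgments. The category boundaries below are hypothetical, not the actual scales used in the studies.

    # Sketch of the relative position implied by the same answer on two forms
    # (hypothetical category lower bounds, in hours of TV a day).
    low_frequency_categories = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
    high_frequency_categories = [0.0, 2.5, 3.0, 3.5, 4.0, 4.5]

    def relative_position(categories, answer):
        """Rank of the chosen category as a share of the scale range (0 = bottom, 1 = top)."""
        chosen = max(i for i, lower in enumerate(categories) if lower <= answer)
        return chosen / (len(categories) - 1)

    print(relative_position(low_frequency_categories, 2.0))   # 0.8: looks 'above average'
    print(relative_position(high_frequency_categories, 2.0))  # 0.0: looks 'below average'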

Conclusions

The examples that I have reviewed so far all illustrate how respondents use features of the questionnaire to determine the meaning of a question and to generate a useful answer. The findings that we obtained are usually considered measurement "artifacts". From a conversational point of view, however, they simply reflect that respondents apply the tacit assumptions that govern the conduct of conversations in daily life to the research situation. Hence, they assume that every contribution is relevant to the goal of the ongoing conversation -- and in a research situation, these contributions include preceding questions as well as apparently formal features of the questionnaire, such as the numeric values presented as part of a rating scale or the response alternatives presented as part of a frequency question. As a result, the scales we use are anything but "neutral" measurement devices. Rather, they constitute a source of information that respondents actively use in determining their task and in constructing a reasonable answer. While research methodologists have traditionally focused on the information that is provided by the wording of the question, we do need to pay equal attention to the information that is conveyed by apparently formal features of the questionnaire.

However, the norms of conversational conduct do not only license our use of the context of an utterance to determine its meaning. Rather, conversational norms also constrain the type of answer that is considered appropriate. This second aspect of conversational norms underlies another set of apparent "artifacts" in survey measurement.

Making One's Answer Informative

In general, cooperative speakers are supposed to provide information that is relevant to the goal of the conversation. This not only implies that the provided information should be substantively related to the topic of the conversation. Rather, it also implies that the provided information should be new to the recipient (Clark & Clark, 1977). Hence, the utterance should not reiterate information that the recipient already has, or may take for granted anyway. Accordingly, determining which information one should provide requires extensive inferences about the information that the recipient already has, in order to identify what is, or is not, "informative". An example, taken from Strack and Martin (1987), may illustrate this point. Compare the following two question-answer sequences:

Sequence A
Question: How is your family?
Answer: ...

Sequence B
Question: How is your spouse?
Answer: ...
Question: And how is your family?

What does the term "family" refer to in these sequences? In Sequence A it seems to include the spouse, but in Sequence B this does not seem to be the case. Having already provided information about the spouse, we are now likely to interpret the term "family" to refer to the remaining family members, excluding the spouse. This interpretation reflects that conversational norms ask us to avoid redundancy. Accordingly, we do not reiterate information that we have already provided, but interpret the question as a request for "new" information. In psycholinguistics, this is known as the "given-new contract", which asks speakers to provide new information, rather than to reiterate information that has already been given (see Clark & Clark, 1977).

General and Specific Questions

In survey interviews, this aspect of conversational norms is particularly relevant when we ask a series of general as well as specific questions about a topic, as Bradburn (1982, 1983) suspected more than a decade ago. Let's consider an example that is particularly relevant to the measurement of customer satisfaction: What should be assessed first -- customers' global satisfaction with an agency, or their satisfaction with specific aspects of the services offered? Or doesn't it make a difference? A study on general life-satisfaction may serve as an analogue that bears on these issues. In that study (Schwarz, Strack, & Mai, 1991), we asked a community sample of German adults living in Heidelberg to report their general life-satisfaction as well as their marital satisfaction, and presented both questions in different orders.

Table 3 about here

As shown in the first column of Table 3, both measures were moderately correlated, r = .32, when the life-satisfaction question preceded the marital satisfaction question. This suggests that one's marital satisfaction does contribute to one's overall life-satisfaction but may not be the most crucial determinant. Reversing the question order, however, increased the correlation to r = .67, suggesting that marital satisfaction may well be the most important determinant of general well-being. This order effect reflects a phenomenon that cognitive psychologists have studied very extensively (see Bodenhausen & Wyer, 1987, for a review). When we form a judgment, we hardly recall all the information that might be potentially relevant. Rather, we truncate the information search as soon as enough information has come to mind to form a judgment with sufficient certainty. As a result, our judgments are based on the subset of relevant information that comes to mind most easily. This is usually information that we have recently used, for example, to answer a preceding question. In the present case, answering the marital satisfaction question brought information about one's marriage to mind, and this information was most likely to be considered when the more general life-satisfaction question was asked later on. This interpretation is supported by a highly similar correlation of r = .61 when the wording of the general question explicitly asked respondents to include their marriage in evaluating their overall life-satisfaction.

In another condition, however, we deliberately evoked the conversational norm of nonredundancy. To do so, we introduced both questions by a joint lead-in that read, "We now have two questions about your life. The first pertains to your marital satisfaction and the second to your general life-satisfaction." Under this condition, the same question order that resulted in r = .67 without a joint lead-in now produced a low and nonsignificant correlation of r = .18. This suggests that respondents deliberately ignored information that they had already provided in response to a specific question when making a subsequent general judgment, if the specific and the general questions were assigned to the same conversational context, thus evoking the application of conversational norms that prohibit redundancy. In that case, respondents apparently interpreted the general question as if it referred to aspects of their life that they had not yet reported on. In line with this interpretation, a condition in which respondents were explicitly asked how satisfied they are with "other aspects" of their life, "aside from their relationship", yielded a nearly identical correlation of r = .20.

This pattern of findings is not only reflected in the correlations, but also in the means. Compared to the condition in which the general life-satisfaction question was asked first, unhappily married respondents reported lower life-satisfaction when the marital satisfaction question brought information about their unhappy marriage to mind. When the joint lead-in evoked the norm of non-redundancy, however, these respondents excluded their unhappy marriage from consideration, resulting in reports of higher general life-satisfaction relative to the baseline condition. The reports of happily married respondents, on the other hand, provided a mirror image of these findings.

Suppose, however, that we do not restrict our specific questions to only one domain, such as respondents' marriage, but assess their satisfaction with several different domains before we ask the general question. What should happen in that case? On theoretical grounds, we may expect that asking questions about several specific domains decreases the impact of each domain on the general judgment, because the larger number of specific questions brings a more varied set of information to mind. We tested this prediction by asking other respondents to report their satisfaction with three domains of life, namely their job, their leisure time, and their marriage, before we assessed their general life-satisfaction. These data are shown in the second column of Table 3.

In this case, asking respondents about their job, their leisure time, and finally their marriage still increased the impact of marital satisfaction on the subsequent question of general life-satisfaction from r = .32 to r = .46, but this increase was much less pronounced than the increase to r = .67 that we observed when marital satisfaction was the only specific question asked. This reflects that the questions about job and leisure time satisfaction brought additional information to mind, thus reducing the impact of information about one's marriage.

More importantly, however, asking several specific questions also changes the conversational implications of a joint lead-in. If only one specific question precedes the general one, the repeated use of the information on which the answer to the specific question was based results in redundancy in the response to the general question. Hence, this repeated use of the same information is avoided if both questions are assigned to the same conversational context, as we have seen. If several specific questions precede the general one, however, respondents may interpret the general question in two different ways. On the one hand, they may assume that it is a request to consider still other aspects of their life, much as if it were worded, "Aside from everything you already told us, how satisfied are you with the remaining aspects of your life?" On the other hand, they may interpret the general question as a request to integrate the previously reported aspects into an overall judgment, much as if it were worded, "Taking all these aspects together, how satisfied are you with your life as a whole?". Note that this interpretational ambiguity does not arise if only one specific question is asked. In that case, an interpretation of the general question in the sense of "taking all aspects together" would make little sense, because only one specific aspect was addressed to begin with. If several specific questions are asked, however, both interpretations of the general question are viable. In this case, the interpretation of the general question as a request for a final integrative summary judgment is legitimate from a conversational point of view. If several specific questions have been asked, an integrative judgment is informative because it does provide "new" information about the relative importance of the respective domains, which are in the focus of the conversation. Moreover, "summing up" at the end of a series of related thoughts is acceptable conversational practice -- whereas there is little to sum up if only one thought was offered. Accordingly, respondents may interpret a general question as a request for a summary judgment if it is preceded by several specific ones, even if all questions are explicitly placed into the same conversational context.

Our data support this prediction, as shown in the second column of Table 3. When marital satisfaction was the only specific question asked, placing the general and the specific question into the same conversational context by a joint lead-in significantly reduced the correlation from r = .67 to r = .18, as we have already seen. When three specific questions were asked, however, the correlation of r = .46 was not affected by the lead-in, r = .48. This indicates that respondents adopted a "taking-all-aspects-together" interpretation of the general question if it was preceded by three, rather than one, specific questions.
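
For readers who want to run this kind of analysis on their own split-ballot data, the condition-wise coefficients reported in Table 3 amount to computing Pearson's r separately within each question-order condition. The sketch below uses invented ratings and condition labels; it is not the Schwarz, Strack, and Mai (1991) data.

    # Sketch of the condition-wise correlational analysis (invented 11-point
    # satisfaction ratings; requires Python 3.10+ for statistics.correlation).
    from statistics import correlation

    # For each condition: (marital satisfaction ratings, general life-satisfaction ratings).
    conditions = {
        "general question asked first": ([7, 4, 9, 5, 8, 3, 6, 9], [6, 5, 7, 6, 8, 5, 5, 8]),
        "marital question asked first": ([7, 4, 9, 5, 8, 3, 6, 9], [7, 4, 9, 5, 8, 4, 6, 9]),
    }

    for label, (marital, general) in conditions.items():
        print(label, round(correlation(marital, general), 2))  # Pearson's r within the condition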

This "taking-all-aspects-together" interpretation is further supported by a similar correlation of r = .53 when the general question was reworded to request an integrative judgment, and a highly dissimilar correlation of r = .11 when the reworded question required the consideration of other aspects of one's life.

In combination, these findings emphasize that the interpretation of an identically worded question may change as a function of conversational variables, resulting in markedly different responses. Moreover, the emerging differences are not restricted to the means or margins of the response distribution, as social scientists have frequently hoped. Rather, context variables may result in different correlational patterns, thus violating the assumption that context effects would be restricted to differences in the means, whereas the relationship between variables would be "form resistant" (Schuman & Duncan, 1974; Stouffer & DeVinney, 1949). In the present case, we may conclude that marriage is the major determinant of life-satisfaction, as indicated by a correlation of .67; pretty irrelevant, as indicated by a correlation of .18; or somewhere in between, depending on the specifics of question order and conversational belongingness. As government agencies move into the assessment of customer satisfaction, issues of this type are likely to provide more than one headache.

Summary

Let me sum up at this point. Depending on your point of view, you may adopt one of two very different summaries. The first summary emphasizes that our respondents were happy to provide an answer to a question about a completely fictitious "educational contribution"; were biased by the numeric values of a rating scale; and were equally biased by a set of haphazardly chosen frequency response alternatives. Moreover, their evaluation of the overall quality of their life not only depended on what happened to come to mind due to a preceding question, but was biased by variables such as the number of specific questions asked and the presence or absence of a lead-in. And I could extend this list by including numerous other phenomena that time did not allow me to cover today. In combination, these findings seem to confirm survey methodologists' wildest nightmares: Obviously, the responses we obtain are strongly affected by apparently irrelevant features of the research instrument, calling the meaning of the responses into question. This summary, by and large, reflects the traditional approach to response effects in survey measurement, which are typically referred to as "artifacts" or "errors".

Yet, there is a second summary. This summary emphasizes the systematic and meaningful nature of the observed findings. According to this summary, our respondents worked hard to make sense of an ambiguous question about an obscure "educational contribution" by turning to the context of the question to determine its meaning. Moreover, wondering what may be meant by "not at all successful", they turned to the numeric values of the rating scale to determine if "not at all successful" referred to the absence of outstanding achievements or to the presence of explicit failures. And having to estimate their TV consumption, which they could not recall from memory, they used a piece of information that they could assume to be useful: namely, the scale provided by the researcher, who presumably is an expert on the issue under investigation. Little did they know that we had constructed a scale that was likely to lead them astray. And finally, when asked to answer related questions, they paid close attention to the information they had provided earlier. In fact, they did their best to provide useful information by avoiding potential redundancy.

Does this type of thing reflect that our respondents gave superficial and meaningless answers? Hardly so. What they did is exactly what they would be supposed to do in any conversation other than the survey interview. Specifically, they behaved according to the co-operative principle that governs the conduct of conversation in everyday life. They assumed that every contribution to the conversation is relevant to its goals. In the survey interview, the contributions of the researcher include apparently formal aspects of the questionnaire, such as response scales, lead-ins, and the like. And much as in any other conversation, our respondents referred to apparently related contributions to determine the meaning of the questions asked. Moreover, they tried to make their own contribution as informative as possible and assumed that the researcher is not interested in reiterations of information they had already provided. All of this does not reflect superficial responding, but adequate conversational conduct.

The one thing that our respondents missed is that we, as researchers, did not obey conversational norms. We violated each and every norm of conversational conduct by asking a question about an issue that does not exist. And we made contributions that were of dubious relevance by picking arbitrary response alternatives. Our respondents, however, had no reason to suspect this and gave us more credit than we deserved. As a result, they were influenced by features of our contributions that we as researchers would usually consider irrelevant.

Why is it, then, that most of us feel that the reviewed findings pose a problem? Shouldn't we be happy that our respondents are co-operative communicators, who by and large do their best to provide useful information? I think we should. What renders the reviewed findings problematic is not the behavior of our respondents, but our own misleading assumptions about question comprehension. We have long assumed that a given question has a given meaning, independent of the context in which it is asked or the response scales that are provided. After all, the wording of the question stem remains the same, and the question stem is what survey methodologists have been most concerned with. But question comprehension is not about words per se. Question comprehension is about speaker meaning. And to infer what the speaker has in mind, listeners refer to the context of the utterance. Hence, the same question acquires a different meaning in a different context -- and when asked a different question, respondents give a different answer. In fact, we would hope they do so. What renders this fact problematic is only that we are often not aware of the extent to which minor changes in our questionnaire affect the specific meaning of the question asked. Rather than blaming respondents for superficial answers, we should therefore turn our attention to our own conversational conduct, asking ourselves what the information is that we convey in our research instruments.

Doing so is likely to bring us a long way toward the understanding of response effects that Morris Hansen requested more than four decades ago.

His variance components models of survey response error provided an important tool for identifying different sources of response variation. Subsequent applications of these models demonstrated that the major source of response variation is the specific nature of the task that we present to respondents, as Sudman and Bradburn (1974) observed 20 years later. If we want to understand the systematic processes that underlie this source of response variance, thus reducing some of the randomness in this variance component, we need to move from the "art of asking questions" to the scientific analysis of the question answering process. Current psychological theories of language comprehension, memory, judgment, and interpersonal behavior provide the relevant theoretical framework. Moreover, the derived principles are amenable to empirical testing in systematic experimentation, in surveys as well as in the psychological laboratory. Whereas there is much that remains to be learned, I hope that today's lecture has illustrated that the recent collaboration of cognitive psychologists and survey methodologists holds considerable promise for our understanding of some of the key problems of survey measurement. After all, "any systematic attempt to control response errors must be based on a clear formulation of the way they arise" (Hansen et al., 1951, p. 147), as Morris Hansen observed four decades ago.

References

Bishop, G. F., Oldendick, R. W., & Tuchfarber, R. J. (1986). Opinions on fictitious issues: The pressure to answer survey questions. Public Opinion Quarterly.
Bless, H., Bohner, G., Hild, T., & Schwarz, N. (1992). Asking difficult questions: Task complexity increases the impact of response alternatives. European Journal of Social Psychology, 22.
Bless, H., Strack, F., & Schwarz, N. (in press). The informative functions of research procedures: Bias and the logic of conversation. European Journal of Social Psychology.
Bodenhausen, G. V., & Wyer, R. S. (1987). Social cognition and social reality: Information acquisition and use in the laboratory and the real world. In H. J. Hippler, N. Schwarz, & S. Sudman (Eds.), Social information processing and survey methodology (pp. 6-41). New York: Springer Verlag.
Bradburn, N. (1982). Question wording effects in surveys. In R. Hogarth (Ed.), Question framing and response consistency. San Francisco: Jossey-Bass.
Bradburn, N. M. (1983). Response effects. In P. H. Rossi & J. D. Wright (Eds.), The handbook of survey research. New York: Academic Press.
Bradburn, N. M., Rips, L. J., & Shevell, S. K. (1987). Answering autobiographical questions: The impact of memory and inference on surveys. Science.
Clark, H. H. (1977). Inferences in comprehension. In D. LaBerge & S. Samuels (Eds.), Basic processes in reading: Perception and comprehension. Hillsdale, NJ: Erlbaum.
Clark, H. H. (1985). Language use and language users. In G. Lindzey & E. Aronson (Eds.), Handbook of social psychology (Vol. 2). New York: Random House.
Clark, H. H., & Clark, E. V. (1977). Psychology and language. New York: Harcourt, Brace, Jovanovich.
Clark, H. H., & Haviland, S. E. (1977). Comprehension and the given-new contract. In R. O. Freedle (Ed.), Discourse production and comprehension (pp. 1-40). Hillsdale, NJ: Erlbaum.
Clark, H. H., & Schober, M. F. (1992). Asking questions and influencing answers. In J. M. Tanur (Ed.), Questions about questions. New York: Russell Sage.
Converse, P. E. (1964). The nature of belief systems in mass publics. In D. Apter (Ed.), Ideology and discontent. New York: Free Press of Glencoe.
Converse, P. E. (1970). Attitudes and non-attitudes: Continuation of a dialogue. In E. R. Tufte (Ed.), The quantitative analysis of social problems. Reading, MA: Addison-Wesley.
Dawes, R. M., & Smith, T. (1985). Attitude and opinion measurement. In G. Lindzey & E. Aronson (Eds.), Handbook of social psychology (Vol. 2). New York: Random House.
Erikson, R. S., Luttbeg, N. R., & Tedin, K. T. (1988). American public opinion (3rd ed.). New York: Macmillan.
Feldman, J. M., & Lynch, J. G. (1988). Self-generated validity and other effects of measurement on belief, attitude, intention, and behavior. Journal of Applied Psychology, 73.
Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics: Vol. 3. Speech acts. New York: Academic Press.
Hansen, M. H., Hurwitz, W. N., Marks, E. S., & Mauldin, W. P. (1951). Response errors in surveys. Journal of the American Statistical Association, 46.
Hippler, H. J., Schwarz, N., & Sudman, S. (Eds.) (1987). Social information processing and survey methodology. New York: Springer Verlag.
Jabine, T. B., Straf, M. L., Tanur, J. M., & Tourangeau, R. (Eds.) (1984). Cognitive aspects of survey methodology: Building a bridge between disciplines. Washington, DC: National Academy Press.
Jobe, J., & Loftus, E. (Eds.) (1991). Cognitive aspects of survey methodology. Special issue of Applied Cognitive Psychology, 5.
Menon, G. (in press). Judgments of behavioral frequency: Memory search and retrieval strategies. In N. Schwarz & S. Sudman (Eds.), Autobiographical memory and the validity of retrospective reports. New York: Springer Verlag.
Schuman, H., & Duncan, O. D. (1974). Questions about attitude survey questions. In H. L. Costner (Ed.), Sociological methodology. San Francisco: Jossey-Bass.
Schuman, H., & Kalton, G. (1985). Survey methods. In G. Lindzey & E. Aronson (Eds.), Handbook of social psychology (Vol. 1). New York: Random House.
Schuman, H., & Presser, S. (1977). Question wording as an independent variable in survey analysis. Sociological Methods and Research, 6.
Schuman, H., & Presser, S. (1981). Questions and answers in attitude surveys. New York: Academic Press.
Schwarz, N. (1990). Assessing frequency reports of mundane behaviors: Contributions of cognitive psychology to questionnaire construction. In C. Hendrick & M. S. Clark (Eds.), Research methods in personality and social psychology (Review of Personality and Social Psychology, Vol. 11). Beverly Hills, CA: Sage.
Schwarz, N. (1993). Context effects in attitude measurement. Bulletin of the International Statistical Institute (49th Session, Vol. 1). Florence, Italy: ISI.
Schwarz, N. (in press). Judgment in a social context: Biases, shortcomings, and the logic of conversation. In M. Zanna (Ed.), Advances in experimental social psychology (Vol. 26). San Diego, CA: Academic Press.
Schwarz, N., & Bienias, J. (1990). What mediates the impact of response alternatives on frequency reports of mundane behaviors? Applied Cognitive Psychology, 4.
Schwarz, N., Bless, H., Bohner, G., Harlacher, T. J., & Kellenbenz, M. (1991). Response scales as frames of reference: The impact of frequency range on diagnostic judgment. Applied Cognitive Psychology, 5.
Schwarz, N., & Hippler, H. J. (1987). What response scales may tell your respondents: Informative functions of response alternatives. In H. J. Hippler, N. Schwarz, & S. Sudman (Eds.), Social information processing and survey methodology. New York: Springer Verlag.
Schwarz, N., & Hippler, H. J. (1991). Response alternatives: The impact of their choice and ordering. In P. Biemer, R. Groves, N. Mathiowetz, & S. Sudman (Eds.), Measurement error in surveys. Chichester: Wiley.
Schwarz, N., Hippler, H. J., Deutsch, B., & Strack, F. (1985). Response scales: Effects of category range on reported behavior and subsequent judgments. Public Opinion Quarterly, 49.
Schwarz, N., Knäuper, B., Hippler, H. J., Noelle-Neumann, E., & Clark, F. (1991). Rating scales: Numeric values may change the meaning of scale labels. Public Opinion Quarterly, 55.
Schwarz, N., & Scheuring, B. (1988). Judgments of relationship satisfaction: Inter- and intraindividual comparison strategies as a function of questionnaire structure. European Journal of Social Psychology, 18.
Schwarz, N., & Scheuring, B. (1991). Die Erfassung gesundheitsrelevanten Verhaltens: Kognitionspsychologische Aspekte und methodologische Implikationen (The assessment of health-relevant behaviors). In J. Haisch (Ed.), Gesundheitspsychologie: Zur Sozialpsychologie der Prävention und Krankheitsbewältigung. Heidelberg, FRG: Asanger.
Schwarz, N., & Strack, F. (1991a). Context effects in attitude surveys: Applying cognitive theory to social research. In W. Stroebe & M. Hewstone (Eds.), European review of social psychology (Vol. 2). Chichester: Wiley.
Schwarz, N., & Strack, F. (1991b). Evaluating one's life: A judgment model of subjective well-being. In F. Strack, M. Argyle, & N. Schwarz (Eds.), Subjective well-being: An interdisciplinary perspective. Oxford: Pergamon.
Schwarz, N., Strack, F., & Mai, H. P. (1991). Assimilation and contrast effects in part-whole question sequences: A conversational logic analysis. Public Opinion Quarterly, 55.
Schwarz, N., Strack, F., Müller, G., & Chassein, B. (1988). The range of response alternatives may determine the meaning of the question: Further evidence on informative functions of response alternatives. Social Cognition, 6.
Schwarz, N., & Sudman, S. (Eds.) (1992). Context effects in social and psychological research. New York: Springer Verlag.
Schwarz, N., & Sudman, S. (Eds.) (1993). Autobiographical memory and the validity of retrospective reports. New York: Springer Verlag.
Sperber, D., & Wilson, D. (1986). Relevance: Communication and cognition. Cambridge, MA: Harvard University Press.
Stouffer, S. A., & DeVinney, L. C. (1949). How personal adjustment varied in the army - by background characteristics of the soldiers. In S. A. Stouffer, E. A. Suchman, L. C. DeVinney, S. A. Star, & R. M. Williams (Eds.), The American soldier: Adjustment during army life. Princeton, NJ: Princeton University Press.
Strack, F. (in press). Urteilsprozesse in standardisierten Befragungen: Kognitive und kommunikative Einflüsse (Judgmental processes in standardized interviews: Cognitive and communicative influences). Heidelberg, FRG: Springer Verlag.
Strack, F. (1992a). Order effects in survey research: Activative and informative functions of preceding questions. In N. Schwarz & S. Sudman (Eds.), Context effects in social and psychological research. New York: Springer Verlag.
Strack, F. (1992b). The different routes to social judgments: Experiential versus informational strategies. In L. Martin & A. Tesser (Eds.), The construction of social judgment. Hillsdale, NJ: Erlbaum.
Strack, F., & Martin, L. (1987). Thinking, judging, and communicating: A process account of context effects in attitude surveys. In H. J. Hippler, N. Schwarz, & S. Sudman (Eds.), Social information processing and survey methodology. New York: Springer Verlag.
Strack, F., Martin, L. L., & Schwarz, N. (1988). Priming and communication: The social determinants of information use in judgments of life-satisfaction. European Journal of Social Psychology.
Strack, F., & Schwarz, N. (1992). Communicative influences in standardized question situations: The case of implicit collaboration. In K. Fiedler & G. Semin (Eds.), Language, interaction and social cognition. Beverly Hills: Sage.
Strack, F., Schwarz, N., & Wänke, M. (1991). Semantic and pragmatic aspects of context effects in social and psychological research. Social Cognition, 9.
Sudman, S., & Bradburn, N. M. (1974). Response effects in surveys: A review and synthesis. Chicago: Aldine.
Sudman, S., Bradburn, N., & Schwarz, N. (in press). Applications of cognitive science to survey methodology. San Francisco, CA: Jossey-Bass.
Tanur, J. M. (Ed.) (1992). Questions about questions. New York: Russell Sage.
Tourangeau, R. (1984). Cognitive science and survey methods: A cognitive perspective. In T. Jabine, M. Straf, J. Tanur, & R. Tourangeau (Eds.), Cognitive aspects of survey methodology: Building a bridge between disciplines. Washington, DC: National Academy Press.
Tourangeau, R. (1987). Attitude measurement: A cognitive perspective. In H. J. Hippler, N. Schwarz, & S. Sudman (Eds.), Social information processing and survey methodology. New York: Springer Verlag.
Tourangeau, R. (1992). Attitudes as memory structures: Belief sampling and context effects. In N. Schwarz & S. Sudman (Eds.), Context effects in social and psychological research. New York: Springer Verlag.
Tourangeau, R., & Rasinski, K. A. (1988). Cognitive processes underlying context effects in attitude measurement. Psychological Bulletin.
Woll, S. B., Weeks, D. G., Fraps, C. L., Pendergrass, J., & Vanderplas, M. A. (1980). Role of sentence context in the encoding of trait descriptors. Journal of Personality and Social Psychology.
