ANSWERS: Research Methods


Advice: Most of these answers will fit in the boxes if writing is small, and students can use continuation sheets wherever necessary. Please note that they are not definitive answers, as the aim is for students to be able to talk around the main points and expand on them wherever possible. This will provide cues to aid memory in revision, and will provide them with the necessary content for AO1 and essay questions. Thus, whilst these answers may be useful to guide students, it is optimal that students write in their own words and practise précis, the skill of writing concisely.

Contents

Research Methods (pages 172 to 173)
Aims and Hypotheses (pages 174 to 175)
Variables (page 176)
Experimental Research Designs (page 177)
Non-experimental Research Designs (page 178)
Factors Associated with Research Design (pages 179 to 180)
Reliability and Validity (pages 181 to 182)
Sampling (page 183)
Qualitative Analysis of Data (page 184)
Quantitative Analysis of Data (pages 185 to 186)
Graphs and Charts (pages 187 to 188)
Research Methods Crib Sheets (pages 193 to 195)

Research Methods (pages 172 to 173)

Page 172

Fill in the blanks: Research methods take either a quantitative or qualitative approach, which depends on whether the data collected is numerical or non-numerical. Thus, quantitative = numbers and qualitative = words. Quantitative methods are concerned with objective measurement and so try to quantify and describe behaviour. In contrast, qualitative methods are concerned with gaining in-depth data and so try to establish valid (true) explanations for behaviour. All methods can be used in a scientific or non-scientific way, so do not make the mistake of seeing quantitative as the former and qualitative as the latter. Both approaches have strengths and weaknesses and so should be seen as equally valuable. It is optimal to combine the approaches, and this is called triangulation.

Advantages and disadvantages

Laboratory experiments

Advantages:
1. The highly controlled environment of the laboratory, in particular the direct manipulation of the IV by the experimenter, enables cause and effect to be established. Causal relationships can be identified because, of all the experimental methods, this one provides the most confidence that the IV has caused the effect on the DV.
2. Laboratory experiments take the traditional scientific approach, and the strength of this is that they are objective. They involve precise measurements and so are not as subject to researcher bias as less objective methods.

Disadvantages:
1. The laboratory is an artificial environment and consequently the research lacks mundane realism, i.e., it is not like real life. This means the findings may not generalise to settings other than the laboratory, and so the research lacks ecological validity.
2. Laboratory experiments are reductionist as they focus on only two variables, when in real life there are usually many interacting variables and multiple causes and effects involved in behaviour. Therefore, the laboratory experiment is oversimplified.

Field experiments

Advantages:
1. The field experiment takes place in a natural setting and so usually has greater mundane realism than laboratory experiments, and consequently may have greater generalisability to real life and so high ecological validity.
2. There is control over the IV and so cause and effect can be established to some extent, but not necessarily, due to the disadvantage of lack of control.

Disadvantages:
1. There is less control in a field experiment, which means confounding variables may be causing the effect on the DV rather than the IV. This means internal validity is lower and it is difficult to infer cause and effect.
2. Most field experiments cannot involve informed consent, right to withdraw, or debriefing, and so the ethical implications are a weakness.

Quasi-experiments

Advantages:
1. A quasi-experiment enables us to research behaviours that could not otherwise be investigated experimentally because it involves a naturally occurring IV. This means it can be used to investigate phenomena that would not be practical or ethical to manipulate in a laboratory or field experiment, where the IV is controlled.
2. The experimental environment is controlled by the experimenter, which enables better control of confounding variables and greater confidence that the IV has been isolated.

Disadvantages:
1. Cause and effect can only be inferred when the experimenter directly manipulates the IV, and so in a quasi-experiment only an association can be identified, which limits the conclusiveness of the findings.
2. Quasi-experiments are reductionist as they focus on only two variables, when in real life there are usually many interacting variables and multiple causes and effects involved in behaviour. Thus, the quasi-experiment is oversimplified.

Natural experiments

Advantages:
1. A natural experiment enables us to research behaviours that could not otherwise be investigated experimentally because it involves a naturally occurring IV. This means it can be used to investigate phenomena that would not be practical or ethical to manipulate in a laboratory or field experiment, where the IV is controlled.
2. The natural experiment takes place in a natural setting and so usually has greater mundane realism than the controlled environments of laboratory and quasi-experiments, and consequently may have greater generalisability to real life and so high ecological validity.

Disadvantages:
1. Cause and effect can only be inferred when the experimenter directly manipulates the IV, and so in a natural experiment only an association can be identified, which limits the conclusiveness of the findings.
2. There is less control in a natural experiment, which means confounding variables may be causing the effect on the DV rather than the IV. This means internal validity is lower, so findings can be difficult to interpret and it may not be possible to infer associations.

Page 173

Correlational analysis

Advantages:
1. Correlational analysis shows the direction and strength of relationships, and so its greatest use is prediction. One variable can be predicted from the other, e.g., A level passes from GCSE grades.
2. It is a useful method to use when manipulation of the variables is impossible, and thus a great advantage is that it can be used when an experiment cannot.

Disadvantages:
1. Cause and effect cannot be established because the variables are not directly manipulated, and consequently only an association can be identified. This means the findings are descriptive rather than explanatory, as they describe the relationship rather than explaining the effect of one variable on the other.
2. Only two variables are investigated, but other factors may be involved that were not known of or were not accounted for in the research. This means the inferred association would lack validity.

Naturalistic observation

Advantages:
1. Naturalistic observation involves looking at behaviour as it occurs naturally and so has greater mundane realism than more artificial methods. Consequently, it may have greater generalisability to real life and so high ecological validity.
2. Naturalistic observation is less biased by participant reactivity, e.g., demand characteristics, which means the behaviour observed is more genuine and so the research may have greater internal validity.

Disadvantages:
1. Observer bias may lead to imprecise recording or interpretation of findings. Consequently, the findings may lack reliability (consistency) and validity.
2. Observations describe behaviour but do not explain it.

Interviews

Advantages:
1. The interview can yield rich, detailed data, which has high validity because it reveals more about how the participant makes their experiences meaningful.
2. The interview can be more flexible, as the more unstructured interviews can be participant-led rather than researcher-led.

Disadvantages:
1. Interviewer bias is a weakness because question setting is subjective and data analysis is vulnerable to misinterpretation, either deliberate or unconscious. The researcher may be drawn to data that corroborates the research hypothesis and may disregard data that doesn't, and so validity will be reduced.

2. Participant reactivity is a problem, as answers may be biased by evaluation apprehension and social desirability, which means validity would be low, as the answers would lack truth.

Questionnaire surveys

Advantages:
1. The questionnaire is very flexible as open and closed questions can be used; thus both quantitative and qualitative data can be gathered, and consequently a wide range of phenomena can be investigated.
2. On a practical level they are quick and economical to conduct, and consequently a large sample can be obtained.

Disadvantages:
1. Researcher bias in question setting, implementation, or analysis can reduce validity, as the researcher may be drawn to data that corroborates the research hypothesis and may disregard data that doesn't.
2. Participant reactivity is a problem, as answers may be biased by evaluation apprehension and social desirability, which means validity would be low, as the answers would lack truth.

Aims and Hypotheses (pages 174 to 175)

Page 174

Hypothesis: A specific testable statement that predicts the expected outcome of the study.

Experimental/alternative hypothesis

Fill in the blanks: An experimental hypothesis predicts a difference between two conditions.
1. To investigate the effect of alcohol on perceived attractiveness of the opposite sex (field experiment).
3. To investigate a gender difference in aggressive behaviour (natural experiment).

Non-experimental hypothesis

Fill in the blanks: Non-experimental research, e.g., interviews and observations, may not be analysed quantitatively and so will not predict a difference or association. Instead, the hypothesis will predict what the researcher expects to occur or the themes (patterns of response) the researcher expects to discover.
2. To investigate if women self-disclose more than men in a survey (questionnaire).
5. To investigate if chimps' behaviour does evidence a theory of mind (observation).

Correlational hypothesis

Fill in the blanks: A correlational hypothesis predicts an association or relationship between two variables. It is a special kind of non-experimental hypothesis.
4. To investigate the association between personality and self-esteem.
6. To investigate the relationship between stress and illness.

Directional and non-directional hypotheses

Page 175

Directional:
1. Participants' ratings of the attractiveness of the opposite sex will be higher in the alcohol condition than in the non-alcohol condition.
2. The amount of self-disclosure in a survey will be higher in female participants than male participants.
3. The number of observed aggressive behaviours will be higher in male participants than female participants.

Non-directional:
4. There will be a correlation between self-report measures of personality and self-esteem.
5. Not applicable, as the hypothesis must be directional, i.e., chimps' behaviour will evidence a theory of mind.
6. There will be a correlation between self-report measures of stress and illness.

Null hypotheses

Experimental/alternative

Fill in the blanks: Predicts no difference between the two conditions. The IV has no effect on the DV, e.g., there will be no significant difference between X and Y, and any differences that do exist are due to chance and random variables.

Correlational

Fill in the blanks: Predicts no relationship between the two variables, e.g., there is no correlation between X and Y, and any association that exists is due to chance or random variables.

Fill in the blanks: Analysis of the results will reveal whether a significant difference or relationship does exist. If the results prove significant, the experimental or correlational hypothesis is accepted and the null hypothesis is rejected.

Finally, write a null hypothesis for each of the examples:
1. There is no difference between the alcohol and non-alcohol conditions in ratings of the attractiveness of the opposite sex, and any differences that do occur are due to chance and/or random variables.
2. There is no difference between male and female participants in the amount of self-disclosure in a survey, and any differences that do occur are due to chance and/or random variables.

3. There is no difference between male and female participants in the number of aggressive behaviours exhibited, and any differences that do occur are due to chance and/or random variables.
4. There is not a correlation between self-report measures of personality and self-esteem, and any association that does occur is due to chance and/or random variables.
5. Chimps' behaviour does not evidence a theory of mind, and any behaviour that does support this is due to chance.
6. There is not a correlation between self-report measures of stress and illness, and any association that does occur is due to chance and/or random variables.

Variables (page 176)

Page 176

1. Hypothesis: Experimental, non-directional; Variables: IV = gender, DV = percentage of conformity
2. Hypothesis: Correlational, directional; Variables: V1 = stress, V2 = anxiety
3. Hypothesis: Experimental, non-directional; Variables: IV = culture, DV = attachment type
4. Hypothesis: Experimental, directional; Variables: IV = type of processing, DV = number of words recalled
5. Hypothesis: Correlational, non-directional; Variables: V1 = number of hours' sleep, V2 = mental alertness
6. Hypothesis: Experimental, directional; Variables: IV = personality type, DV = obedience rating
7. Hypothesis: Correlational, non-directional; Variables: V1 = number of life events experienced, V2 = vulnerability to illness
8. Hypothesis: Correlational, non-directional; Variables: V1 = physical attractiveness of one member, V2 = physical attractiveness of the other member

Experimental Research Designs (page 177)

Page 177

Fill in the blanks: The three designs aim to control participant variation, i.e., individual differences between the participants, which could interfere with the effect of the IV on the DV. All three designs share a common characteristic of experiments: two conditions, and the IV is varied across these. This usually involves a control condition, which is not exposed to the IV and so acts as a baseline, and an experimental condition, which is influenced by the IV and so shows the effect of this in comparison to the control condition.

Independent design

Strengths

Avoids order effects: The participants only experience one condition and so are less likely to guess the demand characteristics, and they are also less likely to experience other order effects such as boredom, fatigue, and the practice effect.

Random allocation: This means every participant has an equal chance of being allocated to either condition. This is a strength because it reduces bias in allocation and minimises participant variation.

Weaknesses

Participant variables: There may be consistent individual differences between the two groups of participants. For example, if one group was more alert than the other, this would systematically distort results on a quick response test. Random allocation counters this weakness.

Number of participants: You need more participants than you do with a repeated measures design, as there are two groups instead of one.

Matched participants design

Strengths

Avoids order effects: The participants only experience one condition and so are less likely to guess the demand characteristics, and they are also less likely to experience other order effects such as boredom, fatigue, and the practice effect.

Minimises participant variables: Participants are matched on important variables and so there is less participant variation (individual differences) between participants.

Weaknesses

Does not eliminate participant variables: It is impossible to control for all individual differences, and so participant variation is minimised, not eliminated.

Difficult to achieve a good match: It can be difficult to find participants who match on a number of key variables. A large pool of participants is needed to draw from, making this time consuming and less practical than the other designs.

Repeated measures design

Strengths

Minimises participant variables: As the same participants are in each condition, participant variation is reduced, but it is not eliminated, as there will still be some individual differences between the participants.

Fewer participants are needed: As there is only one group, fewer participants are needed.

Weaknesses

Order effects: The practice effect, fatigue, or boredom may all affect the second condition, and so differences may be due to this rather than the action of the IV, which constrains the internal validity of the research. Counterbalancing is used to address this.

Demand characteristics are easier to guess: As the participants experience two conditions, they are more likely to guess the purpose of the study, and so demand characteristics may reduce the internal validity of the research.

Non-experimental Research Designs (page 178)

Page 178

Naturalistic observation

Overt or covert observation: The researcher needs to decide whether to conceal themselves (covert) or not (overt), which depends on what is being investigated.

Participant or non-participant observation: Participant observation is when the researcher becomes a member of the group they are observing in order to observe more natural behaviour, e.g., John McIntyre's report on football hooligans. However, participant observation is not always possible, and for some investigations non-participant observation may be more practical and ethical, e.g., when investigating alcohol or drug abuse.

Event, time, and point sampling: To avoid data overload these different forms of sampling are used. Event sampling is when only relevant events or behaviours are recorded. Time sampling is when observations are recorded only during specific time periods. Point sampling is when one individual is observed and their current behaviour categorised, and then a second individual, and so on.

Recording the data, e.g., frequencies, observation criteria, notes, video or audio recordings: There are many ways to record the data, some of which involve interpretation in order to categorise the behaviour into frequencies or observation criteria, e.g., moving forward could be recorded as an action or interpreted as an aggressive behaviour, depending on what is being investigated.

Ethical considerations: Naturalistic observations often cannot involve informed consent, right to withdraw, or debriefing, and so the ethical implications are a weakness.

Interviews

Structured, semi-structured, or unstructured: A structured interview has a fixed format of questions, which means the same questions are asked in the same order for each participant. A semi-structured interview also has the same questions, but the order is not fixed, which means they can be selected to suit the flow of the interview and so encourage the participant to be at ease and more forthcoming. An unstructured interview is participant-led, as the participants' answers direct the questions. This is the format taken in the clinical interview.

Constructing good questions: This is complex because it is important that the questions are clear and unambiguous; if they communicate different meanings to different participants, the answers will not be comparable. Also, they should be free from bias and subjectivity, to avoid leading the participant.

Ethical considerations: Ethical issues include abuse of power, particularly in clinical interviews. Deception, informed consent, and protection of participants are also key issues.

Questionnaire surveys

Closed and open questions: Closed questions involve a fixed response, which the participant must choose from, e.g., a Likert scale. This is easier to score and analyse. Open questions allow the participants to answer freely and so qualitative analysis is needed, which can be more difficult and time consuming, but can also yield more meaningful data.

Ambiguity and bias: Ambiguity must be avoided, as data is of little value if answers cannot be compared, which they can't be if different participants have interpreted the question differently. Biased questions must also be avoided, as they can lead the witness or provoke reactive answers, which are not valid if they are not true.

Attitude scale construction: A Likert scale is the usual way to do this and involves the participant giving self-report ratings on a 5-point scale to indicate their level of agreement/disagreement with whatever was communicated in the question. For positive statements such as "It is important to get 8 hours' sleep per night", strongly disagree scores 1, through to strongly agree, which scores 5. For negative statements such as "It is not important to maintain a regular sleep pattern", the scoring is reversed, with strongly disagree scored as 5 and strongly agree scored as 1. This is so that the scores on the questionnaire can be related to each other.

Ethical considerations: Deception, informed consent, protection of participants, right to withdraw, debriefing, and confidentiality can all be issues.
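To make the reverse-scoring rule concrete, here is a minimal Python sketch. The item wordings and ratings come from the example statements above; the function name and data layout are illustrative assumptions, not part of the workbook.

```python
# Minimal sketch of Likert scoring with reverse-scored items (illustrative only).
# A 5-point scale: 1 = strongly disagree ... 5 = strongly agree.
# Negatively worded items are reversed so that a high total always means the
# same attitude direction across the whole questionnaire.

def score_item(rating: int, reverse: bool = False) -> int:
    """Return the scored value of one response on a 1-5 Likert scale."""
    if not 1 <= rating <= 5:
        raise ValueError("Likert rating must be between 1 and 5")
    return 6 - rating if reverse else rating

# Hypothetical responses to the two example statements:
responses = [
    ("It is important to get 8 hours' sleep per night", 5, False),          # positive item
    ("It is not important to maintain a regular sleep pattern", 1, True),   # negative item, reversed
]

total = sum(score_item(rating, reverse) for _, rating, reverse in responses)
print(total)  # 5 + (6 - 1) = 10: both answers express the same pro-sleep attitude
```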

Factors Associated with Research Design (pages 179 to 180)

Page 179

Operationalisation

Advantages of operationalisation: It provides a clear and objective definition of the variables and so enables the hypothesis to be tested empirically.

Limitations of operationalisation: Operational definitions are circular, and as the accuracy of the operationalisation is often disputed, it can lack validity. Also, as the operational definition must be precise, it often only covers part of the meaning of the variable or concept, and so it may be oversimplistic and reductionist.

Pilot study

Test materials: The pilot study allows for a trial run of the materials, and so questions can be checked for clarity and ambiguity. This means that adjustments can be made if there are problems before the main study. This saves time and money, as findings would be valueless if there had been ambiguity.

Test procedure: The procedure can also be checked for design errors and timings. This also ascertains whether it is replicable, which is essential for testing reliability.

Control of experimental designs (the weaknesses of the designs are potential confounding variables)

Independent design

Participant variables are the weakness of this design and these are controlled by large samples and random allocation: Large samples are used to control for individual differences because they are more likely to be representative and to have a more even spread of any differences. Random allocation is when every participant has an equal chance of being allocated to either condition, and this controls for individual differences because it ensures that they are randomly distributed, which increases internal validity. It minimises bias in the allocation process, as such bias can lead to participants with certain characteristics being favoured for one condition over another, which would distort the findings.

Repeated measures design

Order effects are the weakness of this design and these are controlled by counterbalancing, e.g., ABBA: Order effects are the weakness of being in two conditions and can systematically affect the second condition. To control for this, counterbalancing is used, where the group is split in two: half of the participants do condition A first, followed by condition B, and the second group do vice versa, and so this is known as the ABBA design. Consequently, any order effects are balanced out, so any differences are more likely to be due to the action of the IV and internal validity is higher.

Page 180

Further confounding variables and bias

Situational variables: Situational variables (e.g., noise, temperature, and time of day) and participant variables are the two most common forms of systematic and unsystematic error. Systematic error occurs when the experience of all the participants in one condition is different to those in the other condition; for example, one group could complete a test in quiet conditions and the other in noisy conditions. Unsystematic error, or random bias, occurs when individual participants' experiences differ.

Distraction and confusion: Distractions in the environment or confusion in the procedure or materials can also be a source of error and so confound the research. It is difficult to know if changes in the DV are due to confounding variables or the action of the IV, and so internal validity is reduced.

The relationship between the researcher and participant

Demand characteristics and participant reactivity, e.g., evaluation apprehension, social desirability bias, the Hawthorne effect: Demand characteristics are cues in the experiment which suggest the purpose of the research and can lead to the participants behaving as they think the experimenter wants rather than how they would behave naturally. This is a form of participant reactivity, as is evaluation apprehension, which is anxiety about being assessed and judged. This can lead to the social desirability effect, which involves people answering in a way that presents them in a good light.

Investigator effects, e.g., experimenter expectancy: Investigator effects include researcher bias in the design, implementation, analysis, and/or interpretation of research. Experimenter expectancy is when the experimenter's expectations have an effect on the research findings, e.g., giving away the demand characteristics. This can be systematic, e.g., if one condition is cued by the experimenter to engineer the results that were predicted, or unsystematic (random bias).

Control of confounding variables and bias

Hold confounding variables such as noise, temperature, and time of day constant: Standardising the environment by keeping these variables constant can control situational variables. Similarly, personal variables also need to be standardised, but this can be difficult to do.
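As a minimal sketch of the two allocation controls described on pages 179 and 180 (random allocation for an independent design, and the group-split ABBA counterbalancing for a repeated measures design), the Python below follows the workbook's descriptions; the participant labels and function names are illustrative assumptions.

```python
import random

# Random allocation (independent design): every participant has an equal
# chance of ending up in either condition.
def randomly_allocate(participants):
    shuffled = participants[:]                 # copy so the original list is untouched
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]    # (condition A group, condition B group)

# Counterbalancing (repeated measures design): split the group in two; half do
# the conditions in the order A then B, the other half B then A, so any order
# effects are balanced out across the sample (the "ABBA" arrangement).
def counterbalance(participants):
    half = len(participants) // 2
    return (
        [(p, ("A", "B")) for p in participants[:half]],
        [(p, ("B", "A")) for p in participants[half:]],
    )

people = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]   # hypothetical sample
group_a, group_b = randomly_allocate(people)
ab_order, ba_order = counterbalance(people)
print(group_a, group_b)
print(ab_order, ba_order)
```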

Standardised instructions and procedures control for distraction and confusion, and for participant reactivity and investigator effects. They also ensure research is replicable: Standardised instructions control what is said to the participants, and the standardised procedure also ensures uniformity of experience for the participants. This controls for confounding variables, e.g., distraction and confusion, and avoids some participants being treated more favourably than others. This means that the conditions are comparable.

Control of participant reactivity and researcher effects

Single-blind procedure: This controls for participant reactivity, as the research hypothesis is withheld from the participants, so they are not aware of which condition they are in. This reduces the chance of demand characteristics confounding the research.

Double-blind procedure: This controls for participant reactivity and experimenter expectancy, as this procedure involves a research assistant collecting the data without any knowledge of the research hypothesis, and, as in the single-blind procedure, it is withheld from the participants as well. Thus, neither the research assistant gathering the data nor the participants know the research hypothesis, nor are they aware of the conditions. Thus, experimenter expectancy and participant reactivity are controlled for.

Reliability and Validity (pages 181 to 182)

Page 181

Reliability

Fill in the blanks: Reliability is based on consistency. If the research produces the same results every time it is carried out then it is reliable.

Internal reliability = consistency within the method

Measuring instruments: A ruler or clock gives the same measurements when tested on different occasions, and there is consistency within the method of measurement, as the difference between 0cm and 5cm is the same as that between 5cm and 10cm. However, Likert rating scales lack such consistency, as the difference between 1 and 2 on the scale may not be perceived to be the same as the difference between 4 and 5. This measure is subjective, compared to the ruler, which is objective, and so may lack reliability. Unreliable measures reduce internal validity.

Reliability of observations: Two or more observers are usually used to control for subjectivity, i.e., personal bias in the observations. Problems with reliability arise because it can be difficult to categorise complex behaviour into observation criteria.

External reliability = consistency between uses of the method

Reliability of psychological tests: To test the consistency of psychological tests over time, the test must be taken once and then again on a later occasion. The time between each test must be long enough to prevent a practice effect but not so long that the measures may have changed in some way.

Fill in the blanks: Internal and external reliability can be checked using correlational techniques.

Techniques to check internal reliability

Split-half technique: This is used to establish the internal reliability of psychological tests. Scores on half of the test items (e.g., the even-numbered items) are correlated with scores on the other half (e.g., the odd-numbered items) to see how similar they are; a strong positive correlation would support internal consistency and thus reliability.

Inter-rater reliability (or inter-judge reliability): Inter-rater reliability is used to test the accuracy of the observations. If the same behaviour is rated the same by two different observers then the observations are reliable. Observers must be well trained and have precise, clear observation criteria. A number of measures are taken and correlated to test for reliability.

Techniques to check external reliability

Test-retest reliability: This involves testing once and then again at a later date, i.e., replication of the original research. Meta-analyses draw on this when they compare the findings from different studies that have tested the same hypothesis, e.g., Milgram's study of obedience. Consistency, and thus reliability, indicates validity.
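As an illustration of checking reliability with a correlational technique, here is a minimal Python sketch that correlates scores on the odd- and even-numbered halves of a test. The scores and the helper function are made up for illustration; the workbook does not specify any particular data.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical split-half data: each participant's total on the odd-numbered
# items and on the even-numbered items of the same psychological test.
odd_half_scores = [12, 15, 9, 20, 17, 11]
even_half_scores = [13, 14, 10, 19, 18, 12]

r = pearson_r(odd_half_scores, even_half_scores)
print(round(r, 2))  # a value close to +1 suggests good internal reliability
```

The same correlation could equally be computed between two observers' ratings (inter-rater reliability) or between scores at test and retest (external reliability).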

Randomisation: Bias in allocation due do a lack of randomisation may systematically distort the results and so reduce internal validity, for example, if participants in one condition were picked because they were expected to perform well on a memory test. Demand characteristics: This can lead to participant reactivity and behaviour, which is not the participants natural behaviour and so internal validity is reduced. Participant reactivity: Evaluation apprehension and social desirability can also lead to behaviour that is not the participants natural behaviour. Good research design increases internal validity: Accounting for the above in the research design will increase internal validity. Checking internal validity Replication: If internal validity is high then replication should be possible, if low it will be difficult. Thus, validity and reliability are interlinked if the research has truth (validity) it should be consistent (reliability) and so replication is possible, and reliability is also an indicator of validity. External validity = generalisability to other settings (ecological) and populations Coolican (1994) identifies four main aspects to external validity: Populations: Findings have population validity if they generalise to other populations. Most importantly it must be determined if the findings generalise to the target population from which the sample was drawn. Population validity is questionable if a restricted sample was used, e.g., a particular age group, as the findings are less likely to generalise to other age groups. Locations: Findings have ecological validity if they generalise to other settings. Of particular concern is whether they generalise to real-life situations. A lack of mundane realism is a key weakness of artificial research and this often limits ecological validity because the findings are less likely to generalise to real-life settings. Measures or constructs: Findings have construct validity if the measures generalise to other measures of the same variable, e.g., does a measure of recall of word lists generalise to everyday memory? Times: Findings have temporal validity if they generalise to other time periods, e.g., do findings from the past generalise to the current context? Or do current findings generalise to the past or future? This is difficult to achieve as to some extent all research is era-dependent and contextdependent. Checking external validity Meta-analyses: A meta-analysis involves the comparison of findings from many studies that have investigated the same hypothesis. If findings are consistent (reliable) across populations, locations, and periods in time then this indicates validity, e.g., Van IJzendoorn and Kroonenberg s (1988) meta-analysis of the cross-cultural Strange Situation studies. Thus, if it has validity it is likely to replicate, and reliability in the meta-analysis is used as an indicator of validity. So it would seem that you rarely have one without the other, apart from consistently wrong findings! 15

Sampling (page 183)

Page 183

Fill in the blanks: Research is conducted on people, and the group of people that the researcher is interested in is called the target population. However, it is usually not possible to use all of the people from here, and so a sample must be selected. Those selected are called participants for research purposes. Thus, research is conducted on a sample, but the researcher hopes that the findings will be true (valid) for the target population. For this to happen the sample must be representative of the target population. If the sample is representative, then the findings can be generalised back to the target population. If not, the findings lack population validity. Therefore, the key issue is the generalisability of the sample, and this is based on two key factors:
Type of sampling.
Size of the sample.

Random sampling

Random methods (every participant has an equal chance of being selected): These include methods such as selecting names out of a hat, or everybody in the population being assigned a number and a computer or random number table being used to generate the numbers that will be selected for the sample.

Evaluation: It can be difficult to obtain a random sample because of problems in identifying all members of a population. Once identified, it may not be possible to contact all potential participants. It is expensive and time-consuming, given that it doesn't actually produce a truly representative sample, as this is an impossibility.

Opportunity sampling

Availability: This involves selecting anybody who is available at the time of the study to take part. This is a popular method: as much as 90% of the research in psychology textbooks favours such a method, because participants were mainly undergraduates at American universities who had been selected using opportunity sampling.

Evaluation: This is a weak form of sampling because opportunity samples are usually drawn from a restricted population, as the American undergraduates illustrate, and so are not very representative. Also, although anybody who is available can be selected, this doesn't always happen in practice, as the researcher may approach people who they think look friendly, or less intimidating, or because they find them attractive. Thus, opportunity sampling is inherently biased.
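A minimal sketch of the computer-drawn-numbers version of random sampling described above; the population size and sample size are arbitrary illustrations, not figures from the workbook.

```python
import random

# Every member of the target population is assigned a number, and the computer
# draws the numbers for the sample so that each member has an equal chance of
# being selected (random sampling).
population = list(range(1, 101))         # hypothetical target population of 100 people, numbered 1-100
sample = random.sample(population, 10)   # draw 10 distinct numbers at random
print(sorted(sample))
```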

Sample size

There is no ideal number of participants, but a number of factors must be considered:
It is expensive and time consuming to use large samples of hundreds of participants.
If samples are too small (less than 10 in each condition) this reduces the chance of obtaining a meaningful effect.
Sampling bias is likely to be greater with a smaller sample than with larger ones.
The size of the population is relevant. If a relatively large sample is drawn from a small population then it will be very biased.

Golden rule: The smaller the likely effect being studied, the larger the sample size needed to demonstrate it (15 participants per condition is a good rule of thumb).

Qualitative Analysis of Data (page 184)

Page 184

Data can take many forms:
Written records, e.g., notes or transcripts.
Audio or video recordings.
Direct quotations from participants.

Principles of qualitative analysis

Gather data: Data is gathered using non-experimental methods, which include naturalistic observation, interview, questionnaire, and case study.

Consider categories suggested by participants: This avoids researcher bias, which may happen if the researcher constructed the categories. The researcher must note the categories spontaneously used by the participants, arrange items into groups, and then compare these groupings with the categories suggested by the participants themselves. The researcher then forms the final set of categories, but these may change if new information comes to light.

Analyse the meanings, attitudes, and interpretations, e.g., DISCOURSE ANALYSIS: Written transcripts are made and then the researcher looks closely at the words people use and the meanings behind them. It is highly subjective and the researcher needs to have excellent interpretative skills. The researcher will look for recurrent themes and patterns in the data, which may or may not fit with the previously constructed categories.

Consider the research hypothesis and possibly how it has changed as a result of the investigation: At the end of the study the researcher will consider how their hypothesis changed during the course of the investigation.

Making qualitative data quantitative, e.g., CONTENT ANALYSIS: The researcher may quantify the data by counting the number of items that fall into each category. This is done to summarise the qualitative data and usually accompanies, rather than replaces, the more in-depth qualitative analysis.

Evaluation: Qualitative analysis considers the context and the participants as individuals, and so there is more depth to the findings. But it is highly subjective, as the analysis and interpretations are very vulnerable to researcher bias. Consequently, qualitative analysis used to be considered less scientific than the more objective quantitative analysis. However, this is not the case, as all research can be scientific if implemented correctly. Qualitative analysis is difficult to replicate and so lacks reliability (consistency), and bias may also reduce validity. However, the data is more meaningful and so it often has more real-life validity. It provides explanations, whereas quantitative analysis is mainly descriptive.

Quantitative Analysis of Data (pages 185 to 186)

Page 185

Level of measurement

Nominal: Categories or frequencies, e.g., gender.
Ordinal: Data that can be placed in rank order, e.g., rating scales.
Interval: Data that has fixed intervals, e.g., temperature.
Ratio: Data that has an absolute zero, e.g., height.

Measures of central tendency

MODE: Calculate the mode in the example below: 6
MEDIAN: Calculate the median in the example below: 6
MEAN: Calculate the mean in the example below: 5.9

Page 186

Measures of dispersion

Variation ratio

Calculate the variation ratio in the example below: (11 × 100) ÷ 15 = 73.33%

Range

Calculate the range in the example below: 10 (top value) − 1 (bottom value) + 1 = 10

Interquartile range

Calculate the interquartile range in the example below: 8.5 − 5 = 3.5

You need to calculate the mean, 7, and then take as close as possible to 50% of the scores above and below this, i.e., the 6 scores around the mean. You then calculate the mean value of the scores above and below the upper (8 and 9 = 8.5) and lower (6 and 4 = 5) boundaries and subtract the lower from the upper to get the interquartile range.

Standard deviation

Calculate the standard deviation in the example below:

1. Mean = 6.4

2. Deviations from the mean (d is the size of the deviation, ignoring sign):

score (x)   mean    d      d²
1           6.4     5.4    29.16
3           6.4     3.4    11.56
3           6.4     3.4    11.56
4           6.4     2.4    5.76
6           6.4     0.4    0.16
6           6.4     0.4    0.16
7           6.4     0.6    0.36
7           6.4     0.6    0.36
7           6.4     0.6    0.36
7           6.4     0.6    0.36
8           6.4     1.6    2.56
9           6.4     2.6    6.76
9           6.4     2.6    6.76
9           6.4     2.6    6.76
10          6.4     3.6    12.96

3. Total of d² = 95.6

4. Variance: s² = Σd² ÷ (N − 1) = 95.6 ÷ 14 = 6.8285714, where N = number of participants.

5. The square root of the variance is the standard deviation: SD = 2.61 (2 d.p.)
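Figures like these can be checked with Python's statistics module. The sketch below uses the 15 scores from the standard deviation worked example above; note that these are not the same scores as the earlier mode/median/mean and variation ratio examples, whose data sets are not reproduced here, so its output illustrates the formulas rather than those particular answers.

```python
import statistics
from collections import Counter

# The 15 scores from the standard deviation worked example.
scores = [1, 3, 3, 4, 6, 6, 7, 7, 7, 7, 8, 9, 9, 9, 10]

mode = statistics.mode(scores)        # most frequent score: 7
median = statistics.median(scores)    # middle score when ranked: 7
mean = statistics.mean(scores)        # 96 / 15 = 6.4

# Variation ratio: the percentage of scores that are NOT the modal value.
non_modal = len(scores) - Counter(scores)[mode]
variation_ratio = non_modal * 100 / len(scores)   # 11 * 100 / 15 = 73.33%

# Range using the workbook's inclusive convention: (top value - bottom value) + 1.
value_range = max(scores) - min(scores) + 1       # (10 - 1) + 1 = 10

variance = statistics.variance(scores)   # sum of d squared / (N - 1) = 95.6 / 14
sd = statistics.stdev(scores)            # square root of the variance = 2.61 (2 d.p.)

print(mode, median, mean, round(variation_ratio, 2), value_range, round(sd, 2))
```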

Graphs and Charts (pages 187 to 188)

Page 187

Sketch an example of a frequency polygon: See Psychology for AS Level, page 286.

Sketch an example of a histogram: See Psychology for AS Level, page 287.

Page 188

Sketch an example of a bar chart: See Psychology for AS Level, page 287.

Sketch an example of a scattergraph for positive correlation, no correlation, and negative correlation: See Psychology for AS Level, page 288.

Research Methods Crib Sheets (pages 193 to 195)

The crib sheet answers are a condensed version of the answers given throughout this section, so see above.