Meta-analysis in sport and exercise research: Review, recent developments, and recommendations
European Journal of Sport Science, June 2006; 6(2)

ORIGINAL ARTICLE

M.S. HAGGER
University of Nottingham

Abstract
The purpose of this article is to provide a general overview of the principles and practice of conducting quantitative psychometric meta-analytic reviews in the sport and exercise sciences and to highlight some of the recent developments and recommendations from researchers regarding the conduct and validity of meta-analytic methods. After outlining the historical context, the general principles involved in a quantitative cumulation of research findings from empirical studies are reviewed. Subsequently, recent controversies and issues surrounding the use of meta-analysis are reviewed, with examples provided from the sport and exercise psychology literature. Specifically, the basis for and selection of meta-analytic models (use of fixed vs. random effects models), the treatment of data from theories that explicitly demand testing the effects of multiple independent variables on a dependent variable (use of multiple regression), and how to treat studies that contain multiple tests of a given effect (use of averaging and structural equation modeling methods) are covered. Recommendations are provided for researchers conducting meta-analytic studies based on these issues.

Keywords: Quantitative cumulative research, research synthesis, psychology, effect size

No compendium of research synthesis would be complete without visiting the contribution that meta-analysis has made to the understanding of findings in the sport and exercise sciences. Meta-analysis was a term first coined by Glass (1976) to refer to the emerging philosophy of cumulating research evidence in the scientific literature.
The term later became synonymous with the set of statistical procedures currently used in many fields of science and social science to objectively assimilate and quantify the size of effects across a number of independent empirical studies while simultaneously eliminating inherent biases in the research, and these techniques are now considered the state-of-the-art procedure for the quantitative synthesis of research findings across studies. The aim of this article is to provide a historical view of meta-analysis, outline the important features and techniques of meta-analysis, highlight the key contribution that meta-analytic studies have made to understanding effects across studies, point out some limitations of the techniques, introduce some recent developments and issues in meta-analysis, and provide some recommendations for researchers on the use of meta-analysis to make sense of the research literature in the sport and exercise sciences. The review will focus predominantly on psychometric meta-analysis and will give examples from the sport and exercise psychology literature throughout. However, the issues and recommendations are equally applicable to all types of quantitative data that can be meta-analysed and to all disciplines in the sport and exercise sciences.

Meta-analysis: Historical perspective, definitions, and procedures

Despite the long heritage of meta-analysis identified by Biddle (see Biddle, this collection), until approximately 25 years ago the narrative literature review was the only credible means available to a researcher to evaluate the existence and nature of a given hypothesis in the literature.

Correspondence: Martin S. Hagger, School of Psychology, University of Nottingham, University Park, Nottingham, NG7 2RD, UK. msh@psychology.nottingham.ac.uk
ISSN print/ISSN online © 2006 European College of Sport Science

Such a hypothesis might be tested by a correlation between two
variables, or by testing the effect of an independent variable such as a psychological construct (e.g. self-esteem) on a dependent variable such as a measure of behavior (e.g. physical activity). The narrative review may qualitatively examine trends in published research that tests a hypothesized relationship and may provide a quantitative summary of findings across studies in the form of a simple vote count of the number of significant tests of the hypothesis. However, the vote-count procedure, while intuitively appealing, was criticized for focusing on statistical significance alone and not on the quality and representativeness of the research (Hunter, Schmidt, & Jackson, 1982). In the late 1970s and early 1980s, several researchers sought a means of cumulating research findings across studies that could address these limitations. The result was the development of the research synthesis techniques now referred to as meta-analysis: a set of parsimonious procedures that enable the synthesis and distillation of vast amounts of literature into meaningful summaries whilst simultaneously accounting for any extant biases in the literature (Glass, 1976; Hedges & Olkin, 1985; Hunter et al., 1982; Rosenthal & Rubin, 1982). Glass (1976) pioneered a technique that has come to be known as meta-analysis to resolve the inherent problems associated with the vote-count procedure. One problem with the vote-count procedure is that the resulting statistic (i.e. the ratio of significant to non-significant findings) is inherently biased by the limitations of each individual study. As Hunter et al. (1982) point out, it is not unusual to find apparent ambiguities in empirical tests of a given relationship or difference representing the effect of one variable on another; some tests are significant and others non-significant. This presents a considerable dilemma for the investigator in his or her attempt to resolve the nature of the effect.
Intuitively, one may suggest that characteristics of the sample or moderating variables are responsible for the inconsistency. This could indeed be the case. However, Glass and subsequent authors of meta-analytic techniques suggested that such observed inconsistencies may not be inconsistencies at all and may, in fact, be artefacts of the inherent biases or sources of error evident in any empirical study. Among the most common of these within-study artefacts are sampling error and measurement error. The key statistic or metric in meta-analytic research is the measure of effect size. Effect size represents either the strength of the hypothesized relationship between variables or the magnitude of the difference between the levels of two variables. These effect sizes can be expressed in raw-score or standardized forms. Standardized forms of these effect size metrics are often used because meta-analytic procedures demand a common metric when synthesizing results. The most frequently utilized standardized effect size metrics are the zero-order Pearson correlation coefficient (r), which measures the strength of a relationship between two variables, and Cohen's d, which represents the standardized difference between two means (Field, 2003). Of course, these effect sizes can also be expressed as raw scores, and sometimes this is more meaningful for individual studies because raw scores are often more accessible and interpretable since the original units of measurement are used (Morris & DeShon, 2003). There are also other, less popular effect size statistics, such as the intraclass correlation and explained variance. For reasons of parsimony and illustration, and because research in sport and exercise psychology has focused on Pearson's r and Cohen's d as measures of effect size, I will confine this review to these effect size metrics.
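Because meta-analytic procedures demand a common metric, r and d are routinely interconverted. As a minimal illustration (not taken from the article, and assuming equal group sizes), the standard conversion formulae can be sketched as:

```python
import math

def r_to_d(r: float) -> float:
    """Convert a Pearson r to Cohen's d, assuming equal group sizes."""
    return 2 * r / math.sqrt(1 - r ** 2)

def d_to_r(d: float) -> float:
    """Convert Cohen's d back to a Pearson r, assuming equal group sizes."""
    return d / math.sqrt(d ** 2 + 4)

# A "medium" correlation of .30 corresponds to a d of roughly 0.63,
# and the conversion round-trips back to the original r.
d = r_to_d(0.30)
r = d_to_r(d)
```

Note that these formulae assume equal group sizes; unequal groups require a correction factor.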
In statistical terms, standardized effect size metrics such as r and d are equivalent: each represents a standardized measure of the size of an effect in an empirical study. Indeed, statistical procedures for meta-analysis often convert or transform one effect size metric into another. Meta-analytic procedures therefore differ not in the type of effect size they adopt but in the procedures used to calculate the error associated with each effect and the means used to correct the measure for artefacts of bias, as shall be seen later. The general procedure in meta-analysis is first to identify the sample of studies that has tested the effect under consideration. While traditional empirical studies in sport and exercise science use the person as the unit of analysis, the individual study is the unit of analysis in meta-analysis, although the sample size in a meta-analysis can, and often does, exceed the actual number of studies in the analysis because some studies may yield more than one test of the effect. In some respects the identification of the sample of studies is the most challenging aspect of the meta-analysis, owing to the difficulties in tracking down the necessary data and identifying the relevant constructs that constitute a test of the hypothesized effect under scrutiny. This gives rise to some specific difficulties with the procedure, which shall be visited later. Once the sample has been identified, the effect sizes are each converted into a common effect size metric such as Pearson's r or Cohen's d and are then averaged across studies to produce a single averaged effect size. There are a number of approaches to the calculation of the averaged effect size for a given difference between two means or relationship between two variables, and to the associated measures of dispersion or spread of the averaged effect size, such as the standard deviation or the confidence interval. For
example, a researcher may want to calculate a standardized effect size from an experimental study that reports means for a dependent variable in an experimental (treatment) group and a control group. Glass's (1976) original approach was to calculate the difference between the two group means and divide it by the standard deviation of the control group to produce the standardized effect size metric. The control-group standard deviation was used because, theoretically, this should provide the best estimate, as it comes from a representative group of people who have not been affected by the treatment. However, Hunter and Schmidt (1990) suggest that the pooled within-group standard deviation is a better denominator because it is less subject to sampling error in small samples. These subtle differences have been the source of considerable debate in the meta-analysis literature, but most recent applications of meta-analysis have capitalized on findings from simulation studies and have adopted the procedures that minimize bias from artefacts like sampling and measurement error (Field, 2001). In the process of computing the average effect size across studies, the effect sizes for each study are often corrected for the artefacts related to inherent biases in the study. In all meta-analytic methods, the most common artefact of bias is the error attributed to the selection of the participants in the study, known as sampling error. However, some meta-analytic techniques also correct for errors due to variable measurement and the limits set on the variable measures, known as measurement error and range restriction, respectively. After sampling error, measurement error is the source of systematic bias that is most frequently corrected for in the extant literature. Generally, correction for measurement error in psychometric meta-analysis, and in the sport and exercise psychology domain, is achieved through reliability coefficients from psychometric questionnaires.
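The two denominator choices described above can be contrasted in a short sketch, using made-up summary statistics: Glass's (1976) control-group standardizer versus the pooled within-group standard deviation preferred by Hunter and Schmidt (1990).

```python
import math

def glass_delta(mean_t: float, mean_c: float, sd_c: float) -> float:
    """Glass's (1976) effect size: standardize by the control-group SD."""
    return (mean_t - mean_c) / sd_c

def pooled_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference using the pooled within-group SD,
    which is less subject to sampling error in small samples."""
    pooled_var = ((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2)
    return (mean_t - mean_c) / math.sqrt(pooled_var)

# Hypothetical treatment vs. control summary statistics
delta = glass_delta(25.0, 20.0, sd_c=10.0)                     # 0.50
d = pooled_d(25.0, 20.0, sd_t=8.0, sd_c=10.0, n_t=30, n_c=30)  # slightly larger here
```

When the two group standard deviations differ, as in this toy example, the two conventions give noticeably different effect sizes, which is precisely why the choice of denominator mattered in the debate.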
However, there are many more types of measurement artefacts that could potentially be used in correcting an effect size for bias, such as dichotomization of continuous dependent and independent variables, deviations from perfect construct validity in the dependent and independent variables, transient errors of measurement, random response errors of measurement, measurement error due to scorer disagreement, and variance due to extraneous factors (Hunter & Schmidt, 1990; Schmidt & Hunter, 1999). The correction for sampling error involves weighting each individual effect size by the sample size, because this ostensibly reflects the precision of the method of sampling used in the study.¹ In inferential statistics, small sample sizes are often responsible for attenuating (reducing) the size of an effect. In methods of meta-analysis, it is assumed that studies with larger samples are more representative and therefore better infer the true relationship in the population. Therefore, each effect size is weighted by the sample size such that studies with larger samples are granted more weight in the average. Studies can also be corrected for a second artefact of bias: measurement error. Hunter and Schmidt (1990) proposed a method to correct for a specific type of measurement error in psychometric data, the type often available to sport and exercise psychologists, using Cronbach alpha coefficients. Alpha coefficients measure the internal consistency of the variable and reflect the extent to which self-report measures tap a construct reliably. Alphas are an adequate correction for the effect of poor measurement which, if left uncorrected, will likely bias the effect size downwards. Other means to correct for measurement error are test-retest reliability coefficients and correlations with external criteria.
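A bare-bones sketch of these two corrections, with made-up study values: each observed r is disattenuated using the Cronbach alphas of its two measures (dividing r by the square root of the product of the alphas), and the corrected values are then averaged with sample-size weights.

```python
import math

def correct_for_attenuation(r: float, alpha_x: float, alpha_y: float) -> float:
    """Disattenuate an observed correlation using the reliabilities
    (Cronbach alphas) of the two measures."""
    return r / math.sqrt(alpha_x * alpha_y)

def weighted_mean_r(rs, ns):
    """Sample-size-weighted average correlation."""
    return sum(n * r for r, n in zip(rs, ns)) / sum(ns)

# (observed r, alpha of X, alpha of Y, sample size) for three hypothetical studies
studies = [(0.25, 0.80, 0.75, 120), (0.35, 0.90, 0.85, 60), (0.10, 0.70, 0.80, 200)]
corrected = [correct_for_attenuation(r, ax, ay) for r, ax, ay, _ in studies]
ns = [n for *_, n in studies]
r_bar = weighted_mean_r(corrected, ns)  # each corrected r exceeds its observed value
```

As the text notes, the corrected values are always larger than the observed ones, and the price of the correction is increased sampling error in the corrected estimate.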
Neither test-retest reliability coefficients nor correlations with external criteria are a perfect means of correcting for measurement error, and the practice of correcting for measurement bias in psychometric data has been criticized because corrected effect sizes do not reflect the relationships found in real studies, whose measures are never error-free. However, the advent of structural equation modeling techniques has made this less of a problem (Martin, 1982). In addition, there is a statistical price to pay for the correction of measurement bias: an increase in the sampling error of the corrected effect size. Despite these cautionary considerations, correcting an effect size for measurement error can provide a better estimate of the true value of a relationship in the population. The result of a meta-analysis is an average corrected or weighted correlation coefficient (Pearson's r) or standardized mean difference (often Cohen's d) across the available studies, after correcting for sampling and measurement error, together with the average corrected standard deviation of the effect size. It is important to note that often insufficient data exist at the individual study level to correct for biases at that level. In such cases, Hunter and Schmidt (1990) advocate that corrections be made across studies using artefact distributions. This is done by correcting the effect sizes using the distribution of the artefacts to be corrected for at the group level. For example, in psychometric meta-analysis,

¹ It is important to note that while weighting the individual effect size by the sample size is widely advocated, recent evidence suggests that using the inverse-variance weight is preferable because it takes into account the variability of individual scores within the primary studies (see Lipsey & Wilson, 1996 for a more detailed discussion).
some studies often do not report reliability statistics like Cronbach alpha for all variables, such as self-report measures of behavior (Hagger, Chatzisarantis, & Biddle, 2002). In such cases it is often useful to use the distribution of that artefact (measurement error), derived from the studies that do include sufficient information on reliability statistics, as a basis for correcting the behavior variable in studies that do not. It is possible to conduct a significance test of the resulting effect size. This is done by expressing the averaged effect size statistic in standard deviations or as a z-score and then conducting a univariate z-test or an alternative (e.g. Hedges' Q) to establish the probability of finding an effect of that size by chance. An alternative means is to establish whether the 95% confidence intervals about the averaged effect size include the value of zero. If they do not, the researcher can make a reasonable case that the true value of the effect is different from zero, in other words, statistically significant. However, some authors have criticized the use of such a test because it tends to include only the variance attributable to sampling error in each individual study. Instead, researchers advocate the use of credibility intervals, which are calculated from the corrected standard deviations for each study as well as the variability arising across studies (Field, 2001; Hunter & Schmidt, 1990; Whitener, 1990). In addition to establishing the average size (central tendency) and variability of the effect across studies, and testing the hypothesis that the effect size is significantly different from zero, meta-analysis also permits the researcher to establish whether the effect size differs across studies, i.e. whether it is homogeneous or heterogeneous. In calculating the average effect size across studies, the researcher can calculate the amount of error variance attributable to the corrected artefacts (i.e.
sampling and measurement error) relative to the total amount of random error in the effect size without the corrections. This provides an estimate of whether variation due to sampling and measurement bias, i.e. bias inherent in the methodology rather than from other external variables, is responsible for most of the variation in the effect size across the sample of studies. If the amount of error that can be attributed to methodological artefacts is high (Hunter and Schmidt, 1990, recommend a level greater than 75%), it is likely that the effect size is homogeneous, i.e. it is the best estimate of the true value of that relationship in the population. The residual 25% of variance unaccounted for by error attributable to methodological artefacts is considered relatively unsubstantial compared with the majority of the variance, which is accounted for by the biases corrected for in the meta-analysis (Hunter & Schmidt, 1990). Strictly speaking, however, the case is only truly homogeneous and unaffected by extraneous variables if 100% of the random error is accounted for by the methodological artefacts corrected for in the meta-analysis. If the artefacts account for a modest amount (i.e. less than 75%) of the variance in the effect size, the effect is said to be heterogeneous, and this suggests that external variables may moderate or affect the relationship. On finding heterogeneity in the relationship, a researcher would be compelled to search for moderators in the sample of studies that account for the unattributed variance. A moderator may be a demographic variable like gender or age, but may also be a condition of the experiment, such as controlled or uncontrolled, or even the publication status of the study, published or unpublished.
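Hunter and Schmidt's 75% rule can be sketched as follows, with toy numbers and correcting for sampling error only: the variance in r expected from sampling error alone is compared with the observed variance of the study correlations.

```python
def percent_variance_from_sampling_error(rs, ns):
    """Share (%) of the observed variance in correlations that sampling
    error alone would produce (a Hunter & Schmidt-style sketch); a value
    of 75% or more suggests a homogeneous effect, less suggests that
    moderators are operating."""
    k, total_n = len(rs), sum(ns)
    r_bar = sum(n * r for r, n in zip(rs, ns)) / total_n
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / total_n
    var_err = (1 - r_bar ** 2) ** 2 * k / total_n  # expected sampling-error variance
    return 100 * var_err / var_obs

# Widely scattered correlations: sampling error explains little of the spread,
# so a search for moderators would be warranted.
pct = percent_variance_from_sampling_error([0.10, 0.45, 0.05, 0.50], [80, 120, 60, 100])
```

With these hypothetical values the artefact-attributable share falls well below 75%, which under the rule above would send the analyst looking for moderators.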
Once the studies have been classified into separate samples according to the moderator, a meta-analysis is conducted on the effect of interest for each sample of studies, and the degree of variance accounted for by the artefacts is calculated to see whether accounting for the moderator has resulted in homogeneous groups.

The limitations of meta-analysis

Classification of study variables

Meta-analysis is not without its critics. Two of the most often proposed critiques concern the means used to classify samples of studies and what Rosenthal called the file-drawer problem. Some meta-analytic studies have been criticized because of the limitations of classifying certain constructs into categories for the main or moderator analyses. For example, a researcher wanting to look at the relationship between emotion and sport performance is faced with the formidable task of classifying studies that have used a wealth of different measures and conceptualizations of emotion (e.g. affect, anxiety, stress, mood, etc.) into logical and manageable groups to test the emotion-performance relationship. Researchers conducting meta-analyses therefore often give painstaking detail on the selection and coding process they applied to their sample of eligible studies in order to provide a semblance of transparency and objectivity in the method used to identify the salient variables relevant to the test of the hypothesized effect. In some cases the coding process is unambiguous and relatively straightforward. For example, in a recent meta-analysis examining the relationships among variables from a specific social cognitive theory, the theory of planned behavior, in an exercise context (Hagger et al., 2002), the salient variables and concomitant effect sizes were clearly identifiable and easy to classify as an
equivalent test of the same relationships, because measures of these constructs in almost all of the studies adhered to standardized instruments. However, some researchers conducting meta-analytic reviews in sport and exercise psychology have reported some debate over the classification and inclusion status of variables to be included in the analysis. For example, Carron, Hausenblas, and Mack (1996) reported that they engaged in considerable discussion as to how many effect sizes should be included from studies that examined several relationships between sources of social support (e.g. family, children, exercise partners) and exercise behavior. The resolution involved careful coding of the variables in a systematic fashion by expert coders. At this juncture, it is also important to obtain a check of the reliability of the coders' decisions. This can be done by examining the consistency between the independent evaluations of the coders, assessed using intraclass reliability coefficients. However, despite such care and rigor applied by researchers conducting such analyses, the process of classifying such variables and measures is a subjective one, adopting judgments that are not based on the rigorous hypothesis-testing and falsification principles on which the calculation of the corrected averaged effect size statistic is based. This must be recognized as a limitation of meta-analytic findings.

The file-drawer problem

Rosenthal and DiMatteo (2000) suggest that meta-analytic reviews of published studies are often biased because they neglect the other possible tests of the relationship that are unavailable to the researcher, lost in the file drawers of researchers who have either not bothered to, or failed to, get them published because their research showed support for a null hypothesis (i.e. no effect).
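One widely used quantitative response to the file-drawer problem is Rosenthal's fail-safe N, which estimates how many unpublished null-result studies would have to exist to render the combined effect non-significant. A minimal sketch with hypothetical per-study z-scores:

```python
def fail_safe_n(z_scores, z_crit=1.645):
    """Rosenthal's fail-safe N: the number of additional studies averaging
    z = 0 that would bring the combined (Stouffer-type) z below the
    one-tailed .05 criterion."""
    k = len(z_scores)
    n_fs = (sum(z_scores) ** 2) / (z_crit ** 2) - k
    return max(0, int(n_fs))

# Five hypothetical studies, each contributing a z-test of the focal effect.
# A fail-safe N well above the number of included studies is usually read
# as reassurance against the file-drawer threat.
nfs = fail_safe_n([2.1, 1.8, 2.5, 1.2, 2.9])
```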
Rosenthal observed that researchers tended not to submit (and editors tended not to accept) negative-outcome studies for publication. Thus there is likely to be both a search bias and a publication bias in the sample of studies collated by researchers conducting a meta-analysis. Again, this is an element of meta-analysis that introduces subjectivity into the process and is not dependent upon the hypothesis-testing and falsification principles on which the meta-analytic calculations are based. Instead, the availability of studies depends on the degree of effort the researcher invests in the literature search, the goodwill of researchers in making their data available on request, and the nature of the review process that determines which studies get published. Indeed, Spence and Blanchard (2001) have recently noted a pervasive publication bias within the sport and exercise psychology literature, suggesting that studies with statistically significant tests of hypothesized effects are more likely to be published. Possible solutions put forward are increased rigor in the identification of studies for the analysis, including the commitment of the researcher to tracking down fugitive literature from researchers in the field that has not been made available in published form. Further, an additional statistic has been proposed, known as the fail-safe N, which represents the number of studies with null results (i.e. tests of the effect size of interest that found zero effect) that would have to be found in order to reduce the corrected averaged effect size to a trivial level. A large value for the fail-safe N, preferably greater than the number of studies in the analysis, is desirable because, provided the researcher has done an adequate job in the literature search, it is unlikely that such a number of studies exists.

Some recent developments in meta-analysis

Fixed vs.
random effects models

Over the past 20 years, three approaches to meta-analysis have gained popularity, and the majority of meta-analyses in social science adopt one of these strategies (Field, 2001). While the principles behind the general approach are the same, the algorithms used to calculate the corrected effect sizes differ in several ways. One of the major differences, which is also a source of considerable debate, is the assumption regarding the population from which the studies in the meta-analysis are drawn. In methods of meta-analysis, there are two main assumptions regarding the underlying population, and these are manifested in two main models of meta-analytic algorithms: fixed effects and random effects. A fixed effects model assumes that all of the studies in the meta-analysis come from the same population. This means that the true size of the effect under scrutiny will be the same for all of the studies included in the meta-analysis. In this case the effect size is assumed to be homogeneous, and the only source of variation in the effect size is assumed to be variation within each study, i.e. sampling and measurement error. A random effects model, on the other hand, does not assume that each study is drawn from the same population; rather, each is drawn from but one of a universe of possible population effect sizes, termed a superpopulation. In this case there are two sources of variation in a given effect size: that arising from within the study itself, just as in the fixed effects model, but also that arising from variations in the population effect between studies. The random effects model is therefore termed a heterogeneous case.
The importance of the distinction between the two models lies in the researcher's desire to generalize from the findings of their meta-analysis. Since a fixed effects model assumes that the population effect size across all of the studies will be the same, it assumes that the sample of studies represents all of the possible tests of the effect. In other words, the sample of studies reflects the universe of studies. In such a case, the averaged effect size can be assumed to be the true value of that effect. However, as we have seen earlier, a researcher is unlikely to have obtained all of the possible studies despite his or her best efforts in pursuing fugitive literature. In such a case, the researcher cannot assume that the sample of studies represents the universe of possible studies testing the effect under scrutiny. A random effects model, therefore, is most appropriate because it does not assume that the sample represents all possible tests of the effect; the sample is, in fact, just a sample of all of the possible studies that could be done to test the effect. As a consequence, the random effects model is preferable for researchers wishing to make inferences of generalizability regarding the effect size to studies not included in the analysis. If a researcher adopts a fixed effects model, he or she can only generalize the effect to that particular set of studies. Of the different methods of meta-analysis, the methods put forward by Hedges and Olkin (1985) and Rosenthal and Rubin (1982) are traditionally fixed effects models. Hedges and Vevea (1998) have produced a random effects version of their model, but this is seldom applied in studies that have adopted these authors' approach (Field, 2001). The Hunter and Schmidt (1990) method is generally considered a random effects model. Most research in social science has adopted the Hedges and Olkin or Rosenthal and Rubin models because of their intuitive simplicity and straightforward calculations.
However, recent research has suggested that the adoption of fixed effects models for real-world data may result in biased estimates of the true size of an effect. This is because real-world data often vary in the size of a given effect, one reason being that there are often numerous moderator variables present in real-world data that affect the effect size in the population. There is also evidence that, in the absence of such moderators, real-world data still do not conform to a homogeneous case and exhibit random variation in the effect across the population (Field, 2001). Furthermore, simulation studies that have generated data for populations conforming to the heterogeneous case have shown that fixed effects models often result in an increased likelihood of making a Type I error, that is, accepting the existence of a hypothesized effect when it is zero or non-existent (Field, 2001, 2003). Field therefore advocates the adoption of random effects models, such as those of Hunter and Schmidt (1990) or Hedges and Vevea (1998), for researchers dealing with real-world data or for those wishing to generalize beyond the sample of studies testing a given effect used in their meta-analysis. A similar view has been put forward by Hunter and Schmidt (2000). They reviewed the methods used in meta-analytic articles published in the journal Psychological Bulletin and found that, of the 21 studies identified, all used fixed effects models followed by moderator analyses, and none used a random effects model. They warned against the use of a fixed effects model, as it tended to inflate the Type I error rate to up to 11% for studies with sample sizes of 25 and up to 28% for studies with sample sizes up to 100. Following Hunter and Schmidt, I conducted a similar review of the methods of meta-analysis adopted in sport and exercise psychology research to examine whether a similar trend in the models used was present in this area of the sport and exercise sciences.
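The practical consequence of the model choice can be seen in a small sketch. The data are toy correlations, and the algorithm (Fisher-z pooling with inverse-variance weights, plus a DerSimonian-Laird estimate of between-study variance) is a generic stand-in for the random effects approaches discussed here, not the article's own procedure:

```python
import math

def fisher_z(r):
    """Fisher's r-to-z transformation."""
    return 0.5 * math.log((1 + r) / (1 - r))

def fixed_and_random(rs, ns):
    """Average correlations under a fixed effects model (inverse-variance
    weights on the Fisher-z scale) and a random effects model that adds a
    DerSimonian-Laird between-study variance (tau^2) to each study's variance."""
    zs = [fisher_z(r) for r in rs]
    w = [n - 3 for n in ns]          # variance of Fisher z is 1 / (n - 3)
    k, sw = len(rs), sum(w)
    z_fixed = sum(wi * zi for wi, zi in zip(w, zs)) / sw
    q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, zs))  # heterogeneity statistic
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                          # between-study variance
    w_re = [1 / (1 / wi + tau2) for wi in w]
    z_rand = sum(wi * zi for wi, zi in zip(w_re, zs)) / sum(w_re)
    return math.tanh(z_fixed), math.tanh(z_rand)

# Heterogeneous toy data: the fixed effects estimate is dominated by the
# largest study, while the random effects estimate weights studies more evenly.
r_fixed, r_random = fixed_and_random([0.10, 0.50, 0.30], [200, 50, 100])
```

With heterogeneous inputs like these, the two models return visibly different averages, which is exactly the situation in which the choice of model matters for the conclusions drawn.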
Initially, a review of several pertinent journals publishing research in sport and exercise psychology and the sport and exercise sciences was conducted, including European Journal of Sport Sciences, International Journal of Sport and Exercise Psychology, Journal of Applied Sport Psychology, Journal of Sport and Exercise Psychology, Journal of Sports Sciences, Medicine and Science in Sports and Exercise, Psychology of Sport and Exercise, Research Quarterly for Exercise and Sport, and The Sport Psychologist, as well as online and electronic databases such as the Institute for Scientific Information's Social Science Citation Index and PsycINFO. The search revealed 18 studies that have used meta-analysis to provide a cumulative research synthesis in sport and exercise psychology research. A summary of the research articles, their main findings, the approach and method of meta-analysis used, and the specific moderator analyses adopted is provided in Table I. The topics of the studies identified in the search were diverse. Half of the articles focused on emotional constructs such as anxiety and mood, as a dependent or an independent variable, in exercise, sport, or physical activity (Beedie, Terry, & Lane, 2000; Craft, Magyar, Becker, & Feltz, 2003; Jokela & Hanin, 1999; Kline, 1990; Long & Van Stavel, 1995; Petruzzello, Landers, Hatfield, Kubitz, & Salazar, 1991; Rowley, Landers, Kyllo, & Etnier, 1995; Schlicht, 1994), two studies focused on the theories of reasoned action and planned behavior (Hagger et al., 2002; Hausenblas, Carron, & Mack, 1997), and two studies on motivational constructs from self-determination theory (Chatzisarantis, Hagger, Biddle, Smith, & Wang, 2003) and achievement goal theory (Ntoumanis & Biddle, 1999). The remaining
Table I. Methodological characteristics of meta-analytic reviews conducted in the field of sport and exercise psychology. For each review, the table reports: authors; study topic; main effect size statistic used and summary findings; approach; meta-analysis model; significance test of corrected effect size; and moderator analyses.

Beedie, Terry, and Lane (2000). The Profile of Mood States and athletic performance. Small effect of mood state on sport achievement but stronger effect on sport performance.

Carron, Colman, Wheeler, and Stevens (2002). Cohesion and performance in sport. Moderate to large effect of cohesion on sport performance, moderated by publication status and gender.

Carron, Hausenblas, and Mack (1996). Social influence and exercise. Social influence generally had a small to moderate effect on exercise behaviors, cognition, and affect.

Chatzisarantis, Hagger, Biddle, Smith, and Wang (2003). Perceived locus of causality in exercise, sport, and physical education. Pearson's r and b regression coefficients: an autonomous locus of causality mediated the competence-intention relationship.

Craft, Magyar, Becker, and Feltz (2003). Relationships between competitive state anxiety constructs and sport performance. Pearson's r and b regression coefficients: the self-confidence aspect of competitive state anxiety had the strongest and most consistent effect on sport performance.

Etnier, Salazar, Landers, and Petruzzello (1997). The influence of physical fitness and exercise upon cognitive functioning. Exercise found to have a small positive effect on cognition.

Approach, model, and significance-test entries in this portion of the table include the Hunter and Schmidt (1990) approach, random effects models, 95% confidence intervals, 90% credibility intervals, the Hedges Q-test, Fisher's z-test, and cases with no formal test of significance cited; moderator-analysis entries range from no test of homogeneity (with moderator analyses conducted using univariate F-tests, or no formal comparisons made) to tests of homogeneity or credibility intervals followed up by moderator analyses and z-tests.
Table I (Continued)

Hagger, Chatzisarantis, and Biddle (2002). Topic: Review of the theories of reasoned action and planned behavior in exercise. Main effect size statistic and findings: Pearson's r and b regression coefficients; perceived behavioral control and attitudes were strong, unique predictors of intention, and intention was the sole predictor of exercise behavior. Model: random effects. Significance test: used 90% credibility intervals. Moderator analysis: used credibility intervals to test for heterogeneity, followed up by moderator analyses to test moderators.

Hausenblas, Carron, and Mack (1997). Topic: Application of the theories of reasoned action and planned behavior in exercise. Main effect size statistic and findings: Cohen's d and Pearson's r; attitudes and control had the strongest effects on intention, and intention the strongest effect on behavior. Significance test: used 95% confidence intervals. Moderator analysis: used a test of homogeneity, but moderator analyses were not based on homogeneity results.

Hausenblas and Symons Downs (2001). Topic: Body image in athletes and non-athletes. Main findings: athletes had significantly higher body image ratings than non-athletes. Significance test: Fisher's z-test. Moderator analysis: used a test of homogeneity, but moderator analyses were not based on homogeneity results.

Jokela and Hanin (1999). Topic: Individual zones of optimal functioning in sport. Main findings: supported the in-out of the zone hypothesis. Approach: Hunter and Schmidt (1990). Significance test: Hedges' Q-test. Moderator analysis: used Hunter and Schmidt's (1990) 75% rule.

Kline (1990). Topic: Anxiety and sport performance. Main effect size statistic and findings: Pearson's r; small negative relationship between anxiety and sport performance, moderated by age, skill level, duration, sport characteristics, time of measurement, and study characteristics. Approach: Hunter and Schmidt (1990), random effects model. Significance test: used 95% confidence intervals. Moderator analysis: used Hunter and Schmidt's (1990) 75% rule, followed by tests of moderators using the same rule.

Kyllo and Landers (1995). Topic: Effect of goal setting in sport and exercise. Main findings: moderate positive effect of goal setting on sport performance. Significance test: Hedges' Q-test. Moderator analysis: used a test of homogeneity, followed up by moderator analyses with 95% confidence intervals used to test moderators.
Table I (Continued)

Long and Van Stavel (1995). Topic: Effects of exercise on anxiety. Main findings: exercise training positively affected anxiety levels. Significance test: Hedges' Q-test. Moderator analysis: used a test of homogeneity, followed up by moderator analyses with the Q statistic used to test moderators.

Marshall and Biddle (2001). Topic: Applications of the transtheoretical model to physical activity and exercise. Main findings: level of physical activity, self-efficacy, and the behavioral pros from decisional balance increased with stage of change; however, the analysis was unable to detect whether changes in variables across stages reflect qualitatively different stages or an underlying continuum. Approach: Hunter and Schmidt (1990), random effects model. Significance test: used 95% confidence intervals. Moderator analysis: used credibility intervals to test for heterogeneity, followed up by moderator analyses to test moderators.

Ntoumanis and Biddle (1999). Topic: Motivational climate in sport and physical activity. Main findings: a mastery climate positively affected adaptive motivational outcomes. Approach: Hunter and Schmidt (1990), random effects model. Significance test: used Cohen's effect size criterion. Moderator analysis: no moderator analysis conducted.

Petruzzello, Landers, Hatfield, Kubitz, and Salazar (1991). Topic: The anxiety-reducing effects of acute and chronic exercise. Main findings: aerobic forms of exercise reduced levels of anxiety. Significance test: Fisher's z-test. Moderator analysis: used a test of homogeneity, followed up by moderator analyses and univariate F-tests to test moderators.

Rowley, Landers, Kyllo, and Etnier (1995). Topic: Effect of the iceberg mood state profile on success in athletes. Main findings: successful athletes had a more positive mood profile, although the effect was small. Significance test: no formal test reported, but reported SDs indicated results were not significantly different from zero. Moderator analysis: used a test of homogeneity, followed up by moderator analyses and univariate F-tests to test moderators.

Schlicht (1994). Topic: Does physical exercise reduce anxious emotions? Main effect size statistic and findings: Pearson's r; small negative relationship between exercise and anxious emotions. Approach: Hunter and Schmidt (1990), random effects model. Significance test: used 95% confidence intervals. Moderator analysis: used Hunter and Schmidt's (1990) 75% rule, followed by tests of moderators using the same rule.
articles represented exclusive meta-analytic treatments of the social influences on exercise (Carron et al., 1996), the relationship between group cohesion and sport performance (Carron, Colman, Wheeler, & Stevens, 2002), exercise and cognitive functioning (Etnier, Salazar, Landers, & Petruzzello, 1997), body image in athletic and non-athletic populations (Hausenblas & Symons Downs, 2001), goal setting in sport and exercise (Kyllo & Landers, 1995), and the application of the transtheoretical model to physical activity (Marshall & Biddle, 2001). Importantly, only six meta-analyses (Chatzisarantis et al., 2003; Hagger et al., 2002; Kline, 1990; Marshall & Biddle, 2001; Ntoumanis & Biddle, 1999; Schlicht, 1994) adopted a random effects model (Hunter & Schmidt, 1990) to calculate the corrected effect sizes in their analyses; the remainder used fixed effects models. Interestingly, only one meta-analysis has actually acknowledged the distinction between fixed versus random effects models and stated its use of a fixed effects model (Craft et al., 2003). Furthermore, only three of the meta-analyses corrected for within-study statistical artefacts other than sampling error, namely measurement error (Chatzisarantis et al., 2003; Hagger et al., 2002; Marshall & Biddle, 2001), and none corrected for range restriction. However, some authors acknowledged this as a limitation of their analysis (Jokela & Hanin, 1999). Although most meta-analyses conducted a formal test for the homogeneity of the corrected effect sizes across studies, few adopted the 75% rule advocated by Hunter and Schmidt, which compares the variability attributable to corrected artefacts with the total variance in the effect sizes across the studies, when conducting moderator analyses. In summary, the present review suggests that the majority of researchers in sport and exercise psychology adopted a fixed rather than random effects model when conducting meta-analytic reviews, supporting the findings of Hunter and Schmidt (2000) in the general psychology literature.
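The core Hunter and Schmidt corrections and the 75% rule referred to above can be sketched in a few lines of Python. This is a minimal illustration using hypothetical study data, not the full published procedure:

```python
import math

def correct_for_attenuation(r, rxx, ryy):
    """Disattenuate a correlation for measurement error using the
    reliabilities (rxx, ryy) of the two measures."""
    return r / math.sqrt(rxx * ryy)

def hunter_schmidt_bare_bones(rs, ns):
    """Sample-size-weighted mean correlation, observed variance,
    expected sampling-error variance, and the Hunter-Schmidt 75% rule."""
    k, total_n = len(rs), sum(ns)
    r_bar = sum(r * n for r, n in zip(rs, ns)) / total_n
    # Sample-size-weighted observed variance of the effect sizes
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / total_n
    n_bar = total_n / k
    # Variance expected from sampling error alone
    var_err = (1 - r_bar ** 2) ** 2 / (n_bar - 1)
    # 75% rule: artefacts account for >= 75% of observed variance
    homogeneous = (var_err / var_obs) >= 0.75
    return r_bar, var_obs, var_err, homogeneous

# Hypothetical correlations and sample sizes from three studies
r_bar, var_obs, var_err, homogeneous = hunter_schmidt_bare_bones(
    [0.30, 0.25, 0.35], [100, 150, 50])
```

If the 75% rule is satisfied, the remaining variability is attributed to artefacts and no moderator search is warranted; otherwise moderator analyses follow.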
The ability of the sport and exercise psychology researchers adopting a fixed effects model of meta-analysis to generalize their findings must therefore be limited, given the assumptions that underlie the fixed effects model. Future researchers should take heed of these developments and are recommended to use random effects models in meta-analytic studies on applied, real-world data.2

Use of multiple regression in meta-analysis

Often a researcher is not interested in testing the effect of a single independent variable on a dependent variable in isolation across studies. Rather, they are interested in the unique effects of a number of independent variables on a dependent variable in a sample of studies. Since these independent variables may be related to each other as well as to the dependent variable, the effect reflected in a zero-order correlation between any one of the independent variables and the dependent variable may be misleading, because the independent variable may share variance with another independent variable as well as with the dependent variable (for a detailed explanation see Trafimow, 2004). In individual studies adopting correlational data, the question of the effect of multiple independent variables on a dependent variable is usually analyzed using linear multiple regression techniques. Researchers adopting meta-analysis in the sport and exercise domain have recently applied linear multiple regression analyses to the resultant correlations corrected for artefacts from a meta-analysis to provide a multivariate test of the effect (e.g. Chatzisarantis et al., 2003; Craft et al., 2003; Hagger et al., 2002). This is a very powerful technique as it permits the researcher to establish the pattern of relationships among the constructs under scrutiny and therefore to resolve, within the limits of the meta-analysis, which independent variables make the most substantial contribution to explaining variance in the dependent variable across studies.
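For the two-predictor case, standardized regression weights can be computed directly from the corrected correlations. The sketch below uses hypothetical corrected effect sizes for attitude and perceived behavioral control predicting intention; the closed-form expressions are the standard ones for two standardized predictors:

```python
def two_predictor_betas(r1y, r2y, r12):
    """Standardized regression weights (and R^2) for two predictors
    of one dependent variable, from zero-order correlations."""
    denom = 1 - r12 ** 2
    b1 = (r1y - r12 * r2y) / denom
    b2 = (r2y - r12 * r1y) / denom
    r_squared = b1 * r1y + b2 * r2y  # variance explained
    return b1, b2, r_squared

# Hypothetical meta-analytically corrected correlations:
# attitude-intention = .60, PBC-intention = .50, attitude-PBC = .40
b_att, b_pbc, r2 = two_predictor_betas(0.60, 0.50, 0.40)
```

With more predictors, the same logic generalizes to solving the linear system formed by the predictor intercorrelation matrix and the predictor-criterion correlations.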
To illustrate this method we turn to a recent example from the sport and exercise psychology literature. Spence (1999) criticised the meta-analysis conducted by Hausenblas et al. (1997) for not conducting a regression analysis using the corrected effect size estimates derived from their meta-analysis of the theory of planned behavior. Spence suggested that a true test of the theory required a regression analysis to identify the unique effects of the independent variables of attitudes, subjective norms, and perceived behavioral control on intentions to exercise (Ajzen, 1991; Ajzen & Fishbein, 1980). This was resolved by Hagger et al. (2002), who conducted a path analysis on their meta-analytically derived corrected correlation matrix of the theory of planned behavior constructs across studies. The path analysis comprised a series of multiple regressions reflecting the network of relationships among the theory constructs. The regression weights from Hagger

2 It is important to note that the Hunter and Schmidt (1990) approach, a random effects model of meta-analysis, generally performs well in simulation studies with respect to the Type I error rate in heterogeneous cases (i.e. when the population effect size varies across studies) (Field, 2001). However, it seems that the Hunter and Schmidt method is too liberal (i.e. more null results are found to be significant) in sets of studies in which the population effect size is homogeneous (i.e. the same population effect size across studies).
et al.'s path analysis were substantially attenuated in comparison to the zero-order effect sizes put forward by Hausenblas et al., supporting Spence's claims and research using this theory in other areas of social psychology (Armitage & Conner, 2001). In summary, when a researcher is interested in examining the unique effects of two or more variables on a dependent variable in data derived from a meta-analysis, it is important to apply linear multiple regression analyses to the corrected correlations among the variables of interest in order to understand the true nature of the relationships across the sample of studies.

Studies with multiple tests of the same effect

One dilemma that researchers conducting a meta-analysis are often faced with is individual studies in their sample that have made several tests of the same effect in the same sample. Often this arises if the researcher's coding system used to classify the independent and dependent variables responsible for the effect of interest encompasses a number of independent variables that have been used in tests of the effect within a given study. In such cases it is usually acceptable to adopt the average effect size across those multiple tests of a relationship (Wolf, 1986). For example, Carron et al. (1996) examined the effects of social influence variables on exercise behavior and other outcome variables and found that some studies reported multiple tests of the same effect. In their sample, some studies provided independent tests of the effect of several different sources of support on the same dependent variable. In order to resolve this, the researchers included the average effect size for the multiple tests, according to their coding system. This is consistent with meta-analytic studies that have used a coding system to classify like variables into manageable categories for the analysis (e.g. Hagger & Orbell, 2003).
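Averaging several correlations that test the same effect in the same sample, as in the Carron et al. example above, is commonly done on Fisher's z scale to reduce bias from the bounded r metric. A minimal sketch with hypothetical values:

```python
import math

def average_correlations(rs):
    """Average multiple tests of the same effect from one study by
    transforming to Fisher's z, averaging, and back-transforming to r."""
    zs = [math.atanh(r) for r in rs]  # Fisher's z = atanh(r)
    return math.tanh(sum(zs) / len(zs))

# Two hypothetical tests of the same effect within one study
r_avg = average_correlations([0.30, 0.40])
```

The study then contributes a single averaged effect size to the analysis rather than two correlated ones.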
In cases of studies in which different dependent variables are used, it is considered acceptable to include each as a separate test of the effect in the analysis. In addition, some studies may have tested the same effect within a single study but presented separate effect sizes for a given moderator variable such as gender. In such cases, these samples can be treated independently and therefore both effect sizes can be included as separate tests of the effect, just as a single study that reported data from a single-sex sample might. However, it would be appropriate to conduct a moderator analysis by gender if the data were available, in which case the two samples would serve as a single independent test of the relationship at each level of the moderator. Occasionally, researchers conducting meta-analyses are confronted with a study in which the test of the effect under scrutiny is expressed as correlations among multiple manifest items, i.e. the items that make up an overall psychological construct. This typically occurs in studies that adopt a latent variable or structural equation modeling approach to the analysis of the data. This presents a similar problem to that outlined earlier because there are effectively multiple tests of the same effect. However, because each item ostensibly measures the same construct, the multiple correlations are effectively testing exactly the same relationship repeatedly. Further, there is no reliability coefficient associated with the construct because no internal consistency statistics (the artefact usually used as a control for measurement error) have been calculated. Rather than omit the study, one alternative, put forward by Hagger et al. (2002), is to use structural equation modeling to resolve the set of correlations among the items measuring the same independent variable and produce a set of latent constructs for which a single effect size can be derived and a reliability coefficient calculated.
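Short of fitting a full structural equation model, classical composite formulas yield a comparable single effect size: the correlation between a unit-weighted sum of the items and the outcome, plus a reliability coefficient (alpha) for the composite. The sketch below assumes equal inter-item correlations and unit item variances, and is purely illustrative:

```python
import math

def composite_correlation(item_outcome_rs, mean_inter_item_r):
    """Correlation between a unit-weighted sum of k items and an
    outcome, given each item's correlation with the outcome and the
    average correlation among the items (unit item variances)."""
    k = len(item_outcome_rs)
    composite_var = k + k * (k - 1) * mean_inter_item_r  # variance of the sum
    return sum(item_outcome_rs) / math.sqrt(composite_var)

def coefficient_alpha(k, mean_inter_item_r):
    """Internal consistency of the k-item composite (Spearman-Brown form)."""
    return k * mean_inter_item_r / (1 + (k - 1) * mean_inter_item_r)

# Three hypothetical items, each correlating .30 with the outcome,
# with average inter-item correlation .50
r_xy = composite_correlation([0.30, 0.30, 0.30], 0.50)
alpha = coefficient_alpha(3, 0.50)
```

The composite correlation and alpha can then enter the meta-analysis as a single effect size with an associated reliability for the measurement-error correction.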
This is slightly controversial because latent variables effectively control for measurement error in the variable. However, since the reliability of the latent variable can be estimated, and it is based on the amount of variance reflected in the manifest items that make up the latent construct, the precision of the measurement of the construct of interest can be accounted for in the meta-analysis. In summary, there are procedures that can be used to resolve issues surrounding multiple tests of an effect within a study which will help researchers maximize the potential of their sample of studies (Hagger & Orbell, 2003).

Summary and recommendations for researchers

The present review has examined the importance of meta-analysis in sport and exercise research, reviewed some of its recent applications, highlighted some recent controversies, and illustrated some innovative methods by which they have been resolved by researchers using meta-analysis. Guidelines for the conduct of meta-analytic studies based on this review are presented in Figure 1. This reflects recommended practice based on the issues raised throughout this article. The flowchart highlights the steps taken in pursuing fugitive literature, the nature of the effect sizes available, the treatment of the data prior to the correction for artefacts, tests of homogeneity and the search for moderators, and the production of a final table of corrected effect sizes. Not explicitly included in the flowchart is the selection of the meta-analytic approach. Specifically, it is recommended that meta-analytic researchers
Figure 1. Flowchart outlining the recommended steps for conducting a meta-analysis. Note: CI95 = 95% confidence interval of the averaged corrected effect size statistic. (The flowchart steps: identify salient variables; establish whether effect sizes, sample sizes, and reliabilities are available, contacting authors for missing data and rejecting studies where data remain unavailable; determine the nature of the effect sizes, i.e. correlations such as Pearson's r, or differences such as Cohen's d, with structural equation modeling applied where effect sizes are correlations between items; correct effect sizes for sampling and measurement error; conduct homogeneity analysis and examine the CI95; if groups are homogeneous, generate the final corrected effect size table, otherwise conduct moderator analyses.)

provide an a priori rationale as to the level of inference they wish to make regarding the hypothesized effect of interest. If they merely wish to generalize regarding the true size of an effect in the sample of studies that they have collected alone, or if they are confident that the effect under scrutiny represents a homogeneous case, i.e. that all of the variance in the effect across studies arises from artefacts within the studies and the effect is sampled from the same population, then a fixed effects model is acceptable. However, if the researcher wishes to extrapolate their inferences beyond their sample of studies and they assume that variation between studies may affect the true value of the effect size, i.e. that each study is sampled from a population in which the effect size varies, then a random effects model is recommended (Field, 2001, 2003). In addition, researchers conducting a meta-analysis are recommended to adopt good practice when including studies that have conducted multiple tests of a given effect size. Finally, the reader's attention should be drawn to further reading on the topic of meta-analysis and the types of computer software available to conduct meta-analyses. A lucid treatment of meta-analysis is given in Lipsey and Wilson's (1996) text on the matter.
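The final step in the flowchart, generating corrected effect sizes with their 95% confidence intervals, can be sketched as follows. The summary statistics are hypothetical; the credibility interval uses the residual variance after removing sampling-error variance, in line with the random effects logic described above:

```python
import math

def confidence_interval(r_bar, var_obs, k, z=1.96):
    """95% confidence interval of the averaged corrected effect size,
    using the standard error of the mean of k effect sizes."""
    se = math.sqrt(var_obs / k)
    return r_bar - z * se, r_bar + z * se

def credibility_interval(r_bar, var_obs, var_err, z=1.645):
    """90% credibility interval based on the estimated variance of
    population effect sizes (observed minus sampling-error variance)."""
    var_rho = max(var_obs - var_err, 0.0)
    half = z * math.sqrt(var_rho)
    return r_bar - half, r_bar + half

# Hypothetical summary statistics from a small meta-analysis
lo, hi = confidence_interval(0.28, 0.0014, 3)
```

A confidence interval excluding zero indicates a significant averaged effect; a wide credibility interval signals heterogeneity and a search for moderators.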
More comprehensive and technical expositions are given in Hunter and Schmidt's (1990) and Hedges and Olkin's (1985) classic texts. There are numerous freeware and commercial computer software applications for conducting meta-analytic computations; I will highlight just a few here. The Metaquick programme (Stauffer, 1996) is a simple freeware program which is easy to use, conducts psychometric meta-analysis using Pearson's r and Cohen's d, and has facilities to correct for sampling error, measurement error, and range restriction using the Hunter and Schmidt (1990) random effects method. Schwartzer (1995) has produced a very popular and versatile meta-analysis program which is free to use and permits the conduct of meta-analyses using the algorithms advocated by Rosenthal and Rubin (1982), Glass (1976), and Hunter and Schmidt (1990). The most comprehensive software package available for the conduct of meta-analysis is that produced by Borenstein (2000), appropriately titled Comprehensive Meta-Analysis. This software includes a state-of-the-art interactive Windows interface and advanced features such as forest plots and a plethora of fixed and random effects models. Further information on the available software for meta-analysis is available from Shadish's (2005) website at the University of California, Merced.

References

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179-211.

Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior. New Jersey: Prentice Hall.

Armitage, C. J., & Conner, M. (2001). Efficacy of the theory of planned behaviour: A meta-analytic review. British Journal of Social Psychology, 40, 471-499.

Beedie, C. J., Terry, P. C., & Lane, A. M. (2000). The Profile of Mood States and athletic performance: Two meta-analyses. Journal of Applied Sport Psychology, 12, 49-68.
International Review of Sport and Exercise Psychology Vol. 1, No. 1, March 2008, 79103 Self-determination Theory and the psychology of exercise Martin Hagger a * and Nikos Chatzisarantis b a School of
More informationAn Empirical Assessment of Meta-Analytic Practice
Review of General Psychology 2009 American Psychological Association 2009, Vol. 13, No. 2, 101 115 1089-2680/09/$12.00 DOI: 10.1037/a0015107 An Empirical Assessment of Meta-Analytic Practice Nathan F.
More informationApplying Evidence-Based Practice with Meta-Analysis
Applying Evidence-Based Practice with Meta-Analysis Mike W.-L. Cheung, PhD 1 1 March 2018 1 Department of Psychology, National University of Singapore (NUS) 1 A little bit background about me (1) PhD:
More informationResults. NeuRA Mindfulness and acceptance therapies August 2018
Introduction involve intentional and non-judgmental focus of one's attention on emotions, thoughts and sensations that are occurring in the present moment. The aim is to open awareness to present experiences,
More informationSTATISTICS AND RESEARCH DESIGN
Statistics 1 STATISTICS AND RESEARCH DESIGN These are subjects that are frequently confused. Both subjects often evoke student anxiety and avoidance. To further complicate matters, both areas appear have
More informationAssessing publication bias in genetic association studies: evidence from a recent meta-analysis
Psychiatry Research 129 (2004) 39 44 www.elsevier.com/locate/psychres Assessing publication bias in genetic association studies: evidence from a recent meta-analysis Marcus R. Munafò a, *, Taane G. Clark
More informationStatistical Methods For Assessing Measurement Error (Reliability) in Variables Relevant to Sports Medicine
REVIEW ARTICLE Sports Med 1998 Oct; 26 (4): 217-238 0112-1642/98/0010-0217/$11.00/0 Adis International Limited. All rights reserved. Statistical Methods For Assessing Measurement Error (Reliability) in
More informationMeta-Analysis: A Gentle Introduction to Research Synthesis
Meta-Analysis: A Gentle Introduction to Research Synthesis Jeff Kromrey Lunch and Learn 27 October 2014 Discussion Outline Overview Types of research questions Literature search and retrieval Coding and
More information04/12/2014. Research Methods in Psychology. Chapter 6: Independent Groups Designs. What is your ideas? Testing
Research Methods in Psychology Chapter 6: Independent Groups Designs 1 Why Psychologists Conduct Experiments? What is your ideas? 2 Why Psychologists Conduct Experiments? Testing Hypotheses derived from
More informationSEMINAR ON SERVICE MARKETING
SEMINAR ON SERVICE MARKETING Tracy Mary - Nancy LOGO John O. Summers Indiana University Guidelines for Conducting Research and Publishing in Marketing: From Conceptualization through the Review Process
More informationResults. NeuRA Hypnosis June 2016
Introduction may be experienced as an altered state of consciousness or as a state of relaxation. There is no agreed framework for administering hypnosis, but the procedure often involves induction (such
More informationUnderstanding Tourist Environmental Behavior An Application of the Theories on Reasoned Action Approach
University of Massachusetts Amherst ScholarWorks@UMass Amherst Tourism Travel and Research Association: Advancing Tourism Research Globally 2012 ttra International Conference Understanding Tourist Environmental
More informationReadings: Textbook readings: OpenStax - Chapters 1 11 Online readings: Appendix D, E & F Plous Chapters 10, 11, 12 and 14
Readings: Textbook readings: OpenStax - Chapters 1 11 Online readings: Appendix D, E & F Plous Chapters 10, 11, 12 and 14 Still important ideas Contrast the measurement of observable actions (and/or characteristics)
More informationMethodological Issues in Measuring the Development of Character
Methodological Issues in Measuring the Development of Character Noel A. Card Department of Human Development and Family Studies College of Liberal Arts and Sciences Supported by a grant from the John Templeton
More informationINVESTIGATING FIT WITH THE RASCH MODEL. Benjamin Wright and Ronald Mead (1979?) Most disturbances in the measurement process can be considered a form
INVESTIGATING FIT WITH THE RASCH MODEL Benjamin Wright and Ronald Mead (1979?) Most disturbances in the measurement process can be considered a form of multidimensionality. The settings in which measurement
More informationGRADE. Grading of Recommendations Assessment, Development and Evaluation. British Association of Dermatologists April 2018
GRADE Grading of Recommendations Assessment, Development and Evaluation British Association of Dermatologists April 2018 Previous grading system Level of evidence Strength of recommendation Level of evidence
More informationMood and Anxiety Scores Predict Winning and Losing Performances in Tennis
Mood and Anxiety Scores Predict Winning and Losing Performances in Tennis Peter C. Terry (terryp@usq.edu.au) Department of Psychology University of Southern Queensland, Toowoomba QLD 43 Australia Angus
More informationWhat is Psychology? chapter 1
What is Psychology? chapter 1 Overview! The science of psychology! What psychologists do! Critical and scientific thinking! Correlational studies! The experiment! Evaluating findings What is psychology?
More informationAnimal-assisted therapy
Introduction Animal-assisted interventions use trained animals to help improve physical, mental and social functions in people with schizophrenia. It is a goal-directed intervention in which an animal
More informationCHAMP: CHecklist for the Appraisal of Moderators and Predictors
CHAMP - Page 1 of 13 CHAMP: CHecklist for the Appraisal of Moderators and Predictors About the checklist In this document, a CHecklist for the Appraisal of Moderators and Predictors (CHAMP) is presented.
More informationChapter 2: Research Methods in I/O Psychology Research a formal process by which knowledge is produced and understood Generalizability the extent to
Chapter 2: Research Methods in I/O Psychology Research a formal process by which knowledge is produced and understood Generalizability the extent to which conclusions drawn from one research study spread
More informationUnderstanding the Role and Methods of Meta- Analysis in IS Research
Communications of the Association for Information Systems Volume 16 Article 32 October 2005 Understanding the Role and Methods of Meta- Analysis in IS Research William R. King University of Pittsburgh,
More informationGeorgina Salas. Topics EDCI Intro to Research Dr. A.J. Herrera
Homework assignment topics 51-63 Georgina Salas Topics 51-63 EDCI Intro to Research 6300.62 Dr. A.J. Herrera Topic 51 1. Which average is usually reported when the standard deviation is reported? The mean
More informationCochrane Pregnancy and Childbirth Group Methodological Guidelines
Cochrane Pregnancy and Childbirth Group Methodological Guidelines [Prepared by Simon Gates: July 2009, updated July 2012] These guidelines are intended to aid quality and consistency across the reviews
More informationProblem solving therapy
Introduction People with severe mental illnesses such as schizophrenia may show impairments in problem-solving ability. Remediation interventions such as problem solving skills training can help people
More informationA Spreadsheet for Deriving a Confidence Interval, Mechanistic Inference and Clinical Inference from a P Value
SPORTSCIENCE Perspectives / Research Resources A Spreadsheet for Deriving a Confidence Interval, Mechanistic Inference and Clinical Inference from a P Value Will G Hopkins sportsci.org Sportscience 11,
More informationTraumatic brain injury
Introduction It is well established that traumatic brain injury increases the risk for a wide range of neuropsychiatric disturbances, however there is little consensus on whether it is a risk factor for
More informationCross-Cultural Meta-Analyses
Unit 2 Theoretical and Methodological Issues Subunit 2 Methodological Issues in Psychology and Culture Article 5 8-1-2003 Cross-Cultural Meta-Analyses Dianne A. van Hemert Tilburg University, The Netherlands,
More informationTeacher satisfaction: some practical implications for teacher professional development models
Teacher satisfaction: some practical implications for teacher professional development models Graça Maria dos Santos Seco Lecturer in the Institute of Education, Leiria Polytechnic, Portugal. Email: gracaseco@netvisao.pt;
More informationMeta Analysis. David R Urbach MD MSc Outcomes Research Course December 4, 2014
Meta Analysis David R Urbach MD MSc Outcomes Research Course December 4, 2014 Overview Definitions Identifying studies Appraising studies Quantitative synthesis Presentation of results Examining heterogeneity
More informationDescribe what is meant by a placebo Contrast the double-blind procedure with the single-blind procedure Review the structure for organizing a memo
Please note the page numbers listed for the Lind book may vary by a page or two depending on which version of the textbook you have. Readings: Lind 1 11 (with emphasis on chapters 5, 6, 7, 8, 9 10 & 11)
More informationStandards for the reporting of new Cochrane Intervention Reviews
Methodological Expectations of Cochrane Intervention Reviews (MECIR) Standards for the reporting of new Cochrane Intervention Reviews 24 September 2012 Preface The standards below summarize proposed attributes
More informationCritical Thinking Assessment at MCC. How are we doing?
Critical Thinking Assessment at MCC How are we doing? Prepared by Maura McCool, M.S. Office of Research, Evaluation and Assessment Metropolitan Community Colleges Fall 2003 1 General Education Assessment
More informationNeuRA Sleep disturbance April 2016
Introduction People with schizophrenia may show disturbances in the amount, or the quality of sleep they generally receive. Typically sleep follows a characteristic pattern of four stages, where stage
More informationA SAS Macro to Investigate Statistical Power in Meta-analysis Jin Liu, Fan Pan University of South Carolina Columbia
Paper 109 A SAS Macro to Investigate Statistical Power in Meta-analysis Jin Liu, Fan Pan University of South Carolina Columbia ABSTRACT Meta-analysis is a quantitative review method, which synthesizes
More informationFinal Exam: PSYC 300. Multiple Choice Items (1 point each)
Final Exam: PSYC 300 Multiple Choice Items (1 point each) 1. Which of the following is NOT one of the three fundamental features of science? a. empirical questions b. public knowledge c. mathematical equations
More informationAnswers to end of chapter questions
Answers to end of chapter questions Chapter 1 What are the three most important characteristics of QCA as a method of data analysis? QCA is (1) systematic, (2) flexible, and (3) it reduces data. What are
More informationSUPPLEMENTARY INFORMATION
Supplementary Statistics and Results This file contains supplementary statistical information and a discussion of the interpretation of the belief effect on the basis of additional data. We also present
More informationISC- GRADE XI HUMANITIES ( ) PSYCHOLOGY. Chapter 2- Methods of Psychology
ISC- GRADE XI HUMANITIES (2018-19) PSYCHOLOGY Chapter 2- Methods of Psychology OUTLINE OF THE CHAPTER (i) Scientific Methods in Psychology -observation, case study, surveys, psychological tests, experimentation
More informationReadings: Textbook readings: OpenStax - Chapters 1 13 (emphasis on Chapter 12) Online readings: Appendix D, E & F
Readings: Textbook readings: OpenStax - Chapters 1 13 (emphasis on Chapter 12) Online readings: Appendix D, E & F Plous Chapters 17 & 18 Chapter 17: Social Influences Chapter 18: Group Judgments and Decisions
More informationIntention to consent to living organ donation: an exploratory study. Christina Browne B.A. and Deirdre M. Desmond PhD
Intention to consent to living organ donation: an exploratory study Christina Browne B.A. and Deirdre M. Desmond PhD Department of Psychology, John Hume Building, National University of Ireland Maynooth,
More informationTHE USE OF MULTIVARIATE ANALYSIS IN DEVELOPMENT THEORY: A CRITIQUE OF THE APPROACH ADOPTED BY ADELMAN AND MORRIS A. C. RAYNER
THE USE OF MULTIVARIATE ANALYSIS IN DEVELOPMENT THEORY: A CRITIQUE OF THE APPROACH ADOPTED BY ADELMAN AND MORRIS A. C. RAYNER Introduction, 639. Factor analysis, 639. Discriminant analysis, 644. INTRODUCTION
More informationSTEP II Conceptualising a Research Design
STEP II Conceptualising a Research Design This operational step includes two chapters: Chapter 7: The research design Chapter 8: Selecting a study design CHAPTER 7 The Research Design In this chapter you
More information12/31/2016. PSY 512: Advanced Statistics for Psychological and Behavioral Research 2
PSY 512: Advanced Statistics for Psychological and Behavioral Research 2 Introduce moderated multiple regression Continuous predictor continuous predictor Continuous predictor categorical predictor Understand
More informationIntroduction to Applied Research in Economics
Introduction to Applied Research in Economics Dr. Kamiljon T. Akramov IFPRI, Washington, DC, USA Regional Training Course on Applied Econometric Analysis June 12-23, 2017, WIUT, Tashkent, Uzbekistan Why
More information