Testing the reliability and efficiency of the pilot Mixed Methods Appraisal Tool (MMAT) for systematic mixed studies review
Introduction
Interest in the concomitant review of qualitative, quantitative, and mixed methods studies, known as a mixed studies review (MSR), is growing (Grant and Booth, 2009), particularly in the health sciences (Pluye et al., 2009). MSRs address complex questions comprising qualitative and quantitative aspects. For example, in an MSR examining the question ‘What are the impacts of clinical information retrieval technology?’, types of impact were identified from the findings of qualitative studies, and the importance of positive impacts was then estimated using the results of quantitative studies (Pluye et al., 2005). This new form of literature review has the potential to provide a rich, detailed, and highly practical understanding of complex health interventions and programs, which can be more relevant to and useful for clinicians and decision-makers. For example, “examining the effectiveness of interventions to increase the uptake of breast-feeding [based on results of quantitative studies] benefits from examining reasons why people do and do not breastfeed, their perceptions of the advantages of not doing so, and obstacles to this practice [based on findings of qualitative research studies]” (Sheldon, 2005, p. 5).
In MSRs, reviewers apply mixed methods research to review the literature. The foundation of mixed methods research is to combine the strengths of qualitative and quantitative methods by integrating the in-depth descriptions of complex phenomena obtained by qualitative methods with the statistical generalizability of quantitative methods. The conceptualization of mixed methods research is new, and no standard validated critical appraisal tool for mixed methods research exists (Creswell and Plano Clark, 2007, O’Cathain et al., 2008, O’Cathain, 2010), whereas multiple standard tools exist for quantitative methods, and a few validated tools exist for qualitative methods (Crowe and Sheppard, 2011, EQUATOR, 2011, Simera et al., 2010).
When conducting systematic MSRs, reviewers identify, select, appraise, and synthesize relevant qualitative, quantitative, and mixed methods studies, and as with all systematic reviews, the appraisal of the methodological quality of included studies is crucial. The content validation of an initial version of a critical appraisal tool for systematic MSRs, called the Mixed Methods Appraisal Tool (MMAT), has previously been reported in the International Journal of Nursing Studies (Pluye et al., 2009). The MMAT is unique in that no other appraisal tool for systematic MSRs considers all study designs, including mixed methods research designs (Crowe and Sheppard, 2011, Simera et al., 2010). The purpose of the present paper is to describe the reliability and efficiency of the pilot MMAT.
Background
Pluye et al. (2009) reported a qualitative thematic data analysis of the quality appraisal procedures used in 17 systematic health-related MSRs to determine the criteria without which a judgment on quality cannot be made for qualitative, quantitative, and mixed methods studies. Based on this analysis, an initial 15-criteria MMAT was proposed. The purpose of this tool was to allow for the concurrent appraisal of studies employing the most common methodologies and methods, with a set of a few
Methods
The Center for Participatory Research at McGill (PRAM) conducted a review on the benefits of participatory research (PR) in the health sciences. PR is a collaborative approach to research involving both researchers and those affected by the research throughout the research process (Macaulay et al., 1999). Given the heterogeneity of methods used across PR projects, this review presented an opportunity to test the MMAT.
Results
On average, it took approximately 14 min to appraise a study (range: 4–40 min). The consistency of the global ‘quality score’ between reviewers (ICC) was 0.72 pre- and 0.94 post-discussion (Table 2).
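The paper does not state which ICC form was computed, so the sketch below is illustrative only: a minimal plain-Python implementation of the two-way random-effects ICC for absolute agreement, ICC(2,1) in Shrout and Fleiss's notation, applied to hypothetical global quality scores from two reviewers (the score values are invented for the example, not taken from the study).

```python
# Minimal sketch: two-way random-effects ICC for absolute agreement, ICC(2,1).
# The MMAT paper does not specify the ICC form used, so this is illustrative;
# the score data below are hypothetical.

def icc2_1(ratings):
    """ratings: one list per appraised study, one score per rater."""
    n = len(ratings)      # number of studies appraised
    k = len(ratings[0])   # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    study_means = [sum(row) / k for row in ratings]
    rater_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_study = k * sum((m - grand) ** 2 for m in study_means)
    ss_rater = n * sum((m - grand) ** 2 for m in rater_means)
    ss_error = ss_total - ss_study - ss_rater
    ms_study = ss_study / (n - 1)
    ms_rater = ss_rater / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_study - ms_error) / (
        ms_study + (k - 1) * ms_error + k * (ms_rater - ms_error) / n
    )

# Hypothetical global quality scores (%) from two reviewers:
scores = [[25, 25], [50, 75], [75, 75], [100, 100], [50, 50]]
print(round(icc2_1(scores), 2))  # → 0.92
```

High values indicate that most score variance lies between studies rather than between raters, which is the sense in which the reported 0.72 (pre-discussion) and 0.94 (post-discussion) values reflect consistency.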
Inter-rater reliability pre-discussion: With respect to 17 of the 19 criteria, there was almost perfect agreement for 7 criteria, substantial agreement for 1 criterion, moderate agreement for 3 criteria, fair agreement for 4 criteria, slight agreement for 1 criterion, and no agreement for only 1 criterion.
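The agreement labels above (slight, fair, moderate, substantial, almost perfect) follow the Landis and Koch (1977) benchmarks for the kappa statistic. As a sketch, Cohen's kappa for two raters on a single criterion can be computed and labelled as follows; the yes/no ratings are hypothetical, not data from the study.

```python
# Sketch: Cohen's kappa for two raters on one criterion, labelled with the
# Landis and Koch (1977) benchmarks. The ratings below are hypothetical.

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    p_expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)

def landis_koch(kappa):
    """Map a kappa value to the Landis and Koch agreement bands."""
    if kappa <= 0:
        return "no agreement"
    for bound, label in [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
                         (0.80, "substantial"), (1.00, "almost perfect")]:
        if kappa <= bound:
            return label

a = ["yes", "yes", "no", "no"]
b = ["yes", "no", "no", "no"]
k = cohens_kappa(a, b)
print(round(k, 2), landis_koch(k))  # → 0.5 moderate
```

Kappa corrects raw percent agreement for agreement expected by chance, which is why it is preferred over simple concordance counts for criterion-level reliability.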
Discussion
Results suggest the pilot MMAT was easy to use. Inter-rater reliability scores ranged from moderately reproducible to perfect agreement. After discussion, the raters were able to reach a consensus on 19 (76%) of the 25 pre-discussion disagreements. These disagreements were, for the most part, resolved by referring to the MMAT tutorial.
The sets of criteria with the most discordant results pre-discussion were the ‘non-randomized’ (32%) and the ‘qualitative’ (48%) sets. These differences may be
Conclusion
Our results suggest the MMAT is promising. Reliability is a key property of a critical appraisal tool, and efficiency is important from a reviewer's perspective. In 2010, the pilot MMAT was used and discussed in four 90-min workshops that suggested further refinement of criteria. These workshops involved diverse audiences such as graduate students enrolled in a mixed methods research course, researchers and research professionals with experience in qualitative, quantitative, and mixed
Acknowledgments
Romina Pace holds a Summer Research Bursary from the Faculty of Medicine, McGill University. Pierre Pluye holds a New Investigator Award from the Canadian Institutes of Health Research (CIHR). The present work is supported by CIHR and the Center for Participatory Research at McGill (PRAM).
Contributions
Pierre Pluye, Marie-Pierre Gagnon, Frances Griffiths, and Janique Johnson-Lafleur proposed an initial version of MMAT criteria. Romina Pace and Pierre Pluye led the test of the pilot MMAT.
References (32)
- et al. A review of critical appraisal tools show they lack rigor: alternative tool structure is proposed. Journal of Clinical Epidemiology (2011).
- et al. Impact of clinical information-retrieval technology on physicians: a literature review of quantitative, qualitative and mixed methods studies. International Journal of Medical Informatics (2005).
- et al. A scoring system for appraising mixed methods research, and concomitantly appraising qualitative, quantitative and mixed methods primary studies in Mixed Studies Reviews. International Journal of Nursing Studies (2009).
- et al. Comparability work and the management of difference in research synthesis studies. Social Science & Medicine (2007).
- Integrating quantitative and qualitative research: how is it done? Qualitative Research (2006).
- et al. ‘Clear as Mud’: toward greater clarity in generic qualitative research. International Journal of Qualitative Methods (2003).
- et al. Reliability and Validity Assessment (1979).
- et al. Evaluative criteria for qualitative research in health care: controversies and recommendations. Annals of Family Medicine (2008).
- Critical Appraisal of Qualitative Research (Draft Chapter – in peer review with Cochrane Handbook Editors) (2010).
- et al. Designing and Conducting Mixed Methods Research (2007).
- Ten Questions to Help You Make Sense of Qualitative Research.
- The EQUATOR Network Website: The Resource Centre for Good Reporting of Health Research Studies.
- Reliability Analysis: Statnotes, from North Carolina State University, Public Administration Program.
- A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal.
- A comparative analysis of three online appraisal instruments’ ability to assess validity in qualitative research. Qualitative Health Research.
- The measurement of observer agreement for categorical data. Biometrics.