About Meta-Evaluation

Meta-evaluation involves assessing the methodological rigour of evaluations, as well as aggregating data from multiple existing evaluations for comprehensive assessment (Cooksy & Caracelli, 2009; Gough & Martin, 2012). The SEIL project aims to (1) assess the methodological rigour of the evaluation approaches currently used at each of the five participating sites, and (2) aggregate the evaluation data arising from existing processes, or from processes introduced as part of the research, in order to identify a set of “common” evaluation impact indicators likely to be universally relevant to social enterprise evaluation.

Meta-evaluations explore collective lessons drawn from individual evaluations (Henry, 2016). SEIL will focus on “realist research questions” (Wong et al., 2013) by exploring (1) which evaluation methodology features commonly lead to successful evaluation, (2) what makes these common features important, and (3) the contextual conditions generally present in successful evaluation. Consistent with the meta-evaluation approaches advocated by Stufflebeam (2001) and Tingle, DeSimone, and Covington (2003), SEIL will collate meta-evaluation data on evaluation utility, the strengths and limitations of evaluation approaches, and systematic efficacy, and will use these data to (1) revise and improve individual evaluation frameworks and procedures, and (2) develop a set of common evaluation indicators. This set of common indicators will inform an evaluation report and will also be presented as an online data dashboard prototype, through which social enterprises can share evaluation impacts.

As recommended by Pawson et al. (2004), SEIL will involve discussion and consultation with key stakeholders, in the form of group meetings and workshops comprising members of participating organisations and funding bodies, as well as opportunities for peer exchange throughout the project life-cycle.

References

Cooksy, L. J., & Caracelli, V. J. (2009). Metaevaluation in practice: Selection and application of criteria. Journal of MultiDisciplinary Evaluation, 6(11).

Gough, D., & Martin, S. (2012). Meta-evaluation of the impacts and legacy of the London 2012 Olympic Games and Paralympic Games: Developing methods paper. Retrieved from https://www.gov.uk/government/publications/meta-evaluation-of-the-impacts-and-legacy-of-the-london-2012-olympic-games-and-paralympic-games-developing-methods-paper

Henry, I. (2016). The meta-evaluation of the sports participation impact and legacy of the London 2012 Games: Methodological implications. Journal of Global Sport Management, 1(1-2), 19-33.

Pawson, R., Greenhalgh, T., Harvey, G., & Walshe, K. (2004). Realist synthesis: An introduction. Manchester: ESRC Research Methods Programme, University of Manchester.

Stufflebeam, D. L. (2001). The metaevaluation imperative. American Journal of Evaluation, 22(2), 183-209.

Tingle, L. R., DeSimone, M., & Covington, B. (2003). A meta-evaluation of 11 school-based smoking prevention programs. Journal of School Health, 73(2), 64-67.

Wong, G., Greenhalgh, T., Westhorp, G., Buckingham, J., & Pawson, R. (2013). RAMESES publication standards: Realist syntheses. BMC Medicine, 11(1), 21.

 
