RealWorld Evaluation addresses the challenges of conducting evaluations when there is not enough money, time, or data -- but methodologically sound findings are still needed -- and when politics makes the evaluator's job harder. These all-too-common circumstances affect many evaluations in both developing and developed countries. The authors provide help to evaluation practitioners who design and conduct evaluations; to users of evaluation, including the agencies and policy-makers who commission evaluations and use their results; and to evaluation students learning how to apply evaluation theories and approaches in actual contexts. Adapting a range of methods to real-world situations, RealWorld Evaluation draws on quantitative, qualitative, and mixed-methods approaches. Design, methods, cultural sensitivity, validity, credibility, and reporting are among the many topics the book addresses. Examples drawn from evaluations in education, social services, microcredit, agriculture, water projects, and other areas -- in the US and in developing countries -- show how evaluations can succeed by adapting to different types of exigencies.
Reviews
"I was quite impressed with how this book balanced the technical aspects with practical advice for managing budget, time and political constraints. The use of case study examples and checklists/worksheets helped to make the book much more "grounded in practice" compared to other texts. I like to think of myself as an experienced, well-read evaluator, but I must confess that I learned quite a lot from this book!" -- Scott Bayley 20061214 "The authors provide a useful checklist of questions that can be used to shape dialogue between commissioners of evaluation and evaluators when designing the evaluation, and developing a joint understanding of an agreement about what can and cannot be achieved by the evaluation and the trade-offs that will be involved." -- Sue Funnell Evaluation Journal of Australasia 20080919
Language: English
Target audience: For upper secondary school and university study
Dimensions: height 254 mm, width 178 mm
ISBN-13: 978-1-4129-0946-4
Jim Rugh has had 41 years of professional involvement in rural community development in Africa, Asia, and Appalachia. He has specialized in evaluation for 25 years -- the past 10 as head of Design, Monitoring and Evaluation for CARE International, a large nongovernmental organization (NGO). His particular skills include promoting strategies for enhanced evaluation capacity throughout this worldwide organization. He is a recognized leader in evaluation among colleagues in the international NGO community, including InterAction. He has been an active member of the American Evaluation Association since 1986, currently serving on its Nominations and Election Committee, and he was a founding member of the Atlanta-area Evaluation Association. He has experience in promoting community development and in evaluating, and facilitating self-evaluation by participants in, such programs. He has provided training for and/or evaluated many different international NGOs. He brings a "big picture" perspective, including familiarity with a wide variety of community groups and assistance agencies in many countries, along with an eye for detail and a respect for inclusiveness and the participatory process.

Michael Bamberger has almost 40 years of experience in development evaluation, including a decade working with nongovernmental organizations in Latin America, almost 25 years working on evaluation with the World Bank in most of the social and economic sectors and in most regions of the world, and 10 years as an independent evaluation consultant, including programs with 10 United Nations agencies and with multilateral and bilateral development agencies. He has published three books and several monographs and handbooks on development evaluation, as well as numerous articles in professional journals. He has been active for 20 years with the American Evaluation Association, serving on the Board and as Chair of the International Committee. He has served on the editorial advisory boards of New Directions for Evaluation, the Journal of Development Effectiveness, the Journal of Mixed Methods Research, and the American Journal of Evaluation, and he is a regular reviewer for several professional evaluation journals. He has taught program evaluation in more than 30 countries in Africa, Latin America, Asia, and the Middle East. Since 2002 he has been on the faculty of the International Program for Development Evaluation Training (IPDET) in Ottawa, Ontario, Canada, and since 2001 he has also lectured at the Foundation for Advanced Studies on International Development (FASID) in Tokyo.

Linda Mabry is a faculty member at Washington State University specializing in program evaluation, student assessment, and research and evaluation methodology. She currently serves as president of the Oregon Program Evaluation Network and on the editorial board of Studies in Educational Evaluation. She has served in a variety of leadership positions for the American Evaluation Association, including the Board of Directors, chair of the Task Force on Educational Accountability, and chair of the Theories of Evaluation topical interest group. She has also served on the Board of Trustees of the National Center for the Improvement of Educational Assessments and on the Performance Assessment Review Board of New York. She has conducted evaluations for the U.S. Department of Education, the National Science Foundation, the National Endowment for the Arts, the Jacob Javits Foundation, Hewlett-Packard Corporation, Ameritech Corporation, AT&T Comcast Corporation, the New York City Fund for Public Education, the Chicago Arts Partnerships in Education, the Chicago Teachers Academy of Mathematics and Science, and a variety of university, state, and school agencies. She has published in a number of scholarly journals and has written several books, including Evaluation and the Postmodern Dilemma (1997) and Portfolios Plus: A Critical Guide to Performance Assessment (1999).
Introduction

PART I. OVERVIEW: REALWORLD EVALUATION

Chapter 1: RealWorld Evaluation and the Contexts in Which It Is Used
- Welcome to RealWorld Evaluation
- The RealWorld Evaluation context
- The four types of constraints addressed by the RealWorld approach
- The RealWorld Evaluation approach to evaluation challenges
- Comparing the RealWorld Evaluation context and issues in developing and developed countries
- Who uses RealWorld Evaluation, for what purposes, and when?
- Summary
- Further reading

PART II. THE SEVEN STEPS OF THE REALWORLD EVALUATION APPROACH

Chapter 2: First clarify the purpose: Scoping the evaluation
- Stakeholder expectations of impact evaluations
- Understanding information needs
- Developing the program theory model
- Identifying the constraints to be addressed by the RWE and determining the appropriate evaluation design
- Developing the Terms of Reference for the evaluation
- Summary
- Further reading

Chapter 3: Not enough money: Addressing budget constraints
- Simplifying the evaluation design
- Clarifying client information needs
- Using existing data
- Reducing costs by reducing sample size
- Reducing costs of data collection and analysis
- Common threats to validity relating to budget constraints
- Summary
- Further reading

Chapter 4: Not enough time: Addressing scheduling and other time constraints
- Similarities and differences between time and budget constraints
- Simplifying the evaluation design
- Clarifying client information needs
- Using existing documentary data
- Reducing sample size
- Rapid data collection methods
- Reducing time pressures on outside consultants
- Hiring more resource people
- Building outcome indicators into project records
- Data collection and analysis technology
- Common threats to adequacy and validity relating to time constraints
- Summary
- Further reading

Chapter 5: Critical information is missing or difficult to collect: Addressing data constraints
- Data issues facing RealWorld evaluators
- Reconstructing baseline data
- Special issues in reconstructing comparison groups
- Collecting data on sensitive topics or from groups who are difficult to reach
- Common threats to adequacy and validity relating to data constraints
- Summary
- Further reading

Chapter 6: Reconciling different priorities and perspectives: Addressing political influences
- Values, ethics, and politics
- Political issues at the outset of an evaluation
- Political issues during the conduct of an evaluation
- Political issues in evaluation reporting and use
- RealWorld strategies for addressing political constraints
- Summary
- Further reading

Chapter 7: Strengthening the evaluation design and the validity of the conclusions
- Validity in evaluation
- Factors affecting adequacy and validity
- Assessing the adequacy of quantitative evaluation designs
- Strengthening validity in quantitative evaluations
- Assessing the adequacy of qualitative evaluation designs
- Strengthening validity in qualitative evaluations
- Points during the RWE evaluation cycle when corrective measures can be taken
- Summary
- Further reading

Chapter 8: Making it useful: Helping clients and other stakeholders utilize the evaluation
- The underutilization of evaluation studies
- The importance of the RealWorld Evaluation scoping phase for utilization
- Formative evaluation strategies
- Communicating with clients throughout the evaluation
- Evaluation capacity building
- Communicating findings
- Developing a follow-up action plan
- Summary
- Further reading

PART III. A REVIEW OF EVALUATION METHODS AND APPROACHES AND THEIR APPLICATIONS IN REALWORLD EVALUATION

Chapter 9: Applications of program theory in RealWorld evaluation
- Defining program theory evaluation
- Applications of program theory in evaluation
- Constructing program theory models
- Logical framework analysis and results chains
- Program theory evaluation and causality
- Summary
- Further reading

Chapter 10: The most widely used RealWorld quantitative evaluation designs
- Randomized and quasi-experimental evaluation designs
- The most widely used quantitative designs
- Ways to strengthen quantitative RWE designs
- Seven quasi-experimental designs that cover most RWE scenarios
- Summary
- Further reading

Chapter 11: Quantitative evaluation methods
- The quantitative and qualitative traditions in evaluation research
- Quantitative methodologies
- Applications of quantitative methodologies in program evaluation
- Quantitative methods for data collection
- The management of data collection for quantitative studies
- Data analysis
- Summary
- Further reading

Chapter 12: Qualitative evaluation methods
- Qualitative methodology and tradition
- Qualitative methodology: An overview
- Different reasons for using different methodologies
- Qualitative data collection
- Qualitative data analysis
- Summary
- Further reading

Chapter 13: Mixed-method evaluation
- The mixed-method approach
- Mixed-method strategies
- Implementing a mixed-method design
- Summary
- Further reading

Chapter 14: Sampling for RealWorld Evaluation
- The importance of sampling for RWE
- Purposive sampling
- Probability (random) sampling
- Using power analysis and effect size for estimating the appropriate sample size
- Determining the size of the sample
- The contribution of meta-analysis
- Sampling issues for mixed-method evaluations
- Sampling issues for RWE
- Summary
- Further reading

PART IV. PULLING IT ALL TOGETHER

Chapter 15: Learning together: Building capacity for RealWorld Evaluation
- Defining evaluation capacity building
- RealWorld Evaluation capacity building
- Designing and delivering evaluation capacity building
- Summary
- Further reading

Chapter 16: Bringing it all together: Applying RealWorld Evaluation approaches to each stage of the evaluation process
- Scoping the evaluation
- Choosing the best design from the available options
- Determining appropriate methodologies
- Ways to strengthen RWE designs
- Staffing the evaluation economically
- Collecting data efficiently
- Analyzing the data efficiently
- Reporting findings efficiently and effectively
- Helping clients use the findings well

Appendices
Glossary of terms
References