Synopsis
This book addresses the challenges of conducting program evaluations in real-world contexts where evaluators and their clients face budget and time constraints and where critical data may be missing. The book is organized around a seven-step model developed by the authors, which has been tested and refined in workshops and in practice. Vignettes and case studies―representing evaluations from a variety of geographic regions and sectors―demonstrate adaptive possibilities ranging from small projects with budgets of a few thousand dollars to large-scale, long-term evaluations of complex programs. The text incorporates quantitative, qualitative, and mixed-method designs, and this Second Edition reflects important developments in the field since the publication of the First Edition.
"This book represents a significant achievement. The authors have succeeded in creating a book that can be used in a wide variety of locations and by a large community of evaluation practitioners."―Michael D. Niles, Missouri Western State University
"This book is exceptional and unique in the way that it combines foundational knowledge from social sciences with theory and methods that are specific to evaluation."―Gary Miron, Western Michigan University
"The book represents a very good and timely contribution worth having on an evaluator's shelf, especially if you work in the international development arena."―Thomaz Chianca, independent evaluation consultant, Rio de Janeiro, Brazil
About the Authors
Michael Bamberger has been involved in development evaluation for fifty years. He began in Latin America, where he worked in urban community development and evaluation for over a decade and became interested in the coping strategies of low-income communities: how they were affected by, and how they influenced, development efforts. Most evaluation research fails to capture these survival strategies, frequently underestimating the resilience of these communities―particularly women and female-headed households. During 20 years with the World Bank he worked as monitoring and evaluation advisor for the Urban Development Department, evaluation training coordinator with the Economic Development Department, and Senior Sociologist in the Gender and Development Department. Since retiring from the Bank in 2001 he has worked as a development evaluation consultant with more than 10 UN agencies, as well as development banks, bilateral development agencies, NGOs, and foundations. Since 2001 he has been on the faculty of the International Program for Development Evaluation Training (IPDET). Recent publications include: (with Jim Rugh and Linda Mabry) RealWorld Evaluation: Working under budget, time, data and political constraints (2012 second edition); (with Marco Segone) How to design and manage equity focused evaluations (2011); Engendering Monitoring and Evaluation (2013); (with Linda Raftree) Emerging opportunities: Monitoring and evaluation in a tech-enabled world (2014); and (with Marco Segone and Shravanti Reddy) How to integrate gender equality and social equity in national evaluation policies and systems (2014).
Jim Rugh has had 41 years of professional involvement in rural community development in Africa, Asia, and Appalachia. He has specialized in evaluation for 25 years―the past 10 years as head of Design, Monitoring and Evaluation for CARE International, a large nongovernmental organization (NGO). His particular skills include promoting strategies for enhanced evaluation capacity throughout this worldwide organization. He is a recognized leader in evaluation among colleagues in the international NGO community, including InterAction. He has been an active member of the American Evaluation Association since 1986, currently serving on the Nominations and Election Committee, and was a founding member of the Atlanta-area Evaluation Association. He has experience in promoting community development, evaluating such programs, and facilitating self-evaluation by their participants. He has provided training for and/or evaluated many different international NGOs. He brings a "big picture" perspective, including familiarity with a wide variety of community groups and assistance agencies in many countries, plus an eye for detail and a respect for inclusiveness and the participatory process.
Linda Mabry is a faculty member at Washington State University specializing in program evaluation, student assessment, and research and evaluation methodology. She currently serves as president of the Oregon Program Evaluation Network and on the editorial board for Studies in Educational Evaluation. She has served in a variety of leadership positions for the American Evaluation Association, including the Board of Directors, chair of the Task Force on Educational Accountability, and chair of the Theories of Evaluation topical interest group. She has also served on the Board of Trustees for the National Center for the Improvement of Educational Assessments and on the Performance Assessment Review Board of New York. She has conducted evaluations for the U.S. Department of Education, National Science Foundation, National Endowment for the Arts, the Jacob Javits Foundation, Hewlett-Packard Corporation, Ameritech Corporation, ATT-Comcast Corporation, the New York City Fund for Public Education, the Chicago Arts Partnerships in Education, the Chicago Teachers Academy of Mathematics and Science, and a variety of university, state, and school agencies. She has published in a number of scholarly journals and written several books, including Evaluation and the Postmodern Dilemma (1997) and Portfolios Plus: A Critical Guide to Performance Assessment (1999).