Session Report
Manya Deshpande
The online International Summer School Program on “Data, Monitoring and Evaluation” is a two-month immersive, hands-on online certificate training course organized by IMPRI Impact and Policy Research Institute, New Delhi. Day 8 of the program began with a session on “Introduction to Impact Evaluation Methods” by Mr. Rakesh Pandey.
About the Data, Monitoring and Evaluation program
The training program was conducted by an expert group of academicians that included
- Prof Nilanjan Banik, Professor and Program Director (BA, Economics and Finance) at Mahindra University, Hyderabad
- Prof Mukul Asher, Former Professor, Lee Kuan Yew School of Public Policy, National University of Singapore; Visiting Distinguished Professor at IMPRI
- Dr Soumyadip Chattopadhyay, Associate Professor, Economics, Visva-Bharati, Santiniketan; Visiting Senior Fellow at IMPRI.
Other notable experts included
- Dr Devender Singh, Global Studies Programme, University of Freiburg, Germany; Visiting Senior Fellow, IMPRI
- Prof Gummadi Sridevi, Professor, School of Economics, University of Hyderabad; Visiting Professor, IMPRI
- Dr Amar Jesani, Independent Researcher and Teacher (Bioethics and Public Health); Editor, Indian Journal of Medical Ethics
- Dr Radhika Pandey, Senior Fellow, National Institute of Public Finance and Policy (NIPFP), New Delhi
- Prof Vibhuti Patel, Visiting Distinguished Professor, IMPRI
- Dr Ismail Haque, Fellow, ICRIER, and Visiting Fellow at IMPRI
- Mr V. Ramakrishnan, Managing Director, Organisation Development, Singapore
- Prof Vina Vaswani, Director, Centre for Ethics, and Professor, Department of Forensic Medicine and Toxicology, Yenepoya (Deemed to be) University, Mangalore
- Mr Rakesh Pandey, Assistant Policy Researcher, Doctoral Scholar, Pardee RAND Graduate School, RAND Corporation, USA
- Prof Nalin Bharti, Professor, Department of Humanities and Social Sciences, Indian Institute of Technology (IIT), Patna; Visiting Senior Fellow, IMPRI.
The Conveners for the course were Dr Simi Mehta, CEO & Editorial Director, IMPRI, and Dr Arjun Kumar, Director, IMPRI.
Fiza Mahajan, a researcher at IMPRI, opened the event by introducing the distinguished panelists and welcoming the speakers and attendees to the program.
The session was led by Mr. Rakesh Pandey, who summarized its objective as ‘an introduction to widely used quantitative methods in impact evaluation research’. He laid the groundwork for the discussion by answering the question, ‘Why evaluate?’. He then highlighted the fundamental aim of impact evaluation: establishing a ‘causal relationship’ between policy and outcome variables. He also shed light on the challenges of such evaluations, given the difficulty of inferring causality in the presence of selection bias and the assumptions each method requires.
Delving into the crux of the subject, the identification strategy, he defined it as an approach that leads to the ‘creation of a natural experiment’. He then elaborated on each of the three major identification strategies: Randomized Controlled Trials (RCTs), Difference-in-Differences (D-i-D) and Regression Discontinuity Design (RDD).
Impact Evaluation Methods and More
He explained that RCTs are regarded as the gold standard because randomization removes selection bias. He also brought attention to their main threats, imperfect compliance and attrition, and how they can surface in an evaluation. Alongside this, he offered solutions to these challenges: the two-stage least squares (2SLS) method and the Wald estimate for imperfect compliance, and clustered randomization for attrition. Analysis of RCT data is essential to building a sound evaluation, and this can be done through the Intention-to-Treat (ITT) estimate.
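A minimal sketch of the ITT and Wald/2SLS ideas on simulated data is given below; the data-generating process, compliance rate and variable names are illustrative assumptions, not figures from the session.

```python
import numpy as np

# Illustrative simulation: random assignment with imperfect compliance.
rng = np.random.default_rng(0)
n = 10_000
assigned = rng.integers(0, 2, n)               # random assignment (the instrument)
complier = rng.random(n) < 0.7                 # assume 70% of units comply
treated = assigned * complier                  # treatment actually received
outcome = 2.0 * treated + rng.normal(0, 1, n)  # assumed true effect = 2.0

# Intention-to-Treat (ITT): compare outcomes by assignment, ignoring compliance.
itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()

# Wald estimate: ITT scaled by the difference in take-up across assignment arms.
# With a single binary instrument this coincides with the 2SLS estimate.
take_up_diff = treated[assigned == 1].mean() - treated[assigned == 0].mean()
wald = itt / take_up_diff

print(f"ITT estimate:       {itt:.3f}")
print(f"Wald/2SLS estimate: {wald:.3f}")   # should be close to 2.0
```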
Next, Mr. Pandey gave an overview of the quasi-experimental Difference-in-Differences (D-i-D) method. D-i-D settings come in three types: multiple time periods, variation in treatment timing, or a combination of both. The basic idea is that the method evaluates data from multiple time periods, groups and trends at once. To explain the point, he gave an example: with one group that has received the ‘treatment’ and other groups that have not, using only ‘pre’ or only ‘post’ data can be misleading, because pre-existing differences between the groups and common time trends get mixed into the comparison. The D-i-D method overcomes this gap by differencing out both, giving a bigger, truer picture.
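The sketch below illustrates the basic two-group, two-period logic on simulated data; the group gap, time trend and effect size are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Illustrative assumptions: the treated group starts at a higher level, both
# groups share a common time trend, and the policy adds a true effect of 3.0
# in the post period for the treated group only.
group = rng.integers(0, 2, n)   # 1 = treated group, 0 = comparison group
post = rng.integers(0, 2, n)    # 1 = post period, 0 = pre period
y = 5.0 * group + 2.0 * post + 3.0 * group * post + rng.normal(0, 1, n)

# A naive post-only comparison mixes the effect with pre-existing differences.
naive = y[(group == 1) & (post == 1)].mean() - y[(group == 0) & (post == 1)].mean()

# D-i-D: subtract each group's own pre-period mean before comparing groups.
did = (
    (y[(group == 1) & (post == 1)].mean() - y[(group == 1) & (post == 0)].mean())
    - (y[(group == 0) & (post == 1)].mean() - y[(group == 0) & (post == 0)].mean())
)
print(f"Naive post-only difference: {naive:.2f}")  # ~8.0, overstated
print(f"D-i-D estimate:             {did:.2f}")    # ~3.0, the true effect
```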
The strength of this method lies in its attention to trends: it requires that treatment not be correlated with trends in the outcome. He walked through the concept, the estimation and the assumptions (parallel trends, common shocks and no anticipation) in depth. Focusing on the parallel-trends assumption, he suggested ways to assess and test it, such as visual comparison and regressing the outcome on time interacted with treatment in the pre-period.
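A short sketch of that pre-period regression check is shown below; the simulated panel, trend and level values are assumptions chosen so that parallel trends hold by construction.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative pre-period data in which parallel trends hold: both groups share
# a common trend of 1.0 per period, and the treated group differs only in level.
rng = np.random.default_rng(2)
n = 4_000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "time": rng.integers(0, 5, n),   # pre-period time index only
})
df["y"] = 2.0 * df["treated"] + 1.0 * df["time"] + rng.normal(0, 1, n)

# Regress the outcome on time interacted with treatment: if pre-trends are
# parallel, the time:treated coefficient should be close to zero.
fit = smf.ols("y ~ time * treated", data=df).fit()
print(fit.summary().tables[1])       # inspect the 'time:treated' row
```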
Another way of stress-testing the method is a careful ‘Event Study’, which, according to him, helps identify the relevant trends more clearly. He described how to conduct such a study by plotting the estimated coefficients over time, discussed its threats, and pointed to some important papers on the method.
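The following is a minimal event-study sketch on simulated data; the panel size, effect size and the use of a linear common trend in place of full period fixed effects are all simplifying assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

# Illustrative event-study data: 500 units observed at event times -4..3, with
# treatment switching on at event time 0 for half of them and a true effect of 3.0.
rng = np.random.default_rng(3)
periods = range(-4, 4)
rows = []
for unit in range(500):
    treated = int(unit < 250)
    for t in periods:
        effect = 3.0 if (treated and t >= 0) else 0.0
        rows.append({"event_time": t, "treated": treated,
                     "y": effect + 0.5 * t + rng.normal(0, 1)})
df = pd.DataFrame(rows)

# Regress the outcome on treated-by-event-time dummies, omitting t = -1 as the
# reference period; a linear common trend stands in for period fixed effects
# to keep the sketch short.
X = pd.DataFrame({"const": 1.0, "treated": df["treated"], "trend": df["event_time"]})
event_cols = []
for t in periods:
    if t == -1:
        continue
    col = f"treat_x_t{t}"
    X[col] = ((df["event_time"] == t) & (df["treated"] == 1)).astype(float)
    event_cols.append(col)
fit = sm.OLS(df["y"], X).fit()

# Plotted coefficients should hover around zero before t = 0 (no pre-trend),
# then jump to roughly 3.0 once treatment starts.
times = [t for t in periods if t != -1]
plt.axhline(0, color="grey", linewidth=0.8)
plt.plot(times, fit.params[event_cols].values, "o-")
plt.xlabel("Event time"); plt.ylabel("Estimated effect"); plt.show()
```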
Regression Discontinuity Design (RDD)
Lastly, he talked about Regression Discontinuity Design (RDD). He explained how this method compares data points or individuals at the margin of a cutoff to estimate causal effects, and provided an example: “If X tends to c, the groups just above c and just below c are usually comparable”. After the introduction, he clarified the method’s two variants, full compliance (Sharp RDD, a very clear discontinuity) and incomplete compliance (Fuzzy RDD, an unclear discontinuity), and their respective estimation procedures. To conclude the segment, he shared valuable method-wise resources and references for further self-study.
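A minimal sharp-RDD sketch on simulated data is given below; the cutoff, bandwidth and effect size are arbitrary assumptions for illustration, and the fuzzy case is only noted in a comment.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative sharp RDD: treatment switches on exactly when the running
# variable X crosses the cutoff c, with an assumed true jump of 4.0 at c.
rng = np.random.default_rng(4)
n = 5_000
c = 50.0
x = rng.uniform(0, 100, n)                  # running variable
treated = (x >= c).astype(float)            # sharp rule: full compliance
y = 0.3 * x + 4.0 * treated + rng.normal(0, 2, n)

# Local linear regression within a bandwidth around the cutoff, with separate
# slopes on each side; the coefficient on 'treated' is the jump at c.
h = 10.0                                    # bandwidth (an arbitrary choice here)
keep = np.abs(x - c) <= h
xc = x[keep] - c
d = treated[keep]
X = sm.add_constant(np.column_stack([d, xc, d * xc]))
fit = sm.OLS(y[keep], X).fit()
print(f"Estimated jump at the cutoff: {fit.params[1]:.2f}")   # roughly 4.0

# Under incomplete compliance (Fuzzy RDD), the jump in the outcome would instead
# be scaled by the jump in actual take-up at the cutoff, as in a Wald estimate.
```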
Acknowledgement: Manya Deshpande is a Research Intern at IMPRI.