Evaluation in Action: Interviews With Expert Evaluators is the first book to go behind the scenes of real evaluations to explore the issues faced, and the decisions made, by notable evaluators in the field. Drawing from the popular "Exemplars" section in the American Journal of Evaluation (AJE), the book's twelve interviews with evaluators illustrate a variety of evaluation practices in different settings and include commentary and analysis on what the interviews teach about evaluation practice.

Praise for Evaluation in Action:

"Evaluation in Action: Interviews With Expert Evaluators is a 'must' read for those who want to know how evaluations really take place." -Marvin C. Alkin, University of California, Los Angeles

"This book offers a rare opportunity to glimpse the assumptions, values, logic, and reasoning behind evaluator choices. It models the reflection required of good practice. The interviews are accessible and engaging, like being invited to a conversation over coffee, a tribute to the power of storytelling. They drew me in and made me want to join the discussion and ask even more questions."

The evaluation of novel projects lies at the heart of scientific and technological innovation, and yet the literature suggests that this process is subject to inconsistency and potential biases. This paper investigates the role of information sharing among experts as a driver of evaluation decisions. We designed and executed two field experiments in two separate grant funding opportunities at a leading research university to explore evaluators' receptivity to assessments from other evaluators. Collectively, our experiments mobilized 369 evaluators from seven universities to evaluate 97 projects, resulting in 760 proposal-evaluation pairs and over $300,000 in awards. We exogenously varied two key aspects of information sharing: 1) the intellectual distance between each focal evaluator and the other evaluators, and 2) the relative valence (positive or negative) of others' scores, to determine how these treatments affect the focal evaluator's propensity to change the initial score. Although the intellectual similarity treatment did not yield a measurable effect, we found causal evidence of negativity bias: evaluators are more likely to lower their scores after seeing critical scores than to raise them after seeing better scores. Qualitative coding and topic modeling of the evaluators' justifications for score changes reveal that exposure to low scores prompted greater attention to uncovering weaknesses, whereas exposure to neutral or high scores was associated with attention to strengths, along with greater emphasis on non-evaluation criteria, such as confidence in one's own judgment. Overall, information sharing among expert evaluators can lead to more conservative allocation decisions that favor protecting against failure over maximizing success.
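The negativity-bias asymmetry described above can be illustrated with a toy tabulation. This is a minimal sketch using invented numbers, not the paper's data or analysis: each record holds a hypothetical evaluator's initial score, the mean peer score shown to them, and their revised score.

```python
# Invented illustrative data: initial score, mean of peers' scores shown,
# and the evaluator's revised score (all values hypothetical).
records = [
    {"initial": 7, "peer_mean": 4, "revised": 5},
    {"initial": 6, "peer_mean": 3, "revised": 5},
    {"initial": 5, "peer_mean": 8, "revised": 5},
    {"initial": 4, "peer_mean": 7, "revised": 4},
    {"initial": 6, "peer_mean": 2, "revised": 4},
    {"initial": 5, "peer_mean": 9, "revised": 6},
]

# Split by whether the peer scores shown were more critical or more favorable.
saw_lower = [r for r in records if r["peer_mean"] < r["initial"]]
saw_higher = [r for r in records if r["peer_mean"] > r["initial"]]

# Share of evaluators who moved their score toward the peers, per direction.
down_rate = sum(r["revised"] < r["initial"] for r in saw_lower) / len(saw_lower)
up_rate = sum(r["revised"] > r["initial"] for r in saw_higher) / len(saw_higher)

print(f"lowered after critical scores: {down_rate:.0%}")  # 100% in this toy data
print(f"raised after better scores:   {up_rate:.0%}")     # 33% in this toy data
```

A gap between the two rates, with downward revisions more common than upward ones, is the kind of asymmetry the abstract reports as negativity bias.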