Thursday, May 6, 2010

Reflection Paper #2 - pp. 129-151

The readings on Participant-Oriented Evaluation were a little troubling to me personally.  As I stated in the last reflection paper, I have a background in auditing, so I am very comfortable with the concept of evaluations conducted by an independent evaluator using methodologies that identify objectives and assess performance against those objectives, even when the purpose is to inform third parties (consumers, managers, researchers, etc.).  The elimination of bias through the approaches taken, and the independence of the evaluator, seem critical to my training in providing evaluations that can be relied upon.

The ideas behind the various approaches and methodologies for participant-oriented evaluation did not at first seem to be evaluation in the sense of my preconceived notions.  But the text identifies an area of evaluation that is an important window into the "effectiveness" of programs.  An evaluation that "experiences" the program in the role of participant may actually be more effective at determining the value of the program than any measurement of objectives.  Some forms of this approach position the evaluator as the learner, with the subjects of the program acting as the informers or teachers of the evaluator.  This allows the evaluator to understand the real impact of the program and to accurately report these impacts to the stakeholders.

Perhaps the most interesting aspect of the participant model is that biases and advocacy are considered acceptable inputs to the evaluation.  In fact, the evaluator is free to pursue evaluations as an independent agent, not only upon specific engagement.  This movement from evaluation as a source of information for stakeholders, to evaluation as a force of change upon stakeholders, is troubling to me.  Advocacy evaluation seems to be a route to undercutting the good faith and trust of the public as it relates to evaluation.  Since many evaluations are commissioned to inform the public about the effectiveness of the use of their funds through the programs offered, the loss of faith and trust, or the rejection of aggressive forms of advocacy through evaluation, may undermine the ability of service providers to garner that trust.

We need to be mindful of the participant experience in our project.  In fact, this may be the easiest thing to evaluate in the limited time we have available.  There may not be time to formulate testable objectives and collect data in support of any meaningful conclusions, but it seems that we could readily assess the student experience and report the effects of the game on their attitudes and levels of engagement.

1 comment:

  1. Darin,

    Yes, I also feel troubled by the advocacy ideas of empowerment evaluation. I like the idea of teaching (empowering) people to evaluate themselves, but I worry about the advocacy part. It definitely has its place. For example, if nobody else is providing data on why an underserved portion of the population is underserved, then it might make sense to advocate for them by collecting evaluation data.

    But it could so easily slip into something unprofessional, where the evaluator loses credibility. I think we see that many times, not just in evaluation but in research. Personally, I think the pendulum has swung too far toward advocacy, and we need to swing back toward just really understanding what is going on.

    Good thoughts!
