Efficiently (on the cheap) and effectively evaluating training with uncertain outcomes

I just returned from a meeting that provoked some fresh thinking about an evaluation problem I have thought about a lot in the past. The meeting was a training session designed to teach people how to address difficult problems that the organization had hitherto been unable to solve. Part of the session was instrumental, e.g. “When you have difficulty ‘A’, use tactic ‘X’.” Part of the session was conceptual: the presenter asked attendees to discuss what they were doing, and a dialogue ensued that showed the insight that might come from thinking of the problem in a different way. This dialogue was repeated many times, thus (hopefully) sensitizing people to how to look at old problems in new ways.

The meeting was attended by four distinct groups of people: 1) members of the cross-functional problem solving groups that were doing the hands-on work, 2) top leadership of the organization, 3) employees of the organization not involved in the problem solving exercises, and 4) a variety of people from other organizations who had an interest in the kind of activity that was going on.

Those of us involved in the evaluation design face a few problems. First, other than a general sense of “consciousness raising”, it is hard to know what impact the meeting had. One unknown is whether any of the specific tactics taught would be useful for particular needs between the time of the training and the time of the evaluation data collection. (If someone is doing home improvement, and I taught her to use a Dremel, what are the odds that she would actually need a Dremel in the month between the training and the data collection? And if she did, what kind of questions would be needed to find out whether her use of the Dremel was more effective than it would have been without my excellent advice?) The second problem is that it’s even harder to find out whether a conceptual tool is helping people. Third, budgets are squeezed. So, is there an efficient and effective means of evaluating the training? I think so.

Step 1: Assign the attendees to one of the four groups: 1) members of problem solving teams, 2) organization leadership, 3) non-leadership members of the organization, and 4) people from other organizations.

Step 2: Count the number of people in each group. This will provide guidance as to whether a group can be sampled, or the entire group needs to be contacted. This step may not be necessary from a formal sampling point of view because of the data collection method, but it’s a good idea anyway. It provides a sense of the coverage in the meeting, and is thus a useful descriptive evaluation element in its own right.
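The census-or-sample decision in Step 2 can be sketched in a few lines. This is only an illustration: the cutoff, the sampling fraction, and the head counts below are assumptions I’ve invented, not numbers from the actual meeting.

```python
# A minimal sketch of Step 2: decide, per group, whether to contact everyone
# or draw a sample. Threshold and fraction are assumed values for illustration.

CENSUS_THRESHOLD = 30   # assumed cutoff: below this, contact the whole group
SAMPLE_FRACTION = 0.5   # assumed fraction to sample from larger groups

def contact_plan(counts):
    """Return {group: number of people to contact} for each attendee group."""
    plan = {}
    for group, n in counts.items():
        if n < CENSUS_THRESHOLD:
            plan[group] = n  # small group: census everyone
        else:
            # large group: sample, but never fewer than the threshold
            plan[group] = max(CENSUS_THRESHOLD, int(n * SAMPLE_FRACTION))
    return plan

# Hypothetical head counts for the four groups described above.
counts = {
    "problem solving teams": 24,
    "organization leadership": 8,
    "other organization members": 120,
    "outside organizations": 45,
}
print(contact_plan(counts))
```

The by-group counts double as the descriptive coverage picture mentioned above, so the tally is worth keeping even if every group ends up being contacted in full.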

Step 3: Wait about two weeks after the training and send attendees an email with two questions: 1) “Looking back over the time you spent, did you pick up any knowledge or insight that might influence how you do your problem solving work?” — “yes”, “no”, “not sure”. 2) “If you think it would help us if you elaborated on your responses, please give us a few sentences explaining why you answered as you did.” I would keep this survey entirely in email, to which people could hit “reply”. I’d stay away from Web-based survey tools because I think that keeping it within email would increase the response rate — no waiting for the survey to come up, reading directions, and so on.

Step 4: Look at the data. Follow up with select telephone interviews as needed.
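The first pass of Step 4 amounts to tallying the yes/no/not-sure answers and setting aside the free-text elaborations for reading and possible phone follow-up. A rough sketch, with reply texts invented purely for illustration:

```python
# Tally the closed-ended answers from email replies and collect the
# free-text elaborations. The replies below are made up for illustration.
from collections import Counter

replies = [
    ("yes", "The reframing discussion changed how I see our backlog."),
    ("not sure", ""),
    ("yes", ""),
    ("no", "Nothing in the session applied to my work so far."),
]

tally = Counter(answer for answer, _ in replies)
elaborations = [text for _, text in replies if text]

print(dict(tally))
print(len(elaborations), "elaborations to read and follow up by phone")
```

The elaborations, not the tally, are where the interview leads come from: a “yes” with a concrete story is worth a phone call, a bare “yes” usually isn’t.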
