Training analysis and training evaluation differ in focus. Analysis reviews the information sources that indicate a need for a training intervention. Evaluation reviews the conduct of training and how effective it is in changing perceptions, behavior, skill level, and basic knowledge. The following paragraphs describe the components typically used to evaluate the effectiveness of a training presentation.
A training evaluation plan should follow Kirkpatrick's four-level model: reaction to the learning process, learning (cognitive, psychomotor, and affective), behavior (the level of change on the job), and results for the organization in terms of increased productivity and quality. The plan should identify how to conduct and apply training evaluation, including pre- and post-training tests (learning and behavior), post-training surveys (reaction), and the reporting and use of evaluation results to evaluate Return on Investment. The plan should be integrated into a Curriculum Development and Implementation Plan.
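The Return on Investment step can be expressed with the commonly used training-ROI formula (net program benefits divided by program costs); the source does not specify a formula, and the dollar figures below are purely illustrative:

```python
def training_roi(benefits, costs):
    """Training ROI as a percentage: net benefits divided by costs.

    `benefits` is the monetized value of training results (e.g. productivity
    gains); `costs` covers development, delivery, and trainee time.
    """
    return (benefits - costs) / costs * 100

# Illustrative figures: $150,000 in measured benefits against $100,000 in costs.
print(training_roi(150_000, 100_000))  # 50.0 -> every dollar spent returned $1.50
```

A positive percentage means the program returned more than it cost; a result near zero or below signals the program should be redesigned or retired.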
A. Pre- and Post-Test Instruments
1. Design: Test questions, both pre- and post-test, can consist of different test item types. Several common types include:
- Multiple choice - test item stem and several discriminators (choices)
- True/False - statement that the trainee must evaluate as either true or false
- Matching - identify a word or phrase that complements the root word or phrase
- Completion - supply the missing word or phrase in a statement
- Essay - Write an opinion, supporting information, or analysis of a statement or list
2. Implementation: Pre-tests can be administered just prior to taking the course or several days or weeks in advance. The object is to test the student's cognitive and psychomotor knowledge (as appropriate) of a subject before learning takes place. The post-test should contain items that test the same cognitive and psychomotor knowledge, but not necessarily with the same wording in each corresponding test item. There is a risk that items written slightly differently are not equal in intent, but an even greater risk if they are identical.
3. Response Evaluation: Several types of analysis can be performed, depending on the size of the test population. If the course has been administered several times, perform t-tests to determine whether one specific group's responses differ significantly from those of the total training population taking the same two tests. Also correlate all items on the pre-test with all items on the post-test. If the test items are paired, correlations can identify the level of difference between paired items on the pre- and post-tests.
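The paired analysis described above can be sketched with two small helpers: a paired t-statistic for pre-/post-test score differences and a Pearson correlation between paired items. The function names and sample scores are illustrative assumptions, and a real study would also look up the p-value for the resulting t-statistic:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t-statistic for pre-/post-test score differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

def pearson_r(x, y):
    """Pearson correlation between two paired item-score lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Illustrative scores for five trainees on the same paired item.
pre = [55, 60, 48, 70, 62]
post = [78, 81, 70, 88, 84]
print(paired_t(pre, post))   # large positive t suggests a real learning gain
print(pearson_r(pre, post))  # high r: the item pair ranks trainees consistently
```

With a larger population, the group-vs-population comparison would use an independent-samples t-test on each group's score differences against the rest of the population's.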
B. Post-Training Surveys - Student Acceptance and Perceived Training Value
1. Design: Commonly referred to as a "smile sheet," the post-training survey typically asks the trainee, "Did you like the training?" "Was the training environment acceptable?" "Was the coffee hot and the donuts fresh?" But post-training surveys should ask much more. Questions should solicit the student's opinion about the relevance and usefulness of the training to the student's job upon returning to work, the usefulness and effectiveness of individual training components, and the effectiveness of the facilitator in the knowledge transfer process.
2. Implementation: There are several types of post-training surveys: a survey to solicit an immediate reaction to training, a 45-day survey to see how everyday work affects short-term memory of course information, and a 90-day survey to determine the level of residual training effect that continues to affect job performance.
3. Response Evaluation: The primary objective is to measure the level of variation in responses on each survey type by training group as compared to the training population receiving the same training. The secondary objective is to measure the level of variation between post-survey types: the initial survey with the 45-day survey, the initial survey with the 90-day survey, and the 45-day with the 90-day survey. The ideal result would show no loss of training effect, no matter how long ago the training was presented.
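The secondary objective above, comparing survey waves, can be sketched as follows, assuming survey items are scored on a numeric scale; the function name and ratings are hypothetical:

```python
from statistics import mean

def wave_drift(initial, day45, day90):
    """Mean rating per survey wave, plus each wave's drop from the initial wave.

    A drop near zero on the 45- and 90-day waves suggests the training
    effect is holding over time; a growing drop suggests it is fading.
    """
    base = mean(initial)
    waves = {"initial": initial, "45-day": day45, "90-day": day90}
    return {name: {"mean": mean(scores), "drop": base - mean(scores)}
            for name, scores in waves.items()}

# Illustrative 5-point ratings from three survey waves for one training group.
report = wave_drift([4, 5, 4, 5], [4, 4, 4, 5], [3, 4, 3, 4])
print(report["90-day"]["drop"])  # 1.0 -> a full point lost by day 90
```

The same per-wave means can then feed the primary objective: comparing one group's wave means against the wave means of the whole training population.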
C. Interpreting and Reporting Training Response Data
To determine the quality of training, the data collected with these evaluation instruments is analyzed to recommend changes for continuous process improvement of the training program, including increasing the quality of materials and training components and ensuring the continued availability and growth of training resources.
Results should be reported by course for each of the topics explained in the subsections of Section 6. In addition to data reports, a cumulative report of responses should be provided to show effectiveness trends by course. Explain what the trends mean in terms of the effect of training in providing solutions that promote the organization's mission goals.
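A cumulative trend report of the kind described above could be sketched like this, where `offerings` holds one list of post-test scores per course session; the function name and scores are illustrative:

```python
from statistics import mean

def course_trend(offerings):
    """Cumulative mean post-test score after each successive course offering.

    A rising trend suggests course revisions are improving effectiveness;
    a falling trend flags the course for review.
    """
    trend = []
    for i in range(1, len(offerings) + 1):
        scores = [s for session in offerings[:i] for s in session]
        trend.append(round(mean(scores), 1))
    return trend

# Illustrative post-test scores from three sessions of one course.
print(course_trend([[70, 80], [75, 85], [85, 90]]))  # [75.0, 77.5, 80.8]
```

Each entry pools all sessions to date, so a single unusual session moves the trend line less as the course matures.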