How Do They Know They Know? Evaluating Adult Learning

Our trainers are learner-focused. We remind ourselves that “what is learned is more important than what is taught.” Getting through the material isn’t the point. But how can we evaluate the learning that has taken place through our TNT program? The book “How Do They Know They Know: Evaluating Adult Learning” [Vella, Berardinelli, Burrow; 1998] provides insight.

How Do They Know They Know:
Evaluating Adult Learning

Vella, Berardinelli, Burrow
1998, Jossey-Bass Inc., San Francisco
(authors are affiliated with the
Jubilee Popular Education Center, Inc.
Raleigh, NC)

Preface:  A nice summary of non-formal education:

  1. Participation of the learners in determining what is to be learned through needs assessment
  2. Dialogue between learner and teacher and among learners
  3. Small-group work to engage learners
  4. Visual support and psychomotor involvement
  5. Accountability: “How do they know they know?”
  6. Participative feedback on results of programs
  7. Respect for learners and teachers
  8. A listening attitude on the part of teachers and resource people
  9. Learners do what they are learning

Chapter 1:  “A New Way of Thinking About Evaluation”

“Utilization:” How can you measure the learner’s continuing use of a new skill, new concept or newly developed attitude? Needed: a course planning and designing process that anticipates certain results. At Jubilee this involves:

  1. Prior to the course, get relevant baseline data from each upcoming participant (to “inform” not “form” the course)
    1. Picture: a 30-min. videotape of the participant’s actual teaching
    2. Preparation: Evidence of how the participant prepares for a presentation. Guidelines? Steps?
    3. Program: An actual program plan the participant used in the past
    4. Perception: Have a former student (or students) answer in writing: 1) What do you perceive are the strengths of—–as demonstrated by this experience? 2) What aspects would you like to see him or her work on and improve?
  2. At Jubilee, the trainers call upcoming participants to discuss expectations, fears, etc.
  3. Have learners begin building a learning portfolio so they can see their own progress. This enables them to compare new designs, skills, and attitudes with the old.

Chapter 2:  “Building from the Base”

Characteristics of Effective Evaluation:

  1. Must be objective
  2. Must identify the important elements of an education program
  3. Must match organizational philosophy
  4. Must be identifiable and accessible (process must not be too difficult and time-consuming)
  5. Must focus both on outcomes and the process. 1) Did we accomplish our objectives? 2) Did we accomplish them in an effective and efficient way? (My note: #2 reflects a North American value. A more Biblical question might be: “Did we accomplish them in a way that honors God?” Another important question: “What did I learn as a trainer? How can I be more effective next time based on what I learned here?”)
  6. Must be integrated into the educational planning process; otherwise it will be ignored, be too time-consuming and expensive, and/or be less objective.
  7. Evaluation must be “owned” by trainers and learners.

Evaluation Axioms:

  1. Evaluation does not just happen…must be carefully planned, just as the training program is.
  2. Evaluation must be done by experts. Learners have a critical role to play in terms of evaluation, but they cannot speak wisely to all aspects of the training (e.g. future effectiveness of the training). Other trainers/outside observers are needed.
  3. Effective evaluation returns more than it costs
  4. Evaluation can be accomplished in many ways

Evaluation must be owned by both the trainers and the learners, otherwise it will be viewed as an outside imposition and therefore won’t be supported.

Key Questions to Ask when Developing an Evaluation Plan:

  1. What is the purpose of the evaluation? (KISS)
  2. What should be evaluated? Align evaluation with what is being taught.
  3. What are the sources of evaluation information?
  4. What are the methods for gathering the information?
  5. When should evaluation be completed?

Program Planning Process:

  1. Purpose of education program
  2. Learner SKAs (skills, knowledge, attitudes) to be developed
  3. Education program design decisions
  4. RESULTS:  Learning that occurs—changes that occur in SKAs as a result of the program
  5. RESULTS:  Changes in performance (“transfer”)—learning from the program that is applied in the learner’s work after completing the education/training program
  6. RESULTS:  Impact (organizational improvement as a result of the learner’s work)

A comprehensive evaluation process should allow for the measurement of all three types of results (#4-6 above):

  1. Learning: immediate and specific (within the course)
  2. Change: intermediate and applied (in the learner’s work)
  3. Impact: long-term and broad (organizational)

Kirkpatrick (1994) distinguishes between 4 types of evaluation:

  1. Reaction
  2. Learning
  3. Behavior
  4. Results

Berardinelli (1991) sees 3 elements to successful impact:

  1. The quality of the learning experience (optimum design for learning, appropriate teacher activities)
  2. The characteristics of the individual learner (begin with necessary levels of skill and ability, motivated and expects to be effective, can commit adequate time to learning, attempts to apply what has been learned)
  3. The environment in which they apply what is learned afterwards (relationship between learner and supervisor, perceptions of rewards for enhanced performance)

Chapter 3:  “The Accountability Process and Planner”

(Note: This Accountability Planner takes the “Program Planning Process” and collapses the 6 stages into 3, and then adds 3 new stages)

  1. Purpose and goals. What (content) and What for (objectives), including SKAs. Determine by needs assessment, organizational mandate, expert opinion, learner input.
  2. Educational design. When, where, how: the education process elements, including curriculum, learning tasks, materials, principles and practices, and instructors.
  3. Anticipated changes. (learning, change, impact)
  4. Evidence of change. Signs that change has occurred for each of the anticipated changes. Qualitative or quantitative, but direct, identifiable, specific and accessible. Likely to contain content and process elements.
  5. Documentation of evidence. What sources to use? When should evidence be collected? Develop separate data collection instrument?
  6. Analysis of evidence. What changes/gains did the program cause? Organize and summarize evidence. Analyze. Compare pre-course measures with post-course measures, compare participant and non-participant performance, compare to an established standard, etc.

Chapter 4:  “Evaluating Existing Programs”

Difficult to evaluate programs already underway…extra time, effort, resources needed if not integrated from the beginning. Add-on evaluations may not be comprehensive or even appropriate for a given use. Externally-imposed evaluations may be resisted by the trainers…viewed as a threat.

Suggestions on how to “back into evaluation”…

  1. Prioritize evaluation needs…what information will be most helpful to improve the quality of the program? What is the primary purpose of the program?
  2. Include major stakeholders in designing evaluation plan. Build “ownership” re: the value of evaluation.

Formal testing of learners is a narrow type of evaluation. Think “outside the box” regarding types of evidence and documentation of that evidence:

Evaluating Knowledge: Do they get it?

Oral questioning, discussions, writing assignments, class activities and assignments, work-related activities/assignments, case studies, flip chart, problem-solving.

Evaluating Skills: Can they do it?

Simulations of individual skills or complete tasks, games, projects, role plays. Actual performance in the work place. Observations of skills completed by the instructor, learning peers, supervisors, etc. using checklists of tasks or performance elements, rating scales, comparative rankings.

Evaluating Attitudes: Do they own it?

Self-perceptions. Perceptions of others (customers, coworkers, peers, instructors). Completed by direct observation, discreet observation or recall of past experience using listings/descriptions of desired attitudes and behaviors.

Much evaluation info is included in materials completed during the program. These can be collected and then reviewed later.

Additional suggestions on non-traditional ways to do assessment.

(From The Dirt on Learning, by Thom and Joni Schultz, Group Publishing, Loveland, CO; written for church youth workers)


  1. observation
  2. verbal responses
  3. written records
  4. drawing
  5. products
  6. self-evaluation tools
  7. portfolios
  8. teacher-student conferences
  9. parent-teacher-student conferences
  10. small-group conferences
  11. journals
  12. class scrapbook
  13. faith history project
  14. video projects
  15. audio projects
  16. living Bible museum
  17. story box
  18. dramatic presentation
  19. living Bible verses
  20. individualized educational programs
  21. teacher for a day
  22. music
  23. show and tell
  24. role play
  25. “why” circle
