
Program Evaluation

Program evaluation gathers the data you need to show your program’s effectiveness and impact in the community. That data is important to current and prospective funders, and it will also help you make program improvements.


How do I evaluate my program?

Consider both the process and the outcomes.

Analyze how you delivered the program, step by step:

  • What were the individual tasks?
  • Who did what?
  • When?
  • How much time did each task take?
  • What challenges did you encounter?
  • How did you solve them?
  • Will you have these same challenges in future programming?

To evaluate outcomes, ask:

  • What difference did this program make in the lives of its participants? 
  • What broader changes happened because of this program?

For measurable outcomes, track the number of each of the following (a short tallying sketch comes after the list):

  • people impacted
  • learners who achieved a specific goal (got a job, earned a GED, read to a child, etc.)
  • learners who improved a skill level (often NRS*)
  • referrals to and from a tech college, W-2 agency, school, etc.
  • tutors trained and matched
  • learners who received more than ## hours of instruction
  • cumulative instructional hours provided
  • cumulative volunteer hours
  • support services (hours of paid childcare, bus passes, gas cards, etc.)
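
If you keep learner records in a spreadsheet, many of these counts can be tallied automatically from a CSV export. Below is a minimal sketch in Python; the file name (learner_records.csv), the column names, and the 40-hour threshold are all hypothetical placeholders to replace with whatever your own records and funders actually use.

  import csv

  HOURS_THRESHOLD = 40  # hypothetical stand-in for the "## hours" your funder specifies

  people = 0
  goal_achievers = 0
  skill_gains = 0
  over_threshold = 0
  total_hours = 0.0

  # Assumes one row per learner, with these (hypothetical) column names.
  with open("learner_records.csv", newline="") as f:
      for row in csv.DictReader(f):
          people += 1
          hours = float(row["hours_of_instruction"])
          total_hours += hours
          if hours > HOURS_THRESHOLD:
              over_threshold += 1
          if row["goal_achieved"].strip().lower() == "yes":
              goal_achievers += 1
          # An NRS gain means the post-test level is higher than the pre-test level.
          if int(row["post_nrs_level"]) > int(row["pre_nrs_level"]):
              skill_gains += 1

  print("People impacted:", people)
  print("Learners who achieved a goal:", goal_achievers)
  print("Learners with an NRS level gain:", skill_gains)
  print("Learners over", HOURS_THRESHOLD, "hours:", over_threshold)
  print("Cumulative instructional hours:", total_hours)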

*What is NRS? There are six National Reporting System (NRS) levels for ABE (adult basic education) and ELL (English language learner) students. Federally funded programs must track NRS gains using a standardized assessment, like TABE, TABE CLAS-E, or CASAS. Learn more at https://nrsweb.org/about-us/QA

For qualitative outcomes, highlight a story or testimonial from a:

  • learner who achieved a goal, with the support of a volunteer tutor
  • family member of a successful learner
  • teacher who saw how your program helped kids progress in school
  • employer reporting a better safety record after a workplace literacy program
  • volunteer who became a tutor to give back to their community

Most funders want to support programs that serve disadvantaged communities. Collect demographic information for learners and volunteers so you can show who your program directly impacts (a sample tally sketch follows below):

  • race/ethnicity
  • age
  • zip code
  • income and family size
  • use of public assistance (free or reduced-price lunch, BadgerCare, FoodShare, etc.)
  • native language
  • educational level
  • occupation
  • any other data requested by your individual funders

Demographic data also helps you identify trends in your volunteer and learner bases.
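
A few lines of Python can produce such tallies and surface those trends, assuming the same hypothetical CSV as in the sketch above, with zip_code and native_language columns added:

  import csv
  from collections import Counter

  zip_codes = Counter()
  languages = Counter()

  with open("learner_records.csv", newline="") as f:  # same hypothetical file as above
      for row in csv.DictReader(f):
          zip_codes[row["zip_code"]] += 1             # hypothetical column names
          languages[row["native_language"]] += 1

  print("Top zip codes:", zip_codes.most_common(5))
  print("Top native languages:", languages.most_common(5))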
 

Should we collect learner feedback?

Yes. Learner feedback will help you ensure program quality.

Sample Class Evaluation Form


How often should we evaluate our programs?

It depends on the program. 

For a 12-week workplace literacy class, you will need to assess learners before the program begins and again at the end, to determine outcomes (skill gain).

You may evaluate your GED classes at the end of each semester, and your one-to-one tutoring program every six months to a year.
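
As a sketch of that pre/post comparison, here is the arithmetic on some made-up scale scores (one pre/post pair per learner) from whatever standardized assessment you use:

  # Made-up pre/post scale scores for illustration, one (pre, post) pair per learner.
  scores = [(441, 473), (465, 468), (428, 455)]

  gains = [post - pre for pre, post in scores]
  print("Per-learner gains:", gains)
  print("Average gain:", round(sum(gains) / len(gains), 1))
  print("Learners who improved:", sum(g > 0 for g in gains), "of", len(scores))

What counts as a meaningful gain depends on the assessment; for NRS reporting, the test publisher's tables map scale scores to levels.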


Resources:

Basic Guide to Outcomes-Based Evaluation for Nonprofit Agencies with Very Limited Resources: www.managementhelp.org/evaluatn/outcomes.htm

Basic Guide to Program Evaluation: www.managementhelp.org/evaluatn/fnl_eval.htm

“Measuring the Difference Volunteers Make: A Guide to Outcome Evaluation for Volunteer Program Managers” from the Minnesota Department of Human Services: https://www.energizeinc.com/art/documents/MeasuringtheDifference2005.pdf

For more information and examples of program outcomes and measurement: www.managementhelp.org/np_progs/np_mod/org_frm.htm

For information on conducting a program evaluation or creating a program evaluation tool for family literacy programming, see: “‘A’ is for Assessment: A Primer on Program Evaluation” at: http://www.nccap.net/media/pages/Program%20Evaluation.PDF