Workbook for Designing a Process Evaluation
Produced for the
Georgia Department of Human Resources
Division of Public Health
Melanie J. Bliss, M.A. James G. Emshoff, Ph.D.
Department of Psychology Georgia State University
Evaluation Expert Session July 16, 2002 Page 1
What is process evaluation?
Process evaluation uses empirical data to assess the delivery of programs. In contrast to outcome evaluation, which assesses the impact of the program, process evaluation verifies what the program is and whether it is being implemented as designed. Thus, process evaluation asks "what," and outcome evaluation asks "so what?"
When conducting a process evaluation, keep in mind these three questions:
1. What is the program intended to be?
2. What is delivered, in reality?
3. Where are the gaps between program design and delivery?
This workbook will serve as a guide for designing your own process evaluation for a program of your choosing. There are many steps involved in the implementation of a process evaluation, and this workbook will direct you through some of the main stages. It will be helpful to think of a service delivery program that you can use as your example as you complete these activities.

Why is process evaluation important?

1. To determine the extent to which the program is being implemented according to plan
2. To assess and document the degree of fidelity and variability in program implementation, expected or unexpected, planned or unplanned
3. To compare multiple sites with respect to fidelity
4. To provide validity for the relationship between the intervention and the outcomes
5. To provide information on what components of the intervention are responsible for outcomes
6. To understand the relationship between program context (i.e., setting characteristics) and program processes (i.e., levels of implementation)
7. To provide managers feedback on the quality of implementation
8. To refine delivery components
9. To provide program accountability to sponsors, the public, clients, and funders
10. To improve the quality of the program, as the act of evaluating is an intervention in itself
Stages of Process Evaluation

1. Form Collaborative Relationships
2. Determine Program Components
3. Develop Logic Model*
4. Determine Evaluation Questions
5. Determine Methodology
6. Consider a Management Information System
7. Implement Data Collection and Analysis
8. Write Report**

Also included in this workbook:

a. Logic Model Template
b. Pitfalls to Avoid
c. References
Evaluation can be an exciting, challenging, and fun experience
* Previously covered in Evaluation Planning Workshops. ** Will not be covered in this expert session. Please refer to the Evaluation Framework
and Evaluation Module of FHB Best Practice Manual for more details.
Forming collaborative relationships
A strong, collaborative relationship with program delivery staff and management will likely result in the following:
Feedback regarding evaluation design and implementation
Ease in conducting the evaluation due to increased cooperation
Participation in interviews, panel discussions, meetings, etc.
Increased utilization of findings
Seek to establish a mutually respectful relationship characterized by trust, commitment, and flexibility.
Key points in establishing a collaborative relationship:
Start early. Introduce yourself and the evaluation team to as many delivery staff and management personnel as early as possible.
Emphasize that THEY are the experts, and you will be utilizing their knowledge and
information to inform your evaluation development and implementation.
Be respectful of their time both in-person and on the telephone. Set up meeting places that are geographically accessible to all parties involved in the evaluation process.
Remain aware that, even if they have requested the evaluation, it may often appear as
an intrusion upon their daily activities. Attempt to be as unobtrusive as possible and request their feedback regarding appropriate times for on-site data collection.
Involve key policy makers, managers, and staff in a series of meetings throughout the
evaluation process. The evaluation should be driven by the questions that are of greatest interest to the stakeholders. Set agendas for meetings and provide an overview of the goals of the meeting before beginning. Obtain their feedback and provide them with updates regarding the evaluation process. You may wish to obtain structured feedback; sample feedback forms appear throughout the workbook.
Provide feedback regarding evaluation findings to the key policy makers, managers,
and staff when and as appropriate. Use visual aids and handouts. Tabulate and summarize information. Make it as interesting as possible.
Consider establishing a resource or expert “panel” or advisory board that is an official
group of people willing to be contacted when you need feedback or have questions.
Determining Program Components
Program components are identified by answering the questions who, what, when, where, and how as they pertain to your program.
Who: the program clients/recipients and staff
What: activities, behaviors, materials
When: frequency and length of the contact or intervention
Where: the community context and physical setting
How: strategies for operating the program or intervention
BRIEF EXAMPLE:
Who: elementary school students
What: fire safety intervention
When: 2 times per year
Where: in students' classroom
How: group-administered intervention, small-group practice
1. Instruct students what to do in case of fire (stop, drop, and roll).
2. Educate students on calling 911 and have them practice on play telephones.
3. Educate students on how to pull a fire alarm, how to test a home fire alarm, and how to change batteries in a home fire alarm. Have students practice each of these activities.
4. Provide students with written information and have them take it home to share with their parents. Request parental signature to indicate compliance and target a 75% return rate.

Points to keep in mind when determining program components

Specify activities as behaviors that can be observed
If you have a logic model, use the "activities" column as a starting point
Ensure that each component is separate and distinguishable from others
Include all activities and materials intended for use in the intervention
Identify the aspects of the intervention that may need to be adapted, and those that should always be delivered as designed

Consult with program staff, mission statements, and program materials as needed.
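The who/what/when/where/how breakdown above lends itself to a simple structured record. The sketch below is purely illustrative (the class and field names are my own, not part of the workbook); it shows how the fire safety example could be captured so that each activity is listed as an observable behavior.

```python
# Illustrative sketch: program components as structured data.
# Field names and values are hypothetical, drawn from the fire-safety example.
from dataclasses import dataclass, field

@dataclass
class ProgramComponents:
    who: str                    # clients/recipients and staff
    what: str                   # activities, behaviors, materials
    when: str                   # frequency and length of contact
    where: str                  # community context and physical setting
    how: str                    # strategies for operating the program
    activities: list = field(default_factory=list)  # observable behaviors

fire_safety = ProgramComponents(
    who="elementary school students",
    what="fire safety intervention",
    when="2 times per year",
    where="in students' classroom",
    how="group-administered intervention, small-group practice",
    activities=[
        "Instruct students what to do in case of fire (stop, drop, and roll)",
        "Educate students on calling 911; practice on play telephones",
        "Educate students on fire alarms; practice each activity",
        "Send written information home; request parental signature (75% return target)",
    ],
)

# Because each activity is phrased as an observable behavior, an observer can
# later check off whether it was delivered as designed.
print(len(fire_safety.activities))  # 4
```

Keeping components in one structure like this also makes the later "intended versus actual" comparison straightforward, since each listed activity becomes an item to verify.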
Your Program Components
After you have identified your program components, create a logic model that graphically portrays the link between program components and outcomes expected from these components.
Now, write out a succinct list of the components of your program.

WHO:
WHAT:
WHEN:
WHERE:
HOW:
What is a Logic Model?
A logical series of statements that link the problems your program is attempting to address (conditions), how it will address them (activities), and what are the expected results (immediate and intermediate outcomes, long-term goals).
Benefits of the logic model include:
helps develop clarity about a project or program
helps to develop consensus among people
helps to identify gaps or redundancies in a plan
helps to identify core hypotheses
helps to succinctly communicate what your project or program is about
When do you use a logic model?

Use:
– During any work to clarify what is being done, why, and with what intended results
– During project or program planning to make sure that the project or program is logical and complete
– During evaluation planning to focus the evaluation
– During project or program implementation as a template for comparing to the actual program and as a filter to determine whether proposed changes fit or not

This information was extracted from the Logic Models: A Multi-Purpose Tool materials developed by Wellsys Corporation for the Evaluation Planning Workshop Training. Please see the Evaluation Planning Workshop materials for more information. Appendix A has a sample template of the tabular format.
Determining Evaluation Questions
As you design your process evaluation, consider what questions you would like to answer. It is only after your questions are specified that you can begin to develop your methodology. Considering the importance and purpose of each question is critical.
BROADLY…

What questions do you hope to answer? You may wish to turn the program components that you have just identified into questions assessing:

Was the component completed as indicated?
What were the strengths in implementation?
What were the barriers or challenges in implementation?
What were the apparent strengths and weaknesses of each step of the intervention?
Did the recipient understand the intervention?
Were resources available to sustain project activities?
What were staff perceptions?
What were community perceptions?
What was the nature of the interaction between staff and clients?
These are examples. Check off what is applicable to you, and use the space below to write additional broad, overarching questions that you wish to answer.
SPECIFICALLY…

Now, make a list of all the specific questions you wish to answer, and organize your questions categorically. Your list of questions will likely be much longer than your list of program components. This step of developing your evaluation will inform your methodologies and instrument choice. Remember that you must collect information on what the program is intended to be and what it is in reality, so you may need to ask some questions in two formats. For example:

"How many people are intended to complete this intervention per week?"
"How many actually go through the intervention during an average week?"
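Once the same question has been asked in both formats, the intended and actual figures can be compared directly. The sketch below is a hypothetical illustration of such a design-versus-delivery check; all numbers, keys, and the 90% threshold are made up for the example, not prescribed by the workbook.

```python
# Hypothetical sketch: comparing intended vs. actual delivery figures to
# surface gaps between program design and implementation.
# All figures and the 90% fidelity threshold are made up for illustration.

intended = {"participants_per_week": 25, "sessions_per_month": 8}
actual = {"participants_per_week": 18, "sessions_per_month": 8}

for key in intended:
    ratio = actual[key] / intended[key]
    flag = "OK" if ratio >= 0.9 else "GAP"  # arbitrary threshold
    print(f"{key}: {actual[key]}/{intended[key]} ({ratio:.0%}) {flag}")
```

Run on the sample numbers, this would flag participation (18 of 25, or 72%) as a gap while the session count (8 of 8) checks out, pointing the evaluator toward a specific discrepancy to investigate.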
Consider what specific questions you have. The questions below are only examples! Some may not be appropriate for your evaluation, and you will most likely need to add additional questions. Check off the questions that are applicable to you, and add your own questions in the space provided.

WHO (regarding client):

Who is the target audience, client, or recipient?
How many people have participated?
How many people have dropped out?
How many people have declined participation?
What are the demographic characteristics of clients?
(Race, Ethnicity, National Origin, Age, Gender, Sexual Orientation, Religion, Marital Status, Employment, Income Sources, Education, Socio-Economic Status)
What factors do the clients have in common?
What risk factors do clients have?
Who is eligible for participation?
How are people referred to the program?
How are they screened?
How satisfied are the clients?
WHO (regarding staff):

Who delivers the services?
How are they hired?
How supportive are staff and management of each other?
What qualifications do staff have?
How are staff trained?
How congruent are staff and recipients with one another?
What are staff demographics? (See client demographic list for specifics.)

YOUR QUESTIONS:

WHAT:

What happens during the intervention?
What is being delivered?
What are the methods of delivery for each service (e.g., one-on-one, group session, didactic instruction, etc.)?
What are the standard operating procedures?
What technologies are in use?
What types of communication techniques are implemented?
What type of organization delivers the program?
How many years has the organization existed?
How many years has the program been operating?
What type of reputation does the agency have in the community? What about the program?
What are the methods of service delivery?
How is the intervention structured?
How is confidentiality maintained?

YOUR QUESTIONS:

WHEN:

When is the intervention conducted?
How frequently is the intervention conducted? At what intervals?
At what time of day, week, month, year?
What is the length and/or duration of each service?
YOUR QUESTIONS:

WHERE:

Where does the intervention occur?
What type of facility is used?
What is the age and condition of the facility?
In what part of town is the facility? Is it accessible to the target audience?
Does public transportation access the facility?
Is parking available?
Is child care provided on site?

YOUR QUESTIONS:

WHY:

Why are these activities or strategies implemented, and why not others?
Why has the intervention varied in ability to maintain interest?
Why are clients not participating?
Why is the intervention conducted at a certain time or at a certain frequency?
Validating Your Evaluation Questions
Even though all of your questions may be interesting, it is important to narrow your list to questions that will be particularly helpful to the evaluation and that can be answered given your specific resources, staff, and time.
Go through each of your questions and consider it with respect to the questions below, which may be helpful in streamlining your final list. Revise your worksheet until you can answer "yes" to all of these questions. If you cannot answer "yes" for a given question, consider omitting it from your evaluation.
Will I use the data that will stem from these questions?
Do I know why each question is important and/or valuable?
Is someone interested in each of these questions?
Have I ensured that no questions are omitted that may be important to someone else?
Is the wording of each question sufficiently clear and unambiguous?
Do I have a hypothesis about what the “correct” answer will be for each question?
Is each question specific without inappropriately limiting the scope of the evaluation or probing for a specific response?
Do they constitute a sufficient set of questions to achieve the purpose(s) of the evaluation?
Is it feasible to answer the question, given what I know about the resources for evaluation?
Is each question worth the expense of answering it?
Derived from “A Design Manual” Checklist, page 51.
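The screening process above is essentially a filter: a question survives only if every checklist item can be answered "yes." The sketch below illustrates that idea; the candidate questions, the checklist keys, and the answers are all invented for the example.

```python
# Hypothetical sketch: screening candidate evaluation questions against a
# yes/no checklist. Questions, criteria names, and answers are invented.

candidate_questions = {
    "How many clients attend each week?": {
        "will_use_data": True, "clearly_worded": True,
        "feasible": True, "worth_expense": True,
    },
    "What color are the classroom walls?": {
        "will_use_data": False, "clearly_worded": True,
        "feasible": True, "worth_expense": False,
    },
}

# Keep only the questions for which every checklist answer is "yes".
final_list = [q for q, checks in candidate_questions.items()
              if all(checks.values())]
print(final_list)  # ['How many clients attend each week?']
```

The point is not the code itself but the discipline it encodes: a question with even one "no" on the checklist is a candidate for removal before methodology is chosen.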
Determining Methodology

Process evaluation is characterized by collection of data primarily through two formats:

1) Quantitative, archival, recorded data that may be managed by a computerized tracking or management system, and
2) Qualitative data that may be obtained through a variety of formats, such as surveys or focus groups.
When considering what methods to use, it is critical to have a thorough understanding and knowledge of the questions you want answered. Your questions will inform your choice of methods. After this section on types of methodologies, you will complete an exercise in which you consider what method of data collection is most appropriate for each question.
Do you have a thorough understanding of your questions?
Furthermore, it is essential to consider what data the organization you are evaluating already has. Data may exist in the form of an existing computerized management information system, records, or a tracking system of some other sort. Using this data may provide the best reflection of what is “going on,” and it will also save you time, money, and energy because you will not have to devise your own data collection method! However, keep in mind that you may have to adapt this data to meet your own needs – you may need to add or replace fields, records, or variables.
What data does your organization already have? Will you need to adapt it?
If the organization does not already have existing data, consider devising a method for the organizational staff to collect their own data. This process will ultimately be helpful for them so that they can continue to self-evaluate, track their activities, and assess progress and change. It will be helpful for the evaluation process because, again, it will save you time, money, and energy that you can better devote towards other aspects of the evaluation. Management information systems will be described more fully in a later section of this workbook.
Do you have the capacity and resources to devise such a system? (You may need to refer to a later section of this workbook before answering.)
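As a concrete illustration of what a very small staff-maintained tracking system could look like, the sketch below logs attendance to a CSV file and then reads it back to compute a simple process measure. The file name, fields, and records are all hypothetical; a real system would be designed around the organization's own services and the evaluation questions chosen earlier.

```python
# Illustrative sketch: a minimal attendance-tracking log that program staff
# could maintain themselves, written as a CSV file.
# File name, fields, and records are hypothetical.
import csv

FIELDS = ["date", "client_id", "session", "attended"]

records = [
    {"date": "2002-07-01", "client_id": "C001",
     "session": "fire safety 1", "attended": "yes"},
    {"date": "2002-07-01", "client_id": "C002",
     "session": "fire safety 1", "attended": "no"},
]

# Staff append rows as sessions occur.
with open("tracking_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(records)

# The evaluator later reads the same file to compute process measures,
# such as the attendance rate.
with open("tracking_log.csv", newline="") as f:
    rows = list(csv.DictReader(f))

attendance_rate = sum(r["attended"] == "yes" for r in rows) / len(rows)
print(f"{attendance_rate:.0%}")  # 50%
```

Even a flat file like this lets staff self-evaluate and track change over time, while giving the evaluator archival data without a separate collection effort.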
Who should collect the data?
Given all of this, what thoughts do you have on who should collect data for your evaluation? Program staff, evaluation staff, or some combination?
Program Staff: May collect data from activities such as attendance, demographics, participation, characteristics of participants, dispositions, etc; may conduct intake interviews, note changes regarding service delivery, and monitor program implementation.
Advantages: Cost-efficient, accessible, resourceful, available, time-efficient, and increased understanding of the program.

Disadvantages: May exhibit bias and/or social desirability; may use data for critical judgment; may compromise the validity of the program; may put staff in an uncomfortable or inappropriate position. Also, if staff collect data, an increased burden and responsibility may be placed upon them outside of their usual job responsibilities. If you utilize staff for data collection, provide frequent reminders as well as messages of gratitude.
Evaluation staff: May collect qualitative information regarding implementation, general characteristics of program participants, and other information that may otherwise be subject to bias or distortion.
Advantages: Data collected in a manner consistent with the overall goals and timeline of the evaluation; prevents bias and inappropriate use of information; promotes overall fidelity and validity of data.

Disadvantages: May be costly and take extensive time; may require additional training on the part of the evaluator; presence of the evaluator in the organization may be intrusive, inconvenient, or burdensome.
When should data be collected?
Conducting the evaluation according to your timeline can be challenging. Consider how much ti