Now more than ever, grantmakers are focused on ensuring that every grant dollar has the greatest possible impact on people and communities. This means that the most competitive grant applications incorporate strong evaluation plans, complete with goals, objectives, outcomes, and methods of measurement. For many nonprofits, developing an evaluation plan can seem daunting. However, with the right tools, you can develop a plan that is clear, realistic, and actionable – and impressive to funders! With a few evaluation basics, you can transform your big-picture vision and mission into evaluation plans that grantmakers will love.
First things first, though: you should understand why evaluation matters when seeking grants. Almost all proposals these days require an organization to state the outcomes it plans to achieve – and then to report on those outcomes in a progress or final report. A strong evaluation plan with clearly articulated outcomes will make your proposal more competitive. What’s more, if you actually follow your plan and can show evidence of your success, grantmakers will be more likely to fund you in the future.
Understandably, nonprofits can get frustrated when it seems like they must develop a new evaluation plan with new outcomes every time they apply for funding. And it’s true: different funders will have different expectations for an evaluation. However, if you do the work to develop a strong framework for your organization’s or program’s evaluation plan, you should only have to tweak it slightly to fit different funder requirements. Which brings us to our first big recommendation…
Build a Strong Foundation
With certain pieces in place, your organization will have a fully developed, strong, and strategic framework for why it does what it does to achieve its intended impact. These pieces include:
Theory of Change: a comprehensive description and illustration of how and why a desired change is expected to happen in a particular context; includes information on the need, target population, core program components, outcomes, supporting research, and hypothesis. For more information, visit: https://www.aecf.org/TOC
Research and Evidence-Based Models: evidence to show that the work you are doing will have its intended impact. This may include a literature review, which is a comprehensive overview of the knowledge available on a topic, or the use of an evidence-based model with clear components that have been shown to be effective through research or evaluation.
Program Model: to be able to evaluate your program, you need to know exactly what your program is. Make sure you have the key components of a program model clearly defined: goals and objectives, target population, services provided, dosage of services, staff required, resources required, and timelines.
Logic Model: a systematic and visual way to present your understanding of the relationships among the resources you have to operate your program, the activities you plan, and the changes or results you hope to achieve. Some grantmakers require you to submit a logic model with an application. For more information, visit: https://www.wkkf.org/logicmodel
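To make the logic model’s chain concrete, here is a minimal sketch of its components as structured data, using an invented after-school tutoring program as the example. Every program name and figure below is hypothetical, not a template any funder requires.

```python
# A sketch of a logic model's chain for a hypothetical after-school
# tutoring program. All names and numbers are invented illustrations.
logic_model = {
    "resources": ["2 part-time tutors", "donated classroom space", "workbooks"],
    "activities": ["twice-weekly tutoring sessions", "monthly parent check-ins"],
    "outputs": ["40 students tutored per semester", "8 parent meetings held"],
    "outcomes": ["improved reading scores", "higher homework completion"],
    "impact": ["students stay on grade level"],
}

# Reading the stages in order shows the "if-then" logic funders look for:
# if we have these resources and do these activities, then we produce
# these outputs, which lead to these outcomes and this long-term impact.
for stage, items in logic_model.items():
    print(f"{stage}: {'; '.join(items)}")
```

However you draw your own logic model, the point is the same: each column should follow plausibly from the one before it.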
Now that you have a strong foundation in place, you are ready to get into the specifics of your evaluation plan. For this, it’s important for you to understand some basic principles of data collection and evaluation design…
Basic Data Collection and Evaluation Design
Funders will ask you to describe your evaluation plan. To do this, you will need to know the types of data collection methods and their purposes – how will you collect data to measure your outcomes? You will also need to know the different types of evaluation design – how will you implement your evaluation?
There are many ways to collect data. Surveys, interviews, focus groups, and observation are all options. You will want to consider many factors in deciding on what method(s) to use. Do you need a large sample size? Will statistics suffice, or will you need more detailed narrative that can only come from actually speaking to another human being? How much time do you have to collect data – and how much money? Ultimately, the biggest factor is what type of data collection method will best measure your intended outcomes. For more information, visit: https://www.cdc.gov/methods
As for evaluation design, this is about when you collect your data. A few basic examples include:
- Post-only: data collection occurs once, after an intervention occurs.
- Pre/Post: data collection happens before and after the intervention. This can help show the change that occurred as a result of the intervention.
- Longitudinal: data collection occurs at multiple points over time. This can help show the impact over a longer period, as well as trends.
For more information, visit: http://toolkit.pellinstitute.org/evaluationdesign
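To make the pre/post design concrete, here is a minimal sketch of how an organization might summarize the change between the two data collection points. The reading scores are invented purely for illustration.

```python
# Hypothetical pre/post evaluation sketch: the same reading assessment
# given to six participants before and after a tutoring program.
# All scores are invented for illustration only.
pre_scores = [62, 55, 70, 48, 66, 59]
post_scores = [71, 60, 74, 58, 72, 65]

def mean(values):
    return sum(values) / len(values)

# Average change between the "pre" and "post" collection points.
avg_change = mean(post_scores) - mean(pre_scores)

# Count how many individual participants improved.
improved = sum(1 for pre, post in zip(pre_scores, post_scores) if post > pre)

print(f"Average score change: {avg_change:+.1f} points")
print(f"Participants who improved: {improved} of {len(pre_scores)}")
```

A post-only design would have just the second list, with nothing to compare it to; a longitudinal design would add further lists of scores collected at later points, letting you chart the trend over time.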
Finally, you can now put all this information together to create your evaluation plan! Remember, different funders will require different pieces of information for an evaluation plan, so you should always pay attention to what those are. However, as a general rule, a strong evaluation plan will have certain things in place…
Putting it All Together
- Introduction: Overview of problem and program model, prior research on this program or similar programs, purpose of current evaluation, scope of current evaluation
- Description of Intervention: planned work (resources, activities), description of expected results, outputs, outcomes, impact
- Evaluation Methods: questions the evaluation will address, evaluation study design, data collection methods, data analysis plan
- Timeline: How long will it take you to accomplish the following: planning, sampling identification, data collection, analysis, report writing
- Dissemination: How will you report evaluation results? How will you disseminate evaluation results? How will you engage stakeholders in understanding/interpreting evaluation results?
- Budget: Staffing, supplies, equipment, travel for each major evaluation component