About CES4Health

Product Details

Product at a Glance - Product ID#DJS6ZMSK


Title: Partnerships for Environmental Public Health Evaluation Metrics Manual


Abstract: NIEHS developed the PEPH Evaluation Metrics Manual to help grantees understand how to map out their programs using a logic model, and to identify measures for documenting their achievements in environmental public health research. While the primary intended audiences for this Manual are PEPH grantees and program staff, we hope that other groups and organizations will also find it useful. The metrics are intended for use at the local level, and grantees are encouraged to adapt the metrics to fit the unique characteristics of their communities. Projects with multi-site activities may want to consider identifying metrics that are applicable for all locations. Finally, though focused on PEPH program activities, the principles outlined in the Manual could also be useful for those interested in measuring the success of basic research programs.

The manual covers five cross-cutting areas that are common throughout the PEPH program and are more challenging to measure using the peer-reviewed literature: partnering, leveraging, products and dissemination, education and training, and capacity building. Each chapter identifies evaluation metrics for the activities, outputs and impacts in a sample logic model and describes common strategies grantees can use to collect relevant data for metrics. We also included examples of metrics for almost every activity, output and impact discussed in the manual. Finally, we included more than 80 narratives that illustrate “metrics in action.” These narratives provide real-world examples of how a grantee measured an activity, output or impact.


Type of Product: PDF document


Year Created: 2012


Date Published: 9/16/2013

Author Information

Corresponding Author
Kristianna Pettibone
National Institute of Environmental Health Sciences
530 Davis Dr.
Morrisville, NC 27560
United States
p: 919-541-7752
pettibonekg@niehs.nih.gov

Authors (listed in order of authorship):
Kristianna Pettibone
National Institute of Environmental Health Sciences

Christina Drew
National Institute of Environmental Health Sciences

Product Description and Application Narrative Submitted by Corresponding Author

What general topics does your product address?

Biological Sciences, Public Health


What specific topics does your product address?

Advocacy, Community engagement, Community health, Environmental justice, Environmental health, Program evaluation, Community-based participatory research


Does your product focus on a specific population(s)?

N/A


What methodological approaches were used in the development of your product, or are discussed in your product?

Case study, Community-based participatory research, Interview


What resource type(s) best describe(s) your product?

Manual/how to guide


Application Narrative

1. Please provide a 1600 character abstract describing your product, its intended use and the audiences for which it would be appropriate.*

NIEHS developed the PEPH Evaluation Metrics Manual to help grantees understand how to map out their programs using a logic model, and to identify measures for documenting their achievements in environmental public health research. While the primary intended audiences for this Manual are PEPH grantees and program staff, we hope that other groups and organizations will also find it useful. The metrics are intended for use at the local level, and grantees are encouraged to adapt the metrics to fit the unique characteristics of their communities. Projects with multi-site activities may want to consider identifying metrics that are applicable for all locations. Finally, though focused on PEPH program activities, the principles outlined in the Manual could also be useful for those interested in measuring the success of basic research programs.

The manual covers five cross-cutting areas that are common throughout the PEPH program and are more challenging to measure using the peer-reviewed literature: partnering, leveraging, products and dissemination, education and training, and capacity building. Each chapter identifies evaluation metrics for the activities, outputs and impacts in a sample logic model and describes common strategies grantees can use to collect relevant data for metrics. We also included examples of metrics for almost every activity, output and impact discussed in the manual. Finally, we included more than 80 narratives that illustrate “metrics in action.” These narratives provide real-world examples of how a grantee measured an activity, output or impact.


2. What are the goals of the product?

The PEPH Evaluation Metrics Manual provides ideas about how to measure and document success. It also aims to build a common evaluation language that grantees and program staff can use in discussing PEPH programs.


3. Who are the intended audiences or expected users of the product?

While the primary intended audiences for this Manual are PEPH grantees and program staff, we hope that other groups and organizations will also find it useful. The metrics are intended for use at the local level and grantees are encouraged to adapt the metrics to fit the unique characteristics of their communities. Those projects with multi-site activities may want to consider identifying metrics that are applicable for all locations. Finally, though focused on PEPH program activities, the principles outlined in the Manual could also be useful for those interested in measuring the success of basic research programs.


4. Please provide any special instructions for successful use of the product, if necessary. If your product has been previously published, please provide the appropriate citation below.


5. Please describe how your product or the project that resulted in the product builds on a relevant field, discipline or prior work. You may cite the literature and provide a bibliography in the next question if appropriate.

Program evaluation is the use of information to make decisions about a program, such as whether to continue it, adjust it or expand it to different communities. Typically, program evaluations are used to answer questions about whether a program is working as intended, and to explain why or why not (1,2,3). The benefits of evaluation are clear: it can assess effectiveness and impact, identify factors that lead to program success (or failure), pinpoint areas for program improvement, justify further funding, and reveal new audiences and applications for projects. However, there are also challenges, including narrowing down key evaluation questions and collecting, storing and analyzing appropriate data (4). It can also be difficult to isolate the contribution of a program to particular impacts, and it may take years to recognize the ultimate impacts of a program on a community.

NIEHS has a solid history of evaluating programmatic activities (5,6,7). When measuring the impact of these programs on public health, we often rely on bibliometric measures that assess the number of publications and how often those publications are cited by others. However, many accounts of the successful products and impacts achieved by PEPH programs are not published in the peer-reviewed literature; they emerge as fact sheets, websites, town hall meetings, partnerships, videos, and other types of activities or products. When results are published, they are not always captured in PubMed, our primary source for tracking grantee publications. PEPH grantees asked NIEHS for tools and approaches they could use to develop project-specific evaluation metrics, and NIEHS developed the Evaluation Metrics Manual to address this need.


6. Please provide a bibliography for work cited above or in other parts of this application. Provide full references, in the order cited in the text (i.e., according to number order).

1. Grembowski, D (2001). The Practice of Health Program Evaluation. Thousand Oaks, CA: Sage Publications.

2. Wholey, JS; Hatry, HP; Newcomer, KE. (1994). Handbook of Practical Program Evaluation. San Francisco, CA: Jossey-Bass.

3. Fink, A. (2005). Evaluation Fundamentals. Insights into the Outcomes, Effectiveness, and Quality of Health Programs. Thousand Oaks, CA: Sage Publications.

4. Weiss, CH. (1997). Evaluation. Upper Saddle River, NJ: Prentice Hall.

5. Engel-Cox, JA; Van Houten, B; Phelps, J; Rose, SW. (2008). Conceptual Model of Comprehensive Research Metrics for Improved Human Health and Environment. Environmental Health Perspectives, 116(5): 583–592. doi: 10.1289/ehp.10925. PMCID: PMC2367676.

6. Orians, C; Abed, J; Drew, CH; Rose, SW; Cohen, J; Phelps, J. (2009). Scientific and public health impacts of the NIEHS Extramural Asthma Research Program: Insights from primary data. Research Evaluation, 18(5): 375–385. doi: 10.3152/095820209X480698. PMCID: PMC3171697.

7. Drew, CH; Barnes, MI; Phelps, J; Van Houten, B. (2008). NIEHS Extramural Global Environmental Health Portfolio: Opportunities for Collaboration. Environmental Health Perspectives, 116(4): 421–425. doi: 10.1289/ehp.11323. PMCID: PMC2291017.


7. Please describe the project or body of work from which the submitted product developed. Describe the ways that community and academic/institutional expertise contributed to the project. Pay particular attention to demonstrating the quality or rigor of the work:

  • For research-related work, describe (if relevant) study aims, design, sample, measurement instruments, and analysis and interpretation. Discuss how you verified the accuracy of your data.
  • For education-related work, describe (if relevant) any needs assessment conducted, learning objectives, educational strategies incorporated, and evaluation of learning.
  • For other types of work, discuss how the project was developed and reasons for the methodological choices made.

The National Institute of Environmental Health Sciences (NIEHS) funds research to reduce the burden of human illness and disability by understanding how the environment influences the development and progression of human disease. Much of the work we fund reflects the intersection of science, policy and people, especially at the community level. The Partnerships for Environmental Public Health (PEPH) program builds upon NIEHS' long-standing commitment to facilitating and engaging community groups in environmental health science research. The program defines environmental public health as the science of conducting research and translating it into action to address environmental exposures and health risks of concern to the public. To this end, the PEPH program provides a coordinating framework to encourage interaction among grantees. The program brings together scientists, community members, educators, health care providers, public health officials and policy makers in the shared goal of enhancing the impact of environmental public health research at local, state, regional, tribal, national and global levels. By fostering these multi-level partnerships, vital information about the linkages between environmental exposures and disease can be discovered and used to promote health and reduce the risk of disease among the populations at highest risk. A central goal of the PEPH program is to promote evaluation of project processes and impact, in order to convey the successes and challenges of environmental public health research.

The Evaluation Metrics Manual was developed in response to our grantees' request for materials and information about how to evaluate aspects of their programs that are not typically included in the peer-reviewed literature. The grantees recognized the value of evaluating their programs and adapting them based on their findings, but they lacked resources to help them understand how to do this. NIEHS developed the Evaluation Metrics Manual to help address this need.


8. Please describe the process of developing the product, including the ways that community and academic/institutional expertise were integrated in the development of this product.

The NIEHS PEPH program funds more than 200 grantees whose work includes a community engagement focus. We developed the Evaluation Metrics Manual using participatory strategies that drew on the experience and expertise of these grantees. Beginning in 2009, we conducted literature reviews and examined NIEHS program documents, journal articles and evaluation manuals, as well as grantee websites, documents, and outreach and engagement materials. We conducted topical interviews and focus groups with PEPH grantees and NIEHS staff to obtain input from the field. In 2010-11, we presented a draft version of the Manual at grantee meetings, scientific meetings, invited sessions and webinars. We sought comments from a wide range of stakeholders, including grantees, community partners, federal and state government agencies, public health practitioners, and other NIH institutes. These comments were incorporated into the final version, published in 2012.


9. Please discuss the significance and impact of your product. In your response, discuss ways your product has added to existing knowledge and benefited the community; ways others may have utilized your product; and any relevant evaluation data about impact, if available. If the impact of the product is not yet known, discuss its potential significance.

Although the product has only been in the public domain for one year, we have already heard from many grantees who have used the manual to evaluate their program activities. In addition, many grantees have used the information in the manual to document their program and their achievements for funding applications.


10. Please describe why you chose the presentation format you did.

We designed the PEPH Evaluation Metrics Manual as a reference manual because our grantees indicated that they wanted a how-to guide for creating and using metrics to demonstrate the success of their projects. This format enabled us to present concrete ideas that grantees can refer to, and to include real-world examples for all metrics drawn from our funded grantees.


11. Please reflect on the strengths and limitations of your product. In what ways did community and academic/institutional collaborators provide feedback and how was such feedback used? Include relevant evaluation data about strengths and limitations if available.

While the Manual was developed for NIEHS’ Partnerships for Environmental Public Health program, the five main program areas – partnerships, leveraging, products and dissemination, education and training, and capacity building – are applicable to any agency or organization that promotes community-based participatory research. In addition, the strategy for using logic models to identify key program components and metrics has long been used in areas beyond environmental public health. In developing the Manual, our hope is that we have created a tool that empowers any partner to take the lead in evaluating program activities, outputs and impacts.

The Manual is a living document that we plan to update periodically. Opportunities for expansion include new evaluation topics, such as cost-benefit analyses and econometric evaluations; new approaches used in programs, such as social media; and new examples of metrics drawn from the ever-expanding network of PEPH grantees. Because the Manual was developed using participatory strategies, its content reflects extensive stakeholder feedback. Through topical interviews, focus groups, and presentations of the draft Manual in 2010-11 at grantee meetings, scientific meetings, invited sessions and webinars, we gathered comments from grantees, community partners, federal and state government agencies, public health practitioners, and other NIH institutes. Substantive changes we made in response to these comments included incorporating a balance of qualitative and quantitative metrics, identifying specific examples of program outcomes, and addressing issues and examples unique to tribal communities. Several reviewers also suggested additional metrics to include, and we added more examples from PEPH grantees. The final version of the Manual incorporates the comments and suggestions we received.


12. Please describe ways that the project resulting in the product involved collaboration that embodied principles of mutual respect, shared work and shared credit. If different, describe ways that the product itself involved collaboration that embodied principles of mutual respect, shared work and shared credit. Have all collaborators on the product been notified of and approved submission of the product to CES4Health.info? If not, why not? Please indicate whether the project resulting in the product was approved by an Institutional Review Board (IRB) and/or community-based review mechanism, if applicable, and provide the name(s) of the IRB/mechanism.

While the PEPH Metrics Manual was not approved by an IRB or community-based review mechanism, we actively worked with our funded grantees and other stakeholders in the development of the manual, as described in our response to question 8. Any grantee that provided material for the manual, such as a case study or "metrics in action" example, was given an opportunity to review the final version of the portion of the manual that included their program examples. We actively worked with all contributors to address any editing or content concerns they had and to ensure that we appropriately represented their projects in the manual.