Project ECHO®
Evaluation 101:
A practical guide for evaluating your program
April 2017
Table of Contents
Preface
What to expect in this guide
Introduction
  What is program evaluation?
  Why should you evaluate your ECHO program?
Considerations before beginning an evaluation
  Feasibility and Scale
  Protections for Participants and Patients
Developing and Implementing an Evaluation Plan
  Clarify and Define Program Goals and Objectives
  Developing Evaluation Questions and Indicators
  Selecting Evaluation Approaches
  Implement the evaluation
Data sources that are useful for Project ECHO evaluations
  Primary Data Sources for ECHO Evaluations
  Secondary Data Sources for ECHO Evaluations
Making sense of your evaluation data
  Quantitative Analysis
  Qualitative Analysis
  Economic Analysis
Using Evaluation Findings
Appendix A: Data Collection Methods: Examples from ECHO Projects
Appendix B: Survey Toolkit
Appendix C: Focus Group Toolkit
Appendix D: Glossary of Key Terms
Appendix E: Additional Resources
PREFACE
Project ECHO® (Extension for Community Healthcare Outcomes) is a collaborative
medical education model that aims to build workforce capacity in rural and
underserved areas. Developed by clinicians at the University of New Mexico (UNM),
the model is built upon four principles:
1. Use technology to leverage scarce resources;
2. Share “best practices” to reduce disparities;
3. Employ case-based learning and guided practice to support participants in
mastering complexity; and
4. Monitor program outcomes.
Originally created with the goal of increasing access to care for hepatitis C in rural New Mexico, the ECHO model is now being used to address health care shortages all over the world and across diseases and specialties—ranging from autism care for children to palliative care for older adults.[1] The model relies on videoconferencing to link primary care clinicians in underserved communities (spoke sites) with an interdisciplinary team of specialist providers at academic medical centers (hubs) during virtual “teleECHO™” sessions, which include a brief educational lecture and case-based, experiential learning.[2]
[Infographic: The ECHO model at a glance]
People need access to specialty care for their complex health conditions. There aren’t enough specialists to treat everyone who needs care, especially in rural and underserved communities. ECHO trains primary care clinicians to provide specialty care services. This means more people can get the care they need. Patients get the right care, in the right place, at the right time. This improves outcomes and reduces costs.
Evaluations of the impact of the ECHO model have not kept pace with the growth of new and unique types of clinics. While there is some evidence that the model can successfully improve care for conditions other than hepatitis C, more evidence is needed to understand how model adaptations impact clinician effectiveness, as well as patient health, health care utilization, and health care costs.[i,3] Understanding the impact of Project ECHO clinics on participants, the health of their patients, and the broader health care environment is critically important when making the business, economic or social case for the program. Moreover, evaluation findings can be used to engage stakeholders, adapt program activities, and ensure that scarce resources are invested efficiently and effectively.
Although there is clear value in evaluating ECHO programs, many new ECHO hubs report
that they lack the time, funding, and/or expertise to carry out evaluation activities; yet,
a great deal of valuable information on program implementation and impact can be
gathered using limited resources. With a particular focus on supporting groups with
relatively limited evaluation resources, this guide describes evaluation methods that
can be used to examine the implementation, outcomes and value of Project ECHO
clinics that aim to address a wide range of challenges related to health care access,
delivery, treatment, and prevention, particularly in underserved communities.
WHAT TO EXPECT IN THIS GUIDE
The purpose of this guide is to support leaders of Project ECHO programs as they conduct basic program evaluations. A “one-size-fits-all” approach to evaluation is not possible given wide variation in the topics, audiences and settings of ECHO programs. Instead, we (researchers and evaluators at The New York Academy of Medicine) aim to provide you (ECHO implementers) with practical information on evaluation techniques and best practices that can guide you in designing and carrying out your own evaluation, even when resources are limited.
The guide was created using:
1. A review of best practices in program evaluation;
2. A review of published evaluations of Project ECHO programs;
3. Findings from interviews with leaders of ECHO hubs regarding their
own evaluation experiences and recommendations; and
4. Advice from evaluation experts.
[i] For a full list of publications on the Project ECHO model, see the list available on Box.com.
INTRODUCTION
WHAT IS PROGRAM EVALUATION?
Program evaluation is the process of systematically examining the implementation,
quality, impact and value of a program. Evaluations can take several forms (described
below); this guide will primarily focus on “process,” “outcome,” and “economic”
evaluations, as these are most likely to be relevant to ECHO hubs with limited
evaluation resources.
Process evaluations
Process evaluations focus on how a program is implemented, including specific project activities, the number and characteristics of participants, and fidelity to the original program model.[4] Information from a process evaluation may be used to demonstrate program accomplishments and explain outcomes. For some ECHO programs, process evaluation may be the principal focus because outcomes (for example, changes in patient health status) may be difficult to measure, may occur further into the future, or may be attributable to many factors, instead of one individual program.
Performance monitoring, meaning tracking program activities and regularly assessing whether the program is on target to meet its goals, can be a component of a process evaluation.[5] Monitoring provides program managers and staff with real-time information on successes and challenges of implementation, enabling them to act quickly to address problems that arise.
Outcome evaluations
Outcome evaluations assess whether the program achieved its expected results within a given timeframe.[6] Project ECHO outcome evaluations typically examine changes at the provider level (e.g., provider knowledge, self-efficacy, treatment practices, or professional satisfaction) or the patient level (e.g., health outcomes, health care utilization, or costs of care).
Economic Evaluations
Economic evaluations compare the expenses associated with implementing and delivering the program to the benefits or savings derived from it.[7] Economic evaluations include cost-effectiveness analyses and return on investment (ROI) calculations. These types of evaluations can be particularly useful when “making the case” for the program to stakeholders (e.g., funders, insurers, and health care delivery systems), and when working to achieve a sustainable model for covering the costs of a program.
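To make these two calculations concrete, here is a minimal sketch in Python. All figures and variable names are hypothetical placeholders, not ECHO data; substitute your program’s actual measured costs and benefits.

```python
# Minimal sketch of two common economic-evaluation calculations.
# All figures below are hypothetical placeholders.

def return_on_investment(total_benefits: float, total_costs: float) -> float:
    """ROI as a ratio: (benefits - costs) / costs."""
    return (total_benefits - total_costs) / total_costs

def cost_effectiveness_ratio(total_costs: float, outcome_units: float) -> float:
    """Cost per unit of outcome achieved (e.g., cost per patient cured)."""
    return total_costs / outcome_units

program_costs = 120_000.0      # hypothetical: staff time, technology, evaluation
estimated_savings = 150_000.0  # hypothetical: avoided ER visits, travel, etc.
patients_cured = 40            # hypothetical outcome count

print(f"ROI: {return_on_investment(estimated_savings, program_costs):.2f}")
print(f"Cost per patient cured: "
      f"${cost_effectiveness_ratio(program_costs, patients_cured):,.0f}")
```

With these placeholder numbers, the ROI of 0.25 would mean a 25% return: every dollar spent returned $1.25 in benefits, at a cost of $3,000 per patient cured.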
WHY SHOULD YOU EVALUATE YOUR ECHO PROGRAM?
It is well-recognized that evaluation is essential in determining whether the Project ECHO model is effective in new settings or when new conditions or diagnoses are targeted, and you are likely familiar with the idea that “monitoring outcomes” is a core component of the Project ECHO model.
Implementers of Project ECHO® cite three primary benefits of assessing program impact:
1. Program improvement
Evaluation can inform decisions around program implementation, improvement,
expansion and replication, for example, by providing program implementers
with information on which aspects of their programs are successful and which
can be improved.
2. Funding and sustainability
Findings can enable individual ECHO programs to demonstrate program outcomes
to funders and provide data that can be included in grant proposals and other
solicitations for continued or expanded funding.
3. Stakeholder engagement
Findings can be used to demonstrate the value of program participation to potential participants and achieve greater buy-in from leaders of hospital systems, federally-qualified health centers (FQHCs), accountable care organizations (ACOs), and health plans.
CONSIDERATIONS BEFORE BEGINNING AN EVALUATION
Evaluation can seem daunting, especially when your team seems to have insufficient funding or evaluation expertise. Before getting started, you may want to anticipate the following challenges and work towards minimizing them.
FEASIBILITY AND SCALE
Timing, funding, evaluation expertise and data access will, in large part, dictate the
scale of your evaluation; be realistic about the availability of each of these resources
when planning your evaluation. For each step, assess whether the relevant data are
(or will be) accessible, whether they can be analyzed with the resources available, and
whether they will answer the questions that are important to you in the timeframe
you have available.
Timing
It is best to design your evaluation while planning your program. Early planning allows you to build in systematic and practical processes for data collection and performance monitoring and facilitates the collection of baseline (or pre-implementation) data that may be used to assess change over time. Early evaluation planning can also strengthen the program design by providing clarity and direction to program objectives (see Section 4.1: Clarify and Define Program Goals and Objectives, below). However, regardless of planning, unforeseen challenges will inevitably arise during implementation, so you will likely need to revisit and adjust your plan over the course of the project.
Funding
Consider your budget before developing an evaluation plan. In ideal situations, dedicated evaluation funding can be included in the program budget. Although it might be difficult to think about allocating even a small amount of your scarce resources to activities other than program implementation, including evaluation expenses in an ECHO budget can be useful in the long run as it will enable your program to document successes, address unforeseen challenges, and advocate for a sustainable financing model. Some programs even report that they are hesitant to accept funding for ECHO without a portion dedicated to evaluation because, without evidence of effectiveness, making the case to sustain the program is too great a challenge. When not included in the original budget, some ECHO programs have successfully obtained complementary funding that is designated for evaluation from other sources.
For many programs, however, dedicated funding for evaluation is not a reality.
This limitation will likely mean that the evaluation will be narrower in scope or
will examine only a few aspects of the program, rather than the program as a
whole. A limited evaluation can still generate very valuable information – you
should not be deterred!
Whether you decide to evaluate your entire ECHO program or a single component, it is useful to consider:

What funds do you have for evaluation?
What capacity do you have to collect data, and how much staff time will this require?
What capacity do you have to analyze and report on the data, and how much staff time will this require?
What are potential barriers to accessing information, and will the funding be enough to cover unforeseen challenges (e.g., a need for additional data, difficulty gaining access to data)?
Can the evaluation be conducted with internal staff or will you need an external evaluator?
When the budget and the size of the project team are limited, integrating data collection into regular programmatic activities may help alleviate some of the burden inherent in evaluation. For example, you may already be collecting data like provider attendance and the number and type of case presentations for your own records, which are important for performance monitoring and process evaluation. Additional information, such as didactic topics and the length of clinic sessions, is likely available in records or can be easily collected during clinic sessions.
Evaluation Expertise
The amount and type of evaluation expertise available to you will also influence the size and scale of your evaluation. Most ECHO programs are started by clinical teams at academic medical centers that do not have experience in program evaluation. Although some may have backgrounds in traditional clinical research, evaluation requires a different perspective and set of skills.[8] Consider what expertise is missing from your team and whether colleagues in other departments within your institution or organization can offer support or resources. Support can range from actually conducting the evaluation (which will typically require payment to that department), to providing access to datasets with which they are already working, to simply offering guidance on the development of an evaluation plan or data collection instrument (e.g., a survey or a focus group guide). Alternatively, those with the resources to do so may choose to hire an external evaluator who will work with the program team to design and carry out an evaluation.
WORKING WITH EXTERNAL EVALUATORS
Some groups may prefer to hire an external evaluator to design and implement an evaluation. External evaluators may be an individual consultant, a nonprofit research institute, a university-based evaluator, or a consulting firm.

Potential benefits of working with external evaluators
They have expertise and experience in designing an evaluation, and in conducting data collection, analysis, and reporting.
They are often viewed by stakeholders as more objective because they have less of a stake in the success of a program.
They bring their own team, which relieves some of the burden on program staff.

Potential challenges of working with external evaluators
They may be limited in their understanding of the specific program goals, components and nuances.
There will be additional costs (though the cost of an internal evaluation that uses staff time might be similar).

Tips on working with an external evaluator
Be aware that staff will need to dedicate some time to working with the external evaluator to ensure that they have an accurate understanding of the program and that the evaluation addresses the needs of the program.
Expect the cost of an evaluation to be from 10-25% of the cost of implementing a program, depending on the evaluation scope and design.
Data Access
Data availability is a key factor in determining the scope and scale of your evaluation. There are several data sources that might be used in evaluations of ECHO programs (see Section 5: Data sources that are useful for Project ECHO evaluations) and each of them has important benefits and challenges that must be considered during the planning process. Collecting your own data (“primary source”) will reduce challenges related to data access, but it requires significant staff time and some expertise to design data collection instruments, collect the data in a systematic fashion, and manage and analyze the data once collected. Alternatively, using data that is already being collected (“secondary source”) for another purpose, such as program administration, patient care, or payment (e.g., health insurance claims), reduces the staff effort required for collection. However, gaining access to external data sources can be challenging and analysis of these data can be complicated, requiring extensive data management and/or statistical expertise. Furthermore, there are often lags between when data is collected and when it is made available, which can prevent rapid analysis and reporting.
PROTECTIONS FOR PARTICIPANTS AND PATIENTS
Anytime data are collected or analyzed, especially in health fields, careful consideration must be paid to protecting the confidentiality of participants and patients. When conducting research on human subjects, approval from an independent review committee, known as an Institutional Review Board (IRB), is often required.[ii] IRBs are entities established to protect the rights and welfare of people (“human subjects”) who participate in research.
Universities and other institutions that conduct research on a regular basis usually
have their own IRB. IRB submissions can be subject to full review, expedited review,
or considered exempt, depending on the data being collected, perceived risk to
participants, and purpose of the evaluation. In general, it is a good idea to check in
with your IRB before getting started to determine the level of review needed for
your evaluation.
[ii] Free training on the protection of human subjects in research is available through the National Institutes of Health at https://phrp.nihtraining.com/users/login.php. Completion of this or a similar training is generally required by IRBs. Check with your IRB to determine which training course will satisfy their requirements.
In making a decision as to the type and level of review required for an evaluation project, an IRB will consider the purpose(s) of your evaluation and your plan for disseminating findings.
Program improvement
If you are conducting evaluation activities with the sole purpose of using
the information to make adjustments and improvements to your program
(quality assurance or improvement), your study might be exempt from IRB
review. Ask your IRB administrator for more information.
Generating new knowledge to be shared with the broader community
Approval from an IRB is generally required when your purposes are broader and you plan to share findings and lessons learned with a larger audience, often via a published report, article or presentation. Peer-reviewed journals increasingly require that statements regarding IRB review be included in manuscripts considered for publication.
When reviewing your protocol, IRBs will focus on assessing risk(s) to participants.
Privacy and security protections related to research on health programs and
patients can be particularly stringent due to requirements to comply with the
Health Insurance Portability and Accountability Act (HIPAA). Contact your IRB
directly for more information on complying with HIPAA when conducting research
on patient health.
TIPS AND TRICKS
WORKING WITH INSTITUTIONAL REVIEW BOARDS (IRBs)
Contact the IRB
If you are unfamiliar with the process for submitting a protocol, contact
the IRB administrator to understand the process. A conversation with
your IRB administrator will help you to determine which level of review
(full, expedited, exempt) is most likely to apply to your evaluation plan.
Review protocols for other ECHO programs
Consider reviewing the IRB protocols submitted by other ECHO
programs to inform the development of your own. Some examples
are available on Box.com, but you might also ask leaders of ECHO
programs who used similar evaluation plans if they are willing to share
their protocol.
Build in sucient time
IRB approval can take several weeks, depending on the institution and
the type of information being collected. Ask an IRB administrator or
others who have worked with your institutions IRB about the timeline
for review.
DEVELOPING AND IMPLEMENTING AN EVALUATION PLAN
Creating a detailed evaluation plan is an important component of an evaluation.
This plan should provide background information on your program, state your
evaluation questions and how you will answer them, and describe your plan for
using and sharing the information. Although you will likely adjust the plan over time,
outlining your strategy during the early stages of implementation will help you to
design an ecient and informative system of data collection.
In designing your evaluation, first consider the purpose of your evaluation and who the audience for the findings will be.
WILL RESULTS BE USED TO...
tweak the model and improve the program?
make the case for sustainability to funders, policy makers, or other health care payers?
recruit more participants, or to convince health care facilities and systems that providing staff with the time to participate is worth the investment?
understand the potential for program replication?
inform the field more broadly?
Consider the perspective of various stakeholders and engage relevant partners throughout the evaluation planning process. These stakeholders may include ECHO program staff, specialists, or participants, as well as administrators of practices, FQHCs or ACOs, policymakers, health plans or funders. Not only can these stakeholders provide insight into the program objectives and what should be evaluated, but early and continued engagement will facilitate bidirectional communication and reduce the likelihood of surprises when the findings are reported.
The interests of stakeholders are not always obvious, especially in the case of
Project ECHO. For example:
Although you might assume that health insurance companies are interested in whether a program saves money, some are reportedly more interested in whether clinicians value participating in ECHO programs, since these companies seek to increase professional satisfaction among providers in order to retain high quality clinicians in their network.
Some policymakers have expressed that, although numbers and costs are important, qualitative findings often resonate more because they provide humanistic detail and relatable stories regarding the program’s impact on their constituents.
In the sections that follow, we describe steps to creating an evaluation plan, but it
should be noted that the process is iterative. For example, the evaluation indicators
you choose to use (step 2) will depend on program objectives (step 1) as well as the
type of data that you will have available (which falls under step 3).
STEPS TO CREATING AN EVALUATION PLAN
1 Clarify and define program goals and objectives
2 Develop evaluation questions and indicators
3 Select evaluation approaches
CLARIFY AND DEFINE PROGRAM GOALS AND OBJECTIVES
Evaluation plans should flow directly from the intervention at hand, so it is important to have a thorough understanding of the problem being addressed and the program design. Begin by articulating your program goals and objectives.
Program goals tend to be broad; they are generally not time limited or concrete. For many (though not all) ECHO programs, the program goal will be related to increasing access to or the quality of a specific type of specialty care (e.g., mental health care, hepatitis C care, etc.) in a particular community.
Program objectives are specific and can be achieved within the timeframe of the project. Objectives can relate to activities required for effective program implementation (process objectives) or to outcomes that would be expected if the program were a success (outcome objectives).
Below are process and outcome objectives that could be relevant for various ECHO programs (a sketch of how one such objective might be checked against patient records follows the list and its footnote).[iii]
A total of 20 ECHO sessions will be conducted (bi-weekly) over the course of the calendar year.
At least 75% of sessions will be “high quality,” defined as scoring 90% or above on the ECHO Facilitation Scorecard.
At least 61% of patients with diabetes (aged 18-75) being treated at participating practices will have controlled cholesterol (defined as LDL cholesterol less than 100 mg/dL) within one year of the start of the ECHO program.
At least 87% of hypertensive patients (aged 18-85) being treated by clinicians at practices participating in ECHO for one year will have controlled blood pressure (i.e., systolic blood pressure less than 140 mm Hg and diastolic blood pressure less than 90 mm Hg).
Within six months of a practice joining the ECHO program, 72% of patients being treated in that practice who have newly diagnosed chronic obstructive pulmonary disease (COPD) will be diagnosed using a spirometry test.
[iii] These examples are provided for illustrative purposes only; all objectives should be developed based on the individual program that is being evaluated. Examples were developed using quality targets reported by the Kaiser Foundation Health Plan of the Northwest (2014), which relied on the Healthcare Effectiveness Data and Information Set (HEDIS), a widely used set of health care performance measures created by the National Committee for Quality Assurance (NCQA).
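As a rough illustration of how an objective like the blood-pressure target above could be checked, the sketch below (Python with pandas) computes the share of hypertensive patients with controlled blood pressure from a patient-level extract. The file name and column names are hypothetical, not a real ECHO data format; adapt them to whatever records your practices can actually share.

```python
# Sketch: checking the hypothetical objective that at least 87% of
# hypertensive patients (aged 18-85) have blood pressure < 140/90 mm Hg.
# "patients.csv" and all column names are assumptions for illustration.
import pandas as pd

patients = pd.read_csv("patients.csv")

eligible = patients[
    patients["age"].between(18, 85) & patients["has_hypertension"]
]
controlled = eligible[
    (eligible["systolic_bp"] < 140) & (eligible["diastolic_bp"] < 90)
]

pct = 100 * len(controlled) / len(eligible)
print(f"{pct:.1f}% controlled (target: 87%) -> "
      f"{'met' if pct >= 87 else 'not met'}")
```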
[See Appendix E for additional resources on developing program objectives]
TIPS AND TRICKS
SMART OBJECTIVES
Some experts recommend developing SMART objectives, meaning objectives that are:
SPECIFIC: Objectives should be concrete and detailed; they state what
will happen and who is responsible for making it happen.
MEASURABLE: Objectives provide clear information as to how success will be measured or defined.
ACHIEVABLE: Objectives should be feasible and easy to
put into action.
REALISTIC: Objectives should take into account resource constraints,
such as funding, personnel, and time frame.
TIME BOUND: A time frame helps to set boundaries around
the objective.
Although the SMART guidelines are generally useful, it is worth noting that some objectives, such as participant satisfaction or self-efficacy, can be better examined through descriptions (i.e., qualitatively) rather than with numbers and statistics (quantitatively). For instance, although it may be interesting to know that a majority of clinicians would rate sessions as good or very good, it might be more useful to know which aspects of the program were most relevant, how they feel the program helped them, and where there are areas for improvement.
Tying it together: Logic Models
Logic models are useful tools for understanding and communicating the process
through which your program aims to achieve its goals and objectives. They provide
a structural framework for the evaluation and a visual representation of the
relationship between a program’s resources, planned activities, and expected
outcomes that can be used for planning purposes.
FIGURE 1: LOGIC MODEL COMPONENTS

PROGRAM GOAL: The overall, long-term impact of your intervention on the broader community
PROGRAM CONTEXT: The health, economic, social, and political environment in which the program operates

RESOURCES/INPUTS: What resources do we have to successfully implement our program?
ACTIVITIES: What activities do we need to complete for our program to be successfully implemented?
OUTPUTS: If all the planned activities are carried out successfully, what are the outputs that we expect to see?
SHORTER-TERM OUTCOMES: What is the impact you would expect to see in the short term?
LONGER-TERM OUTCOMES: What impacts would you expect to see over a longer time horizon?

Adapted from the W.K. Kellogg Foundation Logic Model Development Guide (2004)
Each component of the logic model contains activities, outputs or outcomes that can be tracked as part of an evaluation (see Figure 1). A process evaluation will examine the extent to which the actions outlined in the first three sections of the logic model (inputs, activities, and outputs) were accomplished, while an outcome evaluation will assess whether the last two sections (shorter- and longer-term outcomes) were achieved.
FIGURE 2: EXAMPLE OF A LOGIC MODEL FOR A PROJECT ECHO® PROGRAM*

PROGRAM GOAL:
Increase access to high quality specialty care in underserved and rural communities
Improve health and quality of life for patients living with X condition
Create a more efficient and sustainable health system

PROGRAM CONTEXT:
Disease patterns, clinician level of knowledge or training, health or health care disparities (e.g., health care providers have limited access to experts on X condition, high disease prevalence, etc.)

RESOURCES/INPUTS (assess in process evaluations):
• Funding
• Project ECHO staff at University ABC
• Interdisciplinary specialist team
• Primary care providers interested in participating
• Video conference technology
• Materials and training from ECHO Institute

ACTIVITIES (assess in process evaluations):
• Recruit X participants (or practices) to join Project ECHO program
• University ABC develops curriculum for teleECHO clinic sessions
• University ABC conducts X high quality teleECHO clinic sessions on a biweekly basis (dates)

OUTPUTS (assess in process evaluations):
• X didactic presentations conducted
• X participants present cases at each teleECHO clinic session
• Written care recommendations sent to 100% of providers who presented cases
• Completed curriculum, for implementation in ECHO clinics
• X practices commit to engagement in ECHO
• X% of participants attend X% of clinic sessions
• X providers receive training and support in X condition

SHORTER-TERM OUTCOMES (assess in outcome evaluations):
Participant outcomes:
• Increased self-efficacy related to providing [type of care] among ECHO® participants
• Increased knowledge on best practices for treating X condition among participants
• Increased sense of professional support among participants
• Improved quality of care
Patient outcomes:
• Improved patient activation and disease management
• Improved satisfaction with care

LONGER-TERM OUTCOMES (assess in outcome evaluations):
Participant outcomes:
• Improved care for patients treated in practices with clinicians participating in ECHO®
Patient outcomes:
• Improved health outcomes (e.g., fewer diabetes complications, increased HCV cure rates)
Health system outcomes:
• Reduced need for specialist care or shorter wait times for existing specialist providers
• Reduced provider turn-over
• Reduced costs related to transportation for health services
• Reduced health costs due to complications

Economic evaluations assess program costs in relation to these outputs and outcomes.

*NOTE: This example is for illustrative purposes only. Each program conducting an evaluation should develop a unique logic model adapted to fit the individual program. Adapted from the W.K. Kellogg Foundation Logic Model Development Guide (2004)
An economic evaluation examines the relationship between program costs and the outcomes articulated in the logic model. Note that the logic model and objectives can be effectively used to guide the remainder of the evaluation plan, so it is beneficial to obtain input from a variety of stakeholders.
Most ECHO programs focus on challenges related to limited access to specialty health care in a high-need community. As a result, logic models are likely to look similar, though each will need to be adapted to fit the unique aspects of program design and the targeted health condition. Figure 2 provides a general example of the types of information that might go in a logic model for Project ECHO.
[See Appendix E for additional resources related to developing and utilizing
logic models]
DEVELOPING EVALUATION QUESTIONS AND INDICATORS
Evaluation questions are those questions you hope to answer through evaluation. They will be used to guide the remainder of the evaluation plan. Often, the main question is “did the program work?” In other words, did it result in the changes that were hoped for, and did it have the intended outcomes? If the program did work, you might also want to know how well it worked, for whom, and whether it was worth the investment. If it didn’t work, you might want to explore why and whether the problems seem solvable. Clearly defined program objectives and a detailed logic model will support you in articulating your evaluation questions and determining what information is needed to answer them.
Identify specic evaluation questions
Each component of the evaluation requires a clear and specic evaluation questions
(see Table 1).
TABLE 1: EVALUATION TYPES AND SAMPLE QUESTIONS

Performance monitoring/Process evaluation
Sample evaluation questions:
• Was the program implemented effectively?
• Has engagement and retention of clinicians reached expectations?
• Are participant characteristics (e.g., type of clinician, geographic location of practice) consistent with program objectives?
• Were ECHO program activities implemented successfully? (e.g., were sessions perceived as high quality? Were topics relevant to the audience? Were sessions implemented on a timely basis?)
• Which components of the program have been successful?
• What aspects of the program need improvement, and how can improvement be achieved?
Timeframe for analysis and reporting: Process evaluation and performance monitoring begin while the program is still in its early phases so that adaptations can be made and suggestions for improvement incorporated. However, monitoring of activities and quality should be ongoing throughout the life of the program.

Outcome evaluation
Sample evaluation questions:
• Do providers participating in the ECHO program have improved knowledge, confidence, and treatment practices related to the targeted condition?
• Does provider participation in Project ECHO improve health outcomes for patients?
• Does the implementation of a Project ECHO clinic increase access to high-quality care for the target condition?
• Have health care costs for patients with the condition changed as a result of the ECHO program?
Timeframe for analysis and reporting: Final analyses for outcome evaluations take place when program activities are complete, but preliminary analyses can be completed at regular intervals throughout the project timeframe (e.g., every six months, the end of a grant period, or the end of a curriculum cycle).

Economic evaluations
Sample evaluation questions:
• Was this ECHO program cost-effective?
• What was the return on investment for this ECHO program?
• How much did it cost to treat patients using the ECHO model compared to usual care?
Timeframe for analysis and reporting: Economic analyses should be conducted in conjunction with or after outcome evaluations so that outcomes can be taken into account when analyzing program value.

Adapted from Centers for Disease Control and Prevention. Types of Evaluation. Available at: https://www.cdc.gov/std/Program/pupestd/Types%20of%20Evaluation.pdf. Last Accessed: December 2, 2016
Performance monitoring and process evaluation questions focus on whether
the program is being implemented as intended and can be asked throughout the
evaluation. The answers to these questions are useful for quality improvement
and provide insight around which components of the program are working and
which need adjustment. They also provide necessary information for reporting
and replication.
Outcome evaluation questions look at what changed as a result of the program. Moore, Green and Gallis[9] propose a seven-level “Expanded Outcomes Framework” for evaluating physician-training programs that is useful for considering ECHO evaluation. In their proposed framework and in most ECHO evaluations, outcomes relate to impact on clinicians, their patients, and/or the broader health system. Table 2 provides an overview of the framework and how it can be adapted to Project ECHO evaluations.[10]
Economic evaluation questions focus on program costs and how they relate to program benefits (financial and other). Answers to these questions can be useful for “making the case” for program sustainability and ongoing funding. Note, however, that not all ECHO programs will result in cost reductions - in fact, some ECHO programs may increase costs by improving access to and utilization of health care (especially in the short-term). For instance, increasing access to hepatitis C care will increase the number of people receiving high cost medications. Although long-term health care costs for these patients might be lower (as hepatitis C is cured and less medical care is needed over the lifespan), a short-term economic evaluation is likely to find higher expenditures compared to a status quo in which few people are receiving the treatment they need.
TABLE 2. MOORE’S EXPANDED OUTCOMES FRAMEWORK ADAPTED FOR PROJECT ECHO

Level 1 - Participation & engagement
Description: The number of clinicians who attend or participate (e.g., present cases) in each clinic session.
Potential data source (for low resource groups): Program records (e.g., iECHO)

Level 2 - Satisfaction
Description: The degree to which participants’ expectations of Project ECHO were met.
Potential data source: Clinic evaluation surveys completed after each session; key informant interviews

Level 3a - Learning (Declarative)
Description: The degree to which participants can reiterate the information provided in Project ECHO sessions.
Potential data source: Surveys assessing knowledge, key informant interviews

Level 3b - Learning (Procedural)
Description: The degree to which participants can describe how they will apply the lessons conveyed during Project ECHO.
Potential data source: Surveys assessing behavioral intent, key informant interviews

Level 4 - Learning (Competence)
Description: The degree to which participants are confident in their ability to apply the lessons from Project ECHO.
Potential data source: Self-efficacy questionnaires, key informant interviews, focus groups

Level 5 - Performance
Description: The degree to which participants in ECHO apply Project ECHO lessons when treating patients.
Potential data source: Surveys, key informant interviews, focus groups, chart review, claims data

Level 6 - Patient health
Description: The degree to which the health status of patients improves due to changes in treatment practices of Project ECHO participants.
Potential data source: Chart review, claims data, surveys or interviews with patients

Level 7 - Community health
Description: The degree to which Project ECHO impacts health and health care trends and patterns in the broader community.
Potential data source: Claims data, quality measures (e.g., HEDIS, MDS), key informant interviews, administrative records, community surveys

Adapted from Moore, D. E., Green, J. S., & Gallis, H. A. (2009). Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. Journal of Continuing Education in the Health Professions, 29(1), 1-15.
[See Appendix E for additional resources related to creating evaluation questions]
DO’S AND DON’TS FOR DEVELOPING EVALUATION QUESTIONS

DO: Consider the audience for your findings.
Think about how evaluation findings will be used and by whom.

DON’T: Assume you know stakeholders’ interests without asking them.
Engage stakeholders in the planning process to find out what they are interested in learning from the evaluation.

DO: Keep it simple.
Avoid over-committing and getting in over your head. Think about what aspects of your program are most important to evaluate and what is feasible, and stick to them. Simple, clear, and focused evaluations can provide valuable information and will be carried out more effectively than complex studies, especially when resources are limited.

DON’T: Ask questions that cannot be answered within the timeframe of the evaluation.
This is a common mistake made by groups without evaluation experience. Many questions, such as: “Did prevalence of lung cancer decrease as a result of my ECHO program on smoking cessation?” would take a number of years to answer. This type of question focuses more on program goals than time-limited objectives, and may be inappropriate for a short-term, low-resource evaluation.
Identifying Indicators
Once you have focused your evaluation, think through what type of information or evidence you will need to answer your evaluation questions. For each component that you want to measure, you will need to select a unit of measurement, often called an indicator or metric, which can tell you whether a particular activity or outcome was accomplished.[11] Like most aspects of evaluation, the indicators you select will depend on your questions, as well as funding, staff capacity, available expertise, and data access.
Examples of process indicators that have been reported in ECHO evaluations include the following (a sketch of how several of these might be computed from program records follows the footnote below):
Number of program participants in attendance at each teleECHO session
Average number of teleECHO clinic sessions attended by each participant
Number of teleECHO clinic sessions held in a calendar year
Number of teleECHO clinic sessions attended by each member of the specialist team
Percentage of participants who presented at least one case during a teleECHO session
Rating of teleECHO clinic sessions according to the ECHO Facilitation Scorecard[iv]
Number of technology problems reported by participants
Frequency of didactic presentations that cover topics in line with program objectives
[iv] The ECHO Facilitation Scorecard identifies essential components of high-quality ECHO clinic sessions. The latest version can be found in Project ECHO’s Box.com folder and can be used as part of performance monitoring and process evaluation.
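As referenced above, here is a minimal sketch (Python with pandas) of how several of these process indicators could be computed from routine program records. It assumes a simple attendance log with one row per participant per session; the file name and columns are hypothetical, not an iECHO export format.

```python
# Sketch: process indicators from a session attendance log.
# Assumed columns: session_date, participant_id, presented_case (True/False).
import pandas as pd

log = pd.read_csv("attendance_log.csv", parse_dates=["session_date"])

sessions_held = log["session_date"].nunique()
attendance = log.groupby("session_date")["participant_id"].nunique()
sessions_per_participant = log.groupby("participant_id")["session_date"].nunique()
pct_presented = 100 * log.groupby("participant_id")["presented_case"].any().mean()

print(f"teleECHO sessions held: {sessions_held}")
print(f"Average attendance per session: {attendance.mean():.1f}")
print(f"Average sessions attended per participant: "
      f"{sessions_per_participant.mean():.1f}")
print(f"Participants presenting at least one case: {pct_presented:.0f}%")
```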
DO’S AND DON’TS FOR SELECTING INDICATORS

DO: Choose indicators that are relevant for your program objectives and evaluation questions.
There should be direct links from your program’s objectives to your evaluation questions to the indicators you select to answer those questions. Avoid indicators that are heavily impacted by factors unrelated to your ECHO program.

DON’T: Choose indicators that fall outside of the timeframe of the project.
Similar to developing research questions, avoid indicators that are likely to occur far into the future and, thus, will not produce useful evaluation findings. For example, a pediatric hypertension ECHO program is unlikely to reduce the rate of heart attacks within a timeframe that can be reasonably evaluated.

DO: Choose indicators that can be collected using available data sources.
Unfortunately, access to data can be a major challenge in evaluations. Regardless of how relevant a particular indicator is, each must have a data source in order to be used.

DON’T: Focus too much on indicators that are rare or have multiple causes.
This common mistake can prevent you from detecting changes related to your program. Gathering sufficient data on events that are rare (e.g., foot amputations related to diabetes, infant mortality) is expensive and likely not feasible within the timeframe of your evaluation project. Instead, consider relevant indicators that are clearly linked to (or precursors of) those ultimate program outcomes (e.g., number of diabetic patients with controlled blood sugar levels, or number of parents who report placing infants on their back to sleep).
Examples of outcome indicators reported by ECHO programs include:
Rate of antipsychotic prescriptions filled by older adults with geriatric mental health conditions
Percent of older adults who were physically restrained in the last 90 days
Average score on the Patient Health Questionnaire (PHQ-9)[v] among patients treated by ECHO providers
Examples of economic indicators for ECHO programs include:
Total cost of care for patients of participating providers who have the targeted health condition
Costs of emergency room visits for pediatric patients of ECHO participants who have a diagnosis of asthma
Costs accrued due to 30-day hospital readmissions among patients with heart failure who are treated by participants in Project ECHO

[v] The PHQ-9 is a validated instrument for screening, diagnosing and monitoring the severity of depression.
HARMONIZED METRICS: POWER IN NUMBERS
Project ECHO stakeholders have discussed the development of a set of harmonized metrics that all ECHO programs can use as part of their evaluation. This would likely be a set of specific survey questions assessing participant outcomes that are common across most (though not all) ECHO programs, such as reduced professional isolation or improved professional satisfaction. The use of harmonized metrics would enable researchers to conduct an evaluation of the ECHO model using a large sample (which is important for statistical significance) and data across ECHO programs. Ideally, these questions could be incorporated into the ECHO technology platform to facilitate data collection and ensure consistency across programs. Contact the ECHO Institute for an update on the status of the development of harmonized metrics.
[See Appendix E for additional resources related to selecting indicators.]
SELECTING EVALUATION APPROACHES
Once you have identied indicators, it is time to design a strategy for collecting your
data in a manner that will answer your evaluation questions. This involves considering
sources of relevant data and evaluation design approaches.
Data Sources
All indicators used in the evaluation will need a data source and a method of obtaining or collecting the information described in the indicator. For example, you will need to consider how to obtain information on the number of participants in ECHO sessions or the number of providers who complete autism screenings with pediatric patients. The feasibility of obtaining data, costs associated with gaining access, and quality of the data (e.g., completeness, accuracy) all vary depending on the data source. Here, we broadly discuss types of data sources, and the benefits and challenges for each. In Section 5: Data sources that are useful for Project ECHO evaluations, we provide additional detail on various sources of data and the benefits and challenges to using each.
Quantitative Evaluation Methods
Quantitative data includes information that is reported numerically (e.g. weight,
blood pressure, number of hospitalizations, participation rates, health care costs) as
well as information that is counted or measured for analytic and reporting purposes
(e.g. scores on a survey, proportion of participants with a particular characteristic).
Because they rely on numeric information, quantitative methods lend themselves to:
Assessment of whether specic targets were met;
Comparisons between groups; and
Broader generalizations (or assumptions) around the impact of the program.
For example, you might assess whether the average number of participants in teleECHO sessions increased over time to evaluate the success of participant engagement efforts, or compare scores of ECHO participants and non-participants on a hypertension knowledge survey to determine whether participants learned important lessons from teleECHO sessions. This data is analyzed using a variety of statistical techniques that range in complexity (see Section 6.1: Quantitative Analysis for more information).
TIPS AND TRICKS
SELECTING AND COLLECTING DATA

Collect data systematically
The quality of your data is key to the credibility of the findings that you report. Collecting data in a systematic fashion will increase confidence that the data is comprehensive and accurate.

Plan ahead
Think about data needs prior to starting the project and integrate data collection into the project infrastructure. Early planning can enable you to collect data efficiently and systematically from the start.

Choose quality over quantity
Consider data completeness and quality—a small amount of good data is better than a large amount of bad data.

Balance evaluation goals and data access
Selection of data sources is an iterative process. Indicators should inform data sources, but access and quality may in turn limit the indicators that you can choose and the methodologies available.

Consider the rigor
The most rigorous evaluation methods will increase confidence that the results you find are, in fact, due to your program. However, very rigorous evaluations can be more complex and costly to conduct; determine what level of rigor your evaluation requires based on the needs of your stakeholders, the audience for your findings, and your realities around funding, expertise, and data access.

Store data securely
From the beginning, create and maintain a system that ensures that data is stored securely and that protects the privacy and security of both participating clinicians and their patients.
Although quantitative data is often useful, many important aspects of programs cannot be quantified; sometimes, numerical data provide only a partial story that lacks sufficient insights into why a certain outcome did or did not occur. For example, using quantitative data, you may learn that 80% of ECHO participants experienced an increase in self-efficacy, or that 50% of patients experienced a particular health outcome after their health care provider joined ECHO. However, these findings do not elucidate which program components most contributed to participants’ increase in self-efficacy, or how providers relied on the program to improve patient health outcomes.
Qualitative Evaluation Methods
Qualitative data, which are centered on words rather than numbers, offer detail-rich information that describes or explains findings and allows for more nuanced reporting, thereby helping to address the concerns noted above. These data can provide information on the reasons certain outcomes were or were not achieved, as well as insight into unexpected consequences of the program and barriers or facilitators of program success or failure. In addition, qualitative data allow you to:
Engage program participants, staff and others in the evaluation process;
Effectively utilize data from a small sample of participants; and
Collect information for case studies and testimonials that may be useful for sustaining your program.
Qualitative data are best collected via interviews or focus groups, when follow-up
questions can be used to encourage participants to elaborate on topics and explore
themes, though open-ended survey questions may also be used. Interview and focus
group data should be recorded and transcribed. Transcription can be completed
internally or outsourced to a company that provides transcription services. Data
are analyzed through techniques that systematically identify important patterns
and themes. For more information on analyzing qualitative data, see Section 6.2:
Qualitative Analysis.
Primary and Secondary Evaluation Data
“Primary source” data are newly collected, specifically for the purpose of evaluation. In Project ECHO evaluations, these often include surveys, interviews, and focus groups developed or conducted for evaluation purposes.
“Secondary source” data are those that are collected for another purpose and are being “repurposed” for the evaluation. Program records from ECHO clinics that collect information on attendance, topics covered, and case presentations are often a useful source of secondary data, as are health insurance claims, lab tests, publicly reported quality metrics, and electronic health records.
See Section 5: Data sources that are useful for Project ECHO evaluations for more
information on various methods of collecting data for Project ECHO evaluations.
Also, see Appendix A and Appendix E for more information on data collection methods.
Selecting a Design
In conjunction with choosing data sources, you must decide on an evaluation design. For process evaluations, this might simply mean thinking through the frequency of data collection and analysis. However, for outcome evaluations, design may be more complex as it will influence the level of confidence that stakeholders will have in your findings. This is because certain designs allow you to more reasonably attribute the changes you see in participating providers or their patients directly to your program and rule out other factors that may also impact outcomes (see Table 3).
Pre-Post Designs
Most low-resource evaluations rely on pre-post designs, meaning that data are collected from participants before the program is implemented (“pre” or baseline), and again after the program is implemented (“post” or follow-up). Pre and post data are then compared in order to assess whether any changes took place. In ECHO evaluations, this often means surveying participants about their knowledge, treatment and referral practices and/or professional training prior to implementing Project ECHO, and again after they participated in the program. It might also mean examining secondary data (e.g., claims data, EHRs) from before and after the program to examine changes in treatment practices and health outcomes.
Although pre-post evaluations are a great place to start when it comes to evaluating ECHO programs—and are very common in evaluation generally—your ability to confidently draw conclusions from them may be limited because you cannot be certain that changes are attributable to your specific program. It is possible that forces external to the program caused the changes. For example, in New York State, local health reform efforts are encouraging practices to shift towards a value-based health care system, which could cause significant changes in outcomes of interest to ECHO evaluators, such as patient health outcomes, health care costs, and professional satisfaction among participants, regardless of their participation in ECHO. Thus, in an evaluation of the program, it may appear that Project ECHO participants increased their job satisfaction and patients fared better, when in reality this may have been a trend for all providers and patients in the community due to other health reform initiatives.
Another danger in using pre-post designs is the fact that, in general, data that is below or above average will be closer to the true average the next time it is measured. This phenomenon is often called regression to the mean and may also limit your ability to attribute improvement in scores after the program to the ECHO program itself. So, for example, if you survey ECHO participants before they begin participating in ECHO and they seem to have very low levels of knowledge related to providing care for patients with your targeted condition, then the next time they are surveyed, they are likely to be closer to average, even with no intervention.
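The toy simulation below (Python, entirely synthetic data) illustrates the phenomenon: when scores mix a stable “true” level with random noise, a group selected because it scored low on a first test will score closer to the average on a second test with no intervention at all.

```python
# Toy illustration of regression to the mean; all data are simulated.
import random

random.seed(1)
true_level = [random.gauss(70, 5) for _ in range(1000)]   # stable knowledge
test1 = [t + random.gauss(0, 10) for t in true_level]     # noisy baseline
test2 = [t + random.gauss(0, 10) for t in true_level]     # noisy re-test

low = [i for i, s in enumerate(test1) if s < 60]          # "low scorers"
mean1 = sum(test1[i] for i in low) / len(low)
mean2 = sum(test2[i] for i in low) / len(low)
print(f"Low scorers: baseline mean {mean1:.1f}, re-test mean {mean2:.1f}")
# The re-test mean moves toward 70 even though nothing changed.
```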
Loss to follow up, meaning the inability to collect follow-up data for all individuals in the baseline sample, is also a limitation of pre-post designs that rely on primary data. Offering incentives can be helpful in increasing retention. Alternatively, it may be more realistic to conduct analyses that aggregate baseline data and follow-up data so that tracking individual participants is not required. In other words, you could compare the mean score on a knowledge survey among participants who completed the survey at baseline to the mean score on the survey at follow-up, even if the individuals in each group were different.
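A minimal sketch of that aggregate comparison, assuming hypothetical survey scores and using an independent-samples t-test from SciPy (appropriate here since the baseline and follow-up respondents may be different people):

```python
# Sketch: comparing aggregate baseline and follow-up survey scores when
# individual participants cannot be tracked. All scores are hypothetical.
from scipy import stats

baseline_scores = [62, 58, 71, 65, 60, 68, 55, 63]   # hypothetical
followup_scores = [74, 70, 69, 80, 77, 72, 68, 75]   # hypothetical

t_stat, p_value = stats.ttest_ind(baseline_scores, followup_scores)
print(f"Baseline mean:  {sum(baseline_scores) / len(baseline_scores):.1f}")
print(f"Follow-up mean: {sum(followup_scores) / len(followup_scores):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Remember the caveats above: even a statistically significant difference cannot, on its own, be attributed to the ECHO program.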
Designs Using Control or Comparison Groups
A widely accepted method for increasing confidence in your evaluation findings is collecting the same data from a control or comparison group that you collect from participants in the program. Control or comparison groups consist of individuals who are similar to those receiving the intervention (e.g., participants in Project ECHO) but have not participated in the program. These individuals may be on a waitlist, practice outside of your program’s catchment area, or simply have chosen not to participate (though it is worth considering whether this means they are inherently different than those who did choose to participate, known as “selection bias”).
Unfortunately, obtaining data for a comparison group can be difficult, time consuming and costly. When using primary data, this design requires data collection from two groups (which requires effort dedicated to recruitment and resources to provide participation incentives). When using secondary data, finding a control or comparison group will likely require access to a broader range of data, as well as additional analytical or statistical expertise to ensure the comparison group is well-matched to the population of ECHO participants.
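For a sense of what “matching” involves, the sketch below pairs each participant with the non-participant whose baseline score is closest. Real comparison-group construction typically matches on multiple covariates (e.g., via propensity scores) and merits statistical consultation; this toy version, with hypothetical IDs and scores, only illustrates the idea.

```python
# Toy sketch: one-to-one matching on a single baseline covariate.
# All IDs and scores are hypothetical.
import pandas as pd

participants = pd.DataFrame({"id": [1, 2, 3], "baseline": [55, 62, 70]})
pool = pd.DataFrame({"id": [101, 102, 103, 104, 105],
                     "baseline": [54, 60, 66, 71, 80]})

matches, available = [], pool.copy()
for _, row in participants.iterrows():
    # choose the closest remaining candidate, then remove it from the pool
    idx = (available["baseline"] - row["baseline"]).abs().idxmin()
    matches.append((int(row["id"]), int(available.loc[idx, "id"])))
    available = available.drop(idx)

print(matches)  # [(1, 101), (2, 102), (3, 104)]
```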
Retrospective Designs
You may decide to evaluate your ECHO program after implementation has already begun, which prohibits the collection of baseline data. Some design options that work retrospectively include:
Reective surveys
Using surveys, you can ask participants to compare their current knowledge,
self-ecacy, or treatment practices to those before they began the program.
For example: Compared to 6 months ago, my current level of condence in my
ability to accurately diagnose [x] condition is … This design is especially useful
for measuring concepts like knowledge and self-ecacy, as evaluators have
found that participants oen do not realize how much they did not know until
aer they participated in the program.
Qualitative interviews or focus groups
Using interviews or focus groups, you can directly ask participants for their perspectives on and descriptions of how the program influenced their practice and that of their colleagues, as well as any noticeable changes in patient health outcomes.
Secondary data
Use secondary data that cover the full span of your project (e.g., health insurance
claims data, electronic health records).
Although these designs can be useful, note that retrospective evaluations that use primary data collection are subject to “recall bias,” as they rely on the ability of respondents to accurately recall their prior experiences and perspectives.
Mixed Method Designs
Ideally, evaluations should consist of multiple data sources that employ both qualitative and quantitative methods; these multi-component evaluations are often referred to as mixed-method evaluations. Not only does a mixed method approach provide a more comprehensive and precise understanding of a program and its outcomes, it also can increase confidence that your findings are valid. This is because using a mixed method approach allows you to “triangulate” findings, which means comparing and linking findings from different sources and different perspectives when drawing conclusions about your program (see Figure 3). Doing so will strengthen your evaluation and make your findings more useful to you and your stakeholders.
FIGURE 3: TRIANGULATION OF FINDINGS IN MIXED METHOD EVALUATIONS
[Diagram: results from public datasets, results from key informant interviews, and results from questionnaires/surveys all converge on evaluation conclusions.]
TABLE 3: ADVANTAGES AND DISADVANTAGES OF EVALUATION DESIGNS
PRE-POST
Definition: Collecting data from before and after a program was implemented.
Advantages: Lower cost; more objective assessment of changes resulting from the program.
Disadvantages: Cannot be certain that changes identified were due to program participation (and not external factors); uncertain results due to loss to follow-up.
CONTROL OR COMPARISON GROUP
Definition: Collecting data from individuals who received the intervention (e.g., participated in Project ECHO) and from a similar group of people who did not receive the intervention.
Advantages: Greater confidence that observed changes were a result of participation in your program.
Disadvantages: Higher cost; often difficult to access data on non-participants, especially primary data; challenges related to carefully matching intervention and control groups.
RETROSPECTIVE
Definition: Collecting data only after the program was implemented.
Advantages: Lower cost; can be planned after implementation has already begun; reflective questions may be more accurate for measuring certain indicators (e.g., competence).
Disadvantages: Cannot objectively assess changes over time; subject to recall deficiencies and bias.
MIXED METHOD
Definition: Utilizing data from multiple sources to identify and corroborate evaluation findings.
Advantages: Greater confidence in the validity of findings; greater flexibility if unforeseen challenges arise with one component of the evaluation; ability to report findings in ways that resonate with different audiences.
Disadvantages: Each additional data source increases costs associated with data collection and/or analysis.
[For additional resources on evaluation designs, see Appendix E.]
IMPLEMENT THE EVALUATION
After carefully planning your evaluation, it is time to actually implement it!
If using primary data, this means:
Obtaining IRB approval
Be sure to leave a sufficient amount of time for approval, as some IRBs require several months to review and make a decision on protocols.
Finalizing data collection forms and protocols
Ensure that all recruitment materials and data collection tools are approved by the IRB. Print any relevant forms, develop and pilot-test surveys, assign data collection and data management roles, and ensure privacy protections are in place.
Training data collection staff
Train all staff members engaged in data collection; be sure they are familiar with the overall goals of the evaluation, the data collection tools, and best practices for collecting the data. Aim for questions (and follow-up questions) to be asked in a consistent manner; anticipate issues that are likely to come up during data collection and train staff to handle them appropriately.
Recruiting evaluation participants (from whom data will be collected)
Plans for recruitment should be outlined in your IRB protocol. Recruitment of ECHO participants usually entails an announcement during a teleECHO session requesting participation and several emails or phone calls (initial and follow-up) to participants requesting their permission to be surveyed, interviewed, or to attend a focus group. The process of recruiting patients for data collection efforts tends to be more time-consuming and expensive due to privacy considerations. If planning to recruit patients, begin by engaging providers (for access to patients) early in the evaluation process. Regardless of whether you are engaging program participants or their patients, be sure to collect any required consent forms in advance of data collection.
Collecting the data
Engage participants and gather your data. Use incentives to encourage participation, be flexible and persistent (but avoid nagging or harassing providers), and take steps to make participation as easy as possible.
If using secondary data, implementing the evaluation may mean working with partners to develop a concrete plan and procedures for data access, data security, and data analysis. It will also mean taking the time to understand the data and its structure and limitations. Be flexible with your analysis plan and understand that it may change depending on the availability of data.
TIPS AND TRICKS
IMPLEMENTING YOUR EVALUATION
Use project management tools to get and stay organized
Consider using project management tools and resources to be sure the evaluation stays on track and on time, and that all members of the evaluation team understand their roles and responsibilities. For example, Gantt charts are a recommended method of visually representing a schedule and timeline for work to be completed.
Manage your data
Thinking about data management and analysis before and during
data collection will save time in the long run. For example, collecting
data electronically saves time on data entry, and structuring program
records (such as case presentation forms) strategically will allow for
easy data extraction.
Plan, plan, plan – and then do
Embarking on an evaluation can seem overwhelming, and planning it out carefully is important. Still, take care not to get “lost in the weeds” worrying about a perfect data collection method and the most rigorous evaluation design. Evaluations, especially those being conducted on a pilot program or with few resources, take place in the real world and are rarely perfect – they still provide important information.
Analyze early and often
If you have to wait until the end of the project for evaluation results, you are likely to miss important learning (and reporting) opportunities. Conducting preliminary analyses can allow you to understand if your data collection approach and tools are working, while also providing useful data that demonstrate ongoing accomplishments. Even if you must wait for “post” survey results to measure outcomes, “pre” surveys provide important information on participant characteristics and baseline needs.
Revisit and adjust
Planning is important, but if basic assumptions of your program or your evaluation change over time (for example, lower than expected participation rates), then your evaluation plan should change too. You may want to adapt your evaluation to better understand why the program changed the way it did, and how that affected participants’ experience in the program. Furthermore, if participation rates are low, you might want to focus on qualitative rather than quantitative research methods.
DATA SOURCES THAT
ARE USEFUL FOR PROJECT
ECHO EVALUATIONS
Choosing your data sources is one of the most important decisions you will make in your evaluation. The type of data used in an evaluation must be consistent with the research questions and the kind of analyses that can be done. Each type of data comes with benefits and challenges that should be carefully weighed before being incorporated into an evaluation plan. Here, we describe a number of sources for evaluation data. This list is not exhaustive; instead, it is a list of those data sources that have proven useful in lower-resource ECHO evaluations. For information on real-world examples of the use of these data sources in ECHO evaluations, refer to Appendix A.
PRIMARY DATA SOURCES FOR
ECHO EVALUATIONS
Primary data sources are those collected specifically for evaluation purposes. We describe below several types of primary data sources that have been used as components of evaluations of Project ECHO programs. Note that data collection can be time consuming for both program staff and participants; therefore, you should carefully consider available resources when selecting primary data sources.
Interviews
Interviews are a useful way to gather qualitative data from a variety of stakeholders. They involve asking open-ended questions and getting answers from participants. Interviewees are asked to reflect on previous experiences, report on changes in their knowledge, attitudes, or behaviors, and consider how the program could be improved in the future. Although subject to bias (meaning that individuals’ recall or reporting may not be true or accurate), the detailed data collected through interviews can provide valuable information on how and why a particular program worked or did not work.
As with all data collection methods, your overarching evaluation questions and objectives will guide the interview process; they determine who you will want to interview (e.g., program administrators, providers, other staff) and what questions to ask. For process evaluations, questions might focus on implementation activities, challenges, successes, and areas for improvement. Outcome evaluation questions might focus on changes in knowledge, self-efficacy, treatment practices, and patient health. In addition, with sufficient funding, patients can be interviewed regarding health outcomes and their perceptions of Project ECHO.
Interviews can be conducted in person, via phone, or via videoconference after the program has been implemented and interviewees have had sufficient experience with it to provide feedback and reflection. Before the interviews, you will need to develop an interview guide containing open-ended questions and prompts (also called “probes”) that encourage full responses and clarification. It is important to note that this document should truly be a guide: it is recommended that interviews are conducted in a conversational manner. Thus, depending on responses, interviewers should feel comfortable ad-libbing follow-up questions, or asking for more detail on a particular response.
Interviews are generally audio recorded and transcribed (with the permission of the participant), which ensures that all information is accurately captured and helps you to analyze the data systematically (see the Analysis section). However, if there is not sufficient funding for transcription (whether by existing staff or an outside transcription company), detailed note-taking can be used to record themes that arise.
Benets and challenges of using interviews include
BENEFITS
Can be used for collecting nuanced data that
cannot be easily quantied
No baseline required; can be used for programs
that have already been implemented or have
been in existence for a long time
Provides an opportunity to engage with
stakeholders
CHALLENGES
Can be time consuming, both in terms of
conducting the interviews and analyzing the data
Scheduling time to conduct the interview can be
a challenge, particularly for providers
Privacy may be of concern, especially when
interviewing patients
All data are self-report and therefore susceptible
to bias
For examples of ECHO evaluations utilizing interview data, see Appendix A.
For additional resources on conducting interviews for the purpose of evaluation,
see Appendix E.
Focus Groups
Focus groups involve group discussions. They usually have 6-12 participants and last between one and two hours. By engaging multiple people in a shared discussion, focus groups encourage self-disclosure and data richness as participants build on and react to comments made by others. Although focus groups are not good for eliciting detailed accounts from specific individuals, the group format often facilitates a level of information sharing beyond what might occur through individual interviews.
Focus groups are led by a facilitator who follows a focus group guide that contains questions meant to frame the discussion. The facilitator asks follow-up questions and
probes, when necessary. Although funding may dictate the number of focus groups you can conduct, groups will ideally be conducted until you reach “data saturation,” meaning the same themes arise in each new group and no new themes are identified.
Similar to interviews, you will want to develop a focus group guide specific to your program and your evaluation questions. The guide should consist of open-ended questions, along with any anticipated follow-up questions for clarity (probes). Facilitators should be very familiar with the guide and with the project so they can follow up on important points, quickly respond when participants provide unanticipated information, and ensure the discussion stays on track.
Below are some benefits and challenges of using focus groups in an evaluation.
BENEFITS
Fairly efficient way to collect qualitative data from multiple people at once
Provides rich, nuanced data, as participants bond with one another and build off each other’s comments
Useful method of assessing how program participants prioritize ideas, or which ideas generate the most enthusiasm or traction
CHALLENGES
Facilitation requires training and experience
Limited generalizability beyond those engaged in the group(s)
Data are time consuming to analyze
Individual personalities can influence group processes and perceptions, biasing the results
For additional detail and advice around using focus groups in ECHO evaluations,
see Appendix C. For examples of previous ECHO evaluations that utilized focus groups,
see Appendix A. For additional resources on conducting focus groups for the purpose
of evaluation, see Appendix E.
TIPS AND TRICKS
COLLECTING DATA VIA INTERVIEWS OR FOCUS GROUPS
Consider who is best suited to conduct the interviews or facilitate the focus groups. When possible, avoid using someone who is intimately involved in administering the program, as participants may not feel comfortable giving negative feedback.
Establish rapport and avoid using jargon.
Avoid leading questions that could result in biased responses. For instance, instead of asking “were the sessions too long?” ask “what did you think about the length of the sessions?”
Use open-ended questions to elicit detailed information. Avoid questions that can be answered by a simple yes or no.
Prepare “probes” or follow-up questions that can elicit clarity and rich, detailed information. Encourage interviewees to provide detail on the reasoning behind their conclusions and ask for examples whenever possible (e.g., “Can you give me an example of a time you used that lesson with a patient?” or “Can you tell me about specific changes you noticed in the patient after you changed her medication?”).
Observations
Observations are a method of gathering data by watching activities or behavior that takes place during or after a program has been implemented. When planning to conduct observations, consider what specifically you want to know and design a system of data collection that would enable observers to collect that data in a consistent manner. Most commonly, this will consist of a checklist or a recording sheet where observers can note the extent to which certain essential components of the program were carried out as planned.
For ECHO evaluations, observation tends to be most useful for assessing levels of engagement, the quality of teleECHO clinic sessions, and fidelity to the ECHO model, all of which may be important components of a process evaluation. For example, you might observe the quality of the teleECHO facilitation (e.g., mechanics of sessions) or to what extent participants are engaged during videoconferences. The ECHO Institute has developed a “Facilitation Scorecard” that programs have used as part of an evaluation to assess quality of the clinic and adherence to the model (available on Box.com). Still, observations cannot provide information on stakeholder perspectives; as such, it is best to use them in combination with other data collection methods.
Benets and challenges of using observations include:
BENEFITS
Facilitates increased understanding of
program operations
Requires minimal time or data collection
burden on participants
Oers an alternative to self-report, which may
be biased
CHALLENGES
Evaluator presence can lead those being observed
to alter their behavior (termed the “Hawthorne
eect”)
Require careful planning and note taking,
otherwise, observations will lack structure and
data will be unreliable and dicult to interpret.
Can be time consuming and expensive, especially
if the goal is observation of multiple sites or
activities.
For examples of ECHO evaluations that utilized observations, see Appendix A.
For additional resources related to using observations in evaluations, see Appendix E.
Surveys or questionnaires
Surveys, also called questionnaires, are one of the most common methods of collecting data for evaluation purposes. They are often considered a simple and fairly inexpensive method of collecting basic information on participant characteristics, as well as data on changes in knowledge, attitudes and behavior. Surveys can be administered multiple times over the course of a program and can be used to assess change over time.
In Project ECHO evaluations, surveys are most often used to gather information from program participants in order to learn more about their opinions of the program itself and/or changes they experienced related to:
Knowledge of best practices in patient care
Attitudes toward patients with particular conditions
Confidence and self-efficacy, or the belief in one’s ability to provide effective and high-quality care for patients with the target condition or diagnosis
Changes in treatment practices related to caring for patients with the targeted condition or diagnosis.
Surveys can also be administered to other stakeholders. For instance, patients of ECHO participants could be surveyed to assess changes in patient satisfaction and/or health outcomes that resulted from their provider’s participation in the program. However, access to patients is often challenging and costly due to privacy concerns.
Survey design can be difficult, so it is worthwhile to check if there is an available pre-existing survey instrument that meets your needs. Pre-existing surveys are useful because they have generally been pilot-tested, which reduces problems with question clarity that can harm data quality. Some pre-existing surveys may also be validated, meaning that researchers have tested them and demonstrated that they measure what they claim to measure.
In reviewing pre-existing surveys, consider whether the questions are consistent with your project and evaluation objectives, if the survey has been used with populations similar to yours, whether there is free access (or a charge for use), and whether any adaptations to the survey are allowable. If there is no survey that fits your needs, you will need to design your own. Regardless of the type of survey you choose (pre-existing or one you develop yourself), consider your evaluation questions, the population you will be surveying (e.g., education level, familiarity with electronic
media, etc.) and resources available to you (staff time needed for developing and administering the survey, survey design experts at your university, etc.) when selecting questions to include and determining how frequently the survey will be administered.
Surveys can be conducted electronically using a web-based platform, such as SurveyMonkey or REDCap (most common and most affordable), or by telephone, via paper handouts, or in person. To assess change over time, you will need to use a survey with questions that can be asked of participants before they begin the program (“baseline”) and again at either the completion of the program or at a specified follow-up time.
Benets and challenges of using surveys or questionnaires include:
BENEFITS
Useful for collecting data from a large number
of respondents fairly quickly
Can be administered remotely (e.g., online,
mobile devices, telephone)
Data can be kept condential or anonymous,
which encourages respondents to be more
honest.
Facilitates the collection of quantitative results
that can be tested for statistical signicance,
which may be prioritized by some stakeholders
CHALLENGES
May be dicult to obtain a sucient number
of responses
Generally not suited to obtaining information
on “why” a particular outcome occurred, or to
understand novel or unexpected phenomena.
See Appendix B for important considerations when using surveys in ECHO
evaluations. Also, see Appendix A for examples of ECHO programs that use surveys
as part of their evaluations. For additional resources on developing and using surveys
for program evaluation, see Appendix E.
SECONDARY DATA SOURCES FOR
ECHO EVALUATIONS
Secondary data refers to data collected for purposes other than your evaluation. They may be directly related to the implementation of your program (e.g., program records, case presentations) or they may be gathered by an external entity (e.g., insurance claims, administrative data). Relying on secondary data might save time and money, as it reduces the need to dedicate staff time to developing tools and collecting data. Unfortunately, there are also challenges with using secondary data: it can be difficult to gain access to the sources themselves due to data protection and privacy issues, they may not contain sufficient information needed to answer your evaluation questions, and it may be difficult to translate the data collection and management processes into a format that is useful for your purposes. Below, we describe some examples of secondary data sources and how they have been used by ECHO programs to conduct evaluations.
Program records
Program records consist of data available from program implementation, including information on participant engagement, program activities, and program cost. They are particularly useful in that they require little or no additional data collection effort, especially with advance planning around systematic documentation and organization of the data.
Program records oen contain information that is useful for process evaluation,
such as activities that were conducted during recruitment, attendance, and
participation. In fact, the Project ECHO data and technology platform is designed
specically to support data collection for this purpose. For example, records of
attendance at teleECHO clinic sessions should be available through the ECHO
technology platform that most programs use. Slides and notes from didactic lessons,
case presentation forms and documentation of the specic recommendations
for each case presented may also be important sources of information as they
provide data on the topics that were covered during each session.
Program records may also be useful for outcome and economic evaluations of
Project ECHO programs. For example, surveys conducted for the purposes of
providing Continuing Medical Education (CME) credits may contain information on knowledge gain from the program, follow-up presentations on cases might provide information on patient health outcomes that result from adoption of specialist recommendations, and project budgets are important for conducting cost-effectiveness and ROI analyses.
Records from case presentations may also prove useful. They provide detail on the
types of cases that were discussed by the ECHO team and participants and the topics
that were covered in the program. Some researchers have used case presentation
forms and follow-up presentations to track whether care recommendations were
followed and changes in patient health outcomes.
Benets and challenges of using program records include:
BENEFITS
Program sta can easily obtain program-specic
information without signicant additional work
Provide descriptive information program
activities, making them useful for performance
monitoring and process evaluation.
CHALLENGES
Some information is supercial, such as counts
and lists
Because they are implementation-focused, may
be limited with respect to measuring outcomes
If sta are not invested in careful documentation
of activities, may contain missing or unreliable
information
For examples of evaluations that have used program records in an evaluation, see
Appendix A. For additional resources on using program records in evaluations, see
Appendix E.
Other datasets
Many other datasets collected by outside entities might be relevant to your program
evaluation and informative with respect to the selection and monitoring of outcomes.
Datasets range from health condition registries (e.g., diabetes) to government
administrative databases (e.g., state-level inpatient hospitalization data). Some
of these datasets are available for free to the public (e.g., National Health Interview
Survey, American Community Survey, Behavioral Risk Factor Surveillance System),
some are available to the public on a limited basis (e.g., Medical Expenditure Panel
Survey, all-payer claims databases), and some require negotiation with the entity
that collects and owns the data (e.g., Medicaid data, insurance claims from private
health plans).
Secondary datasets vary in terms of the frequency with which they are collected, as well as the lag time between when the data are collected and when the data are available to external researchers or evaluators. For instance, some data, such as the Minimum Data Set 3.0 (which is collected by the Centers for Medicare and Medicaid Services and contains information on nursing home quality), are updated on a quarterly basis and available free of charge to the public with a six-month delay. However, state-level hospitalization databases can take over a year to be released to researchers, who can only gain access after going through an application approval process through the relevant state agency. You will also need to consider the level at which you will be able to access the data; in many cases, these data need to be analyzed at the facility level because identifiable data at the patient or provider level are not available.
Large datasets also vary with respect to data quality and complexity, and you will need to understand data limitations before incorporating them into your evaluation. To assess data limitations, consider the process for data collection (including who is responsible for documenting and reporting the data) and the likelihood of missing data or data discrepancies. For instance, the Uniform Data System (UDS) for FQHCs can be compiled either from a central database by an administrator for a large FQHC that has many individual sites, or from each site individually. Certain sites may have administrators trained in extracting the relevant data, while others may not. These different processes are likely to produce variation in the accuracy of the data. This does not mean that the data cannot be used, but it does mean that the data will only be accurate for certain indicators. Consult with others who have used the data in the past to better understand nuances and pitfalls of secondary datasets.
Some benets and challenges of using other, larger datasets include:
BENEFITS
Can be used for context and benchmarking
patient characteristics and outcomes
May contain patient level information, which
can otherwise be costly and dicult to collect,
particularly at a similar scale
May be familiar to policymakers and
administrators, thereby increasing their
condence in evaluation ndings
CHALLENGES
Information of interest may not be available in
datasets
Recent data may not be immediately available
due to lags in data collection and processing
Datasets are usually large and, therefore, may
require experts with a background in database
management and statistics
Datasets are commonly de-identied prior
to transfer, which can make it dicult to link
changes in health outcomes to a specic program
For examples of ECHO evaluations that relied on large datasets, see Appendix A.
For additional resources on using large datasets in evaluations, see Appendix E.
TIPS AND TRICKS
WORKING WITH LARGE DATASETS
Build in extra time and funding.
If requesting data from an outside organization, understand that this will take time and may be costly. Some entities (e.g., government) may have a strict process for application and transfer of data, including limitations on identifying information. Others, such as private payers with an interest in Project ECHO, may be amenable to data requests, though specifics will likely require negotiation.
Talk to others who have used the datasets.
Most datasets have nuances that take time to learn; researchers who are already familiar with the datasets can provide an overview of these details, saving you time during the analysis phase.
Consider data quality.
Be aware of issues related to data quality, particularly if the data are self-reported or were collected for another purpose (for example, delivering a service) rather than for research.
Health Record Reviews: Electronic or Chart
Reviews of medical records, often called chart reviews, are a common method of collecting data for epidemiological, medical and patient health outcome research. Data from medical records can be collected retrospectively (looking back over time to collect necessary information), prospectively (data are gathered on an ongoing basis throughout the project period), or a combination of the two.12
Begin by ensuring that your work complies with any research regulations or guidelines in place at the health care institutions from which data are being collected. During the planning stage, standardize the data collection and/or extraction process. You should have a clear understanding of the variables to be studied and where they can be found in the records, who is responsible for recording the information, and how consistently the information is reported and/or recorded. For example, will you be tracking specific symptoms that are reported by patients, and can you be sure that data on these symptoms will be reported consistently across records? Or, will you be tracking whether a specific diagnostic test is run consistently for all patients with a specific condition? It will be helpful to review a few charts and consult with clinicians at each site to be sure that you are familiar with the type and quality of data that can be extracted from patient records.13
Benets and challenges of using electronic health records or
chart reviews include:
BENEFITS
Provides concrete information on treatment
practices and patient health outcomes before
and aer the intervention was implemented
Can be conducted retrospectively or prospectively
Access to patient-level data and outcomes,
which can be dicult to otherwise obtain
CHALLENGES
Patient condentiality regulations and
protections can make it dicult to gain access to
records
Data may be incomplete or dicult to interpret if
data entry practices vary signicantly
Dicult to pull information from charts in a
consistent manner (in other words, may have
poor inter-rater reliability).
For examples of ECHO evaluations that relied on health record reviews, see Appendix A. For additional resources on using health record reviews in evaluations, see Appendix E.
MAKING SENSE OF YOUR
EVALUATION DATA
Now that you have collected your evaluation information, you must determine the best strategy for organizing and analyzing your data. The right analysis approach will help you understand and interpret your findings.
In determining how to make sense of your data, ask yourself:
What kind of data do you have?
What expertise do you have for analysis and reporting?
What are your evaluation questions?
In what data is your target audience most interested?
How will you present the data?
What type of analysis can be conducted with your data?
What software is available to you?
Answering these questions will help you think about appropriate analytic approaches. Keep in mind that this section does not contain sufficient information to transform you or your evaluator into a statistician. Instead, the section is meant to provide basic information about the different procedures for handling and analyzing various forms of data, and to refer you to additional resources that can provide more detailed guidance. At a minimum, this section may help you to better communicate with statisticians or others doing more complex analyses.
Refer to Appendix E for more resources on data analysis.
QUANTITATIVE ANALYSIS
Quantitative analysis involves working with numeric data, such as rating scales or frequencies. Quantitative analyses answer questions related to what happened, to whom it happened, and when or how often it happened. Basic quantitative analyses common to evaluation research include descriptive analyses and inferential analyses.
Descriptive Analyses
Descriptive statistics simply describe what the data show. They can be used to report basic information about your program participants, their level of engagement, and characteristics of their patients. Descriptive statistics also provide an overview of the data and patterns within it, which can be helpful to understand before engaging in more advanced analyses.
TIPS AND TRICKS
SETTING UP YOUR DATABASE FOR
QUANTITATIVE ANALYSES
Although quantitative analysis can be complex, a well-organized
database will make the analysis easier.
Assign a unique identifier to each individual in your dataset.
For example, if your analysis is of patient data, each patient would
have a unique patient ID number. If your analysis is at the provider level,
each provider would have a unique provider ID number.
Label variables (e.g., “gen” for “gender of participant”), label values
(e.g., 1 = child, 2 = adult) and denote the format of each variable
(i.e., numeric or string).
Include all information about an individual in one row of your database,
rather than having the same person appear in multiple rows.
Limit response options so that invalid information cannot be entered
(e.g., restricting zip code options to the local area).
Code text responses into a numerical form so that they are easier
to analyze (e.g., 0=No, 1 =Yes or 0=Male, 1=Female).
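As an illustration of these tips, here is a minimal sketch in Python (using the pandas library) of a participant-level table; all column names, codes, and values are hypothetical:

```python
import pandas as pd

# Hypothetical participant-level table: one row per provider, a unique ID,
# numerically coded values, and consistent formats.
participants = pd.DataFrame({
    "provider_id": [101, 102, 103],    # unique identifier
    "profession": [1, 2, 1],           # coded: 1 = MD, 2 = NP
    "rural": [0, 1, 1],                # coded: 0 = No, 1 = Yes
    "sessions_attended": [8, 12, 5],   # numeric
    "knowledge_pre": [62, 55, 70],     # baseline survey score
    "knowledge_post": [78, 74, 81],    # follow-up survey score
})

# Attach human-readable labels for reporting.
participants["profession_label"] = participants["profession"].map({1: "MD", 2: "NP"})
print(participants)
```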
See below for examples of descriptive statistics that might be useful for ECHO evaluations:
FREQUENCY
A count of the number of times a particular event or data point appears in a dataset. ECHO evaluation example: the number of teleECHO clinic sessions held.
MEAN
The average value in a dataset. ECHO evaluation example: the average age of patients treated by ECHO participants.
MEDIAN
The middle value (or mid-point) in a dataset. ECHO evaluation example: the median number of participants in an ECHO session.
MODE
The most frequently occurring value in a dataset. ECHO evaluation example: the most common number of cases presented in a weekly session over a 6-month ECHO program.
RANGE
The difference between the minimum and maximum data points. ECHO evaluation example: the minimum and maximum number of participants in ECHO sessions over a six-month period.
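A minimal sketch computing each of these statistics in Python (pandas), on hypothetical attendance counts:

```python
import pandas as pd

# Hypothetical attendance counts for ten teleECHO sessions.
attendance = pd.Series([14, 9, 11, 14, 8, 12, 14, 10, 13, 11])

print("Frequency (sessions held):", attendance.count())
print("Mean attendance:", attendance.mean())
print("Median attendance:", attendance.median())
print("Mode (most common attendance):", attendance.mode().tolist())
print("Range:", attendance.min(), "to", attendance.max())
```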
Inferential Analyses
Inferential statistics help you to understand and draw conclusions from your data. For example, you might analyze whether there is a relationship between the number of hours participants spend attending sessions and improved clinical knowledge, or whether there is a relationship between provider participation in Project ECHO and improved health outcomes for patients.
In general, inferential statistics are used to determine whether there is a relationship or association between variables of interest. When there is a relationship, inferential statistics can be used to determine whether the relationship is likely real (referred to as “statistically significant”) or whether it could have been due to chance. Several factors influence the likelihood of significance, including the strength of the relationship, the amount of variability in the data, and the number of people in the sample.
Inferential statistics should be calculated using statistical analysis software, such as STATA, SAS, R (which is free), or SPSS. However, such programs require a bit of expertise, and some may find it easier to conduct basic analyses using Microsoft Excel, which also has some statistical capabilities.
See below for examples of inferential statistics:
CHI-SQUARE
This is used to determine the strength of the relationship between two categorical (non-numerical) variables.
Example evaluation questions: Are nurse care managers more likely to participate in Project ECHO than physicians? Do participants from rural settings report greater satisfaction with Project ECHO than participants from urban settings?
CORRELATIONS
These are used to measure whether a relationship exists between two numeric variables. Usually, the strength of the correlation is measured using a statistic called Pearson’s r correlation coefficient, which can range from -1 to +1. A positive correlation (r greater than 0) means that as one variable increases, the other also increases. A negative correlation (r less than 0) means that as one variable increases, the other variable decreases. It is important to remember that correlation does not mean that one variable “causes” the other, only that a relationship exists between the two.
Example evaluation questions: Is there a relationship between the number of hours participants attend Project ECHO sessions and knowledge scores? Is there a relationship between the number of ECHO sessions attended and the percent of patients who receive care in line with best practices?
T-TESTS
These are used to determine whether there is a significant difference between two means. The independent samples t-test is used to compare one group’s mean value to another group’s mean value. The paired t-test is used when each observation in one group is paired with a related observation in the other group, or when measures are taken at two points in time within the same group (in such cases a person is paired with him or herself).
Example evaluation questions: Are knowledge scores different between urban and rural Project ECHO participants? (independent samples t-test) Do participants report higher levels of job satisfaction after attending Project ECHO training sessions? (paired t-test)
ANALYSIS OF VARIANCE (ANOVA)
This is similar to a t-test, but used to compare whether three or more groups have significantly different means.
Example evaluation questions: Are there differences in scores on knowledge surveys according to the educational background (MD vs. NP vs. RN) of the respondent? Did Project ECHO have a different impact on provider self-efficacy based on the geographic location of the respondents’ practice site (i.e., rural, suburban, urban)?
REGRESSION
This is used to determine whether one variable is a predictor of another. Common types of regression include logistic regression, used when your dependent variable is categorical, and linear regression, used when your dependent variable is numeric and continuous.
Example evaluation questions: Is length of participation in Project ECHO a predictor of improved confidence? (Use logistic regression if confidence, your dependent variable, is categorical, e.g., yes/no.) Is length of participation in Project ECHO a predictor of improved knowledge? (Use linear regression if knowledge score, your dependent variable, is continuous.)
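To illustrate, here is a minimal sketch of two of these tests in Python (scipy), with entirely hypothetical scores; equivalent commands exist in the statistical packages named above:

```python
import numpy as np
from scipy import stats

# Hypothetical paired knowledge scores for eight participants.
pre = np.array([55, 62, 48, 70, 66, 59, 61, 53])
post = np.array([68, 75, 60, 78, 70, 72, 66, 64])

# Paired t-test: did scores change after the program?
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

# Pearson correlation: hours attended vs. follow-up score.
hours = np.array([10, 16, 6, 20, 12, 14, 9, 8])
r, p_corr = stats.pearsonr(hours, post)
print(f"Correlation: r = {r:.2f}, p = {p_corr:.4f}")
```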
See Appendix E for additional resources on quantitative data analysis.
TIPS AND TRICKS
QUANTITATIVE DATA ANALYSES
Before performing any quantitative data analysis, read and understand the data you want to use.
Allow your evaluation questions and the statistical expertise available to you to guide your analysis plan. If you need to conduct complex statistical analysis beyond the expertise of your team, consider engaging experienced researchers or evaluators.
Statistical significance can be hard to obtain if you have a small number of cases. Instead, use descriptive analyses and consider complementing quantitative findings with qualitative methods that are more informative for small samples.
QUALITATIVE ANALYSIS
Analysis of qualitative data involves the identification, examination, and interpretation of patterns and themes in textual data (e.g., interview or focus group transcripts, narratives within medical or program records, and open-ended survey questions). According to Bernard,14 qualitative analysis is focused on “the search for patterns in data and for ideas that help explain why those patterns are there in the first place.”
Qualitative data analysis should be a systematic and iterative process. It involves reading and familiarizing oneself with the data, developing a coding scheme (including code definitions), applying codes to relevant portions of the text, identifying themes, and interpreting your results. The process can be somewhat fluid, so you will likely move back and forth between steps, particularly in the early stages of analysis.
TIPS AND TRICKS
PLANNING FOR QUALITATIVE DATA ANALYSIS
Review your evaluation questions.
They will be important in guiding your analysis.
Start early.
Start reviewing data as soon as they become available. Although you may not want to make significant changes based on a single focus group or interview, early findings can serve as a pilot that guides refinements to the data collection process. Starting early may also help to ensure that the work gets done, as analyzing qualitative data is very time-consuming.
Determine the level of rigor required for your evaluation and assign staff time accordingly.
Although best practice in qualitative data analysis requires two or more people to participate in the coding and analyses, this may not be feasible due to resource limitations.
Obtain the necessary tools for data analysis.
If you have small amounts of data, they can be analyzed manually or using Microsoft Word or Excel. However, qualitative analysis software (such as NVivo or ATLAS.ti) is very useful once the amount of data grows. Software packages can be expensive to purchase, but most universities offer access to faculty and students for free or at a reduced cost. Even if paying full price, the software will likely pay for itself in labor saved and will result in a more reliable and trusted analysis.
[Diagram: the qualitative analysis process: data collection and transcription; read and become familiar with data; develop codebook; apply codes to data; identify themes and patterns; interpret results.]
Coding the Data
Before you begin any analysis of qualitative data, read and re-read several of the documents to be coded (“source documents”) to familiarize yourself with the content. After reviewing source documents, you can begin the process of coding your data. “Coding” refers to the process of identifying and labeling blocks of text, essentially creating an electronic filing system that helps you to systematically retrieve the information you need when you need it. For example, you might code text for “professions” if you want to examine how different professions and specialties respond to an ECHO clinic.
Coding necessitates a “codebook,” consisting of a list of codes and their definitions. The codebook will include pre-identified codes created based on the questions you asked participants or your own expectations regarding the data, as well as codes that identify themes that arise from the data themselves (see Table 4). Multiple codes can be applied to a single block of text, so code definitions should be broad. You may also include codes that cut across themes (e.g., knowledge, satisfaction, recommendation). Initial codebooks can be adapted once or twice as the codes are applied, but each change requires recoding previously-coded documents, so aim to make adaptations early and avoid repeated changes.
TABLE 4: EXAMPLES OF CODES FOR QUALITATIVE DATA ANALYSIS
CODE: Meds
Definition: Participant mentions medications, including prescriptions, patient adherence, and outcomes.
CODE: Knowledge
Definition: Participant discusses the program’s impact on his/her knowledge related to new treatments, best practices, or caring for the target population.
Example text response (Codes: Meds, Knowledge): “Before I began attending Project ECHO sessions, I typically prescribed [X medication] to treat depression, but I learned from the ECHO clinics that [Y medication] is actually more effective and has fewer side effects for patients who are seniors. So, now for my older patients, I prescribe Y instead of X.”
CODE: Screening
Definition: Participant mentions screening, including screening-related practices, uptake, and results.
Example text response (Codes: Screening, Knowledge): “After Dr. A discussed how common depression is among adolescents, and how it sometimes manifests differently in teens than in adults, I started to screen all my adolescents for depression instead of only those who seem to be exhibiting symptoms, so I guess that was one lesson I really took to heart.”
CODE: Access
Definition: Participant describes access to care, or barriers or facilitators related to accessing care.
Example text response (Codes: Access): “I honestly didn’t have anyone to refer my depressed patients to, or my schizophrenic patient to. There are really only three primary care doctors who serve this community. There aren’t any psychiatrists within 30 miles of us.”
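The structure of a codebook like the one in Table 4 can be sketched in code, although in practice coding is a human, judgment-based process (typically done in qualitative analysis software). The keyword matching below is only illustrative, and every code, keyword, and excerpt is hypothetical:

```python
# Hypothetical codebook: code names and definitions.
codebook = {
    "Meds": "Mentions medications, prescriptions, adherence, or outcomes",
    "Knowledge": "Discusses the program's impact on the participant's knowledge",
    "Access": "Describes access to care, or barriers/facilitators to care",
}

# Hypothetical transcript excerpts.
segments = [
    "I learned to prescribe a safer medication for my older patients.",
    "There aren't any psychiatrists within 30 miles of us.",
]

# Toy keyword lists standing in for human coding judgments.
keywords = {
    "Meds": ["medication", "prescribe"],
    "Knowledge": ["learned"],
    "Access": ["miles", "refer"],
}

# Apply codes; a single segment can receive multiple codes.
for text in segments:
    codes = [code for code, words in keywords.items()
             if any(word in text.lower() for word in words)]
    print(codes, "->", text)
```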
TIPS AND TRICKS
CODING YOUR DATA
Keep coding simple.
Use short, clearly defined and consistent codes to make analysis easier.
Test it out early.
Review and revise your coding scheme early in the analysis process. You can add, collapse, expand and revise the coding categories at that point. However, do not continue to change your codes or you will end up duplicating your efforts.
Keep an open mind.
Be on the lookout for new or unexpected themes that may arise in the data. Avoid making assumptions about what should be there or what you expect to find.
Work with others.
Discuss data and codes with others who are familiar with the program or topic. Multiple perspectives are very helpful!
Focus on the data, not the codes.
Remember that the data are most important, not the codes or code names. The codes are simply a way to organize and easily access the data. Always go back to the data captured within a code for analysis and reporting.
Identifying Themes and Interpreting Results
Once you have completed your coding, identify key themes and patterns within the data. Themes that are often of interest in program evaluations include:
Participant perspectives on the purpose and value of the program
Descriptions of outcomes from the program
Assessment of the strengths and weaknesses of the program’s design or implementation; and
Lessons learned.
To identify themes, extract and review data from one code at a time, or from two or more codes that frequently overlap, in order to more clearly examine participant perspectives on each theme or aspect of the program. Pulling out coded data and reviewing themes individually reduces a huge set of data into more manageable pieces.
When reviewing the data, look for patterns related to:
Similarities and differences
Consider when they arise and for whom they occur.
Frequencies
Assess whether a particular idea is common or rare within the data.
Sequences
Note the order in which ideas are discussed, and whether certain themes precede others.
Correspondence
Consider whether there is a relationship between key themes and other activities or events.
Causation
Assess whether participants regularly indicate a cause or contributing factor to key themes or ideas.15
One of the most difficult aspects of working with qualitative data is extracting the important patterns in a way that is systematic, comprehensive and relevant to your evaluation, as even small evaluations can end up with a large amount of qualitative data. Be selective and choose a limited number of key themes based on:
1. How frequently the theme was discussed by participants; and
2. Relevance to your program, your evaluation questions, your funder (or future funders), and other stakeholders.
The frequency of a particular theme is reported as an assessment of how often or rarely a particular idea arose in the data. Although numerical counts are not encouraged (as the nature of qualitative data prohibits use of quantitative or statistical analyses), certain information from qualitative sources (e.g., participant characteristics, participation in a particular activity, experience of a specific outcome) may be quantifiable and useful to describe the study population or the strength of a particular theme.
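A minimal sketch of such counts in Python, using hypothetical coded segments, tallying how often each theme arose and among how many participants:

```python
from collections import Counter

# Hypothetical (code, participant) pairs produced during coding.
coded_segments = [
    ("Access", "P01"), ("Meds", "P01"), ("Access", "P02"),
    ("Knowledge", "P02"), ("Access", "P03"), ("Knowledge", "P03"),
]

# How often each theme arose, and among how many distinct participants.
segment_counts = Counter(code for code, _ in coded_segments)
for code, n in segment_counts.items():
    participants = {p for c, p in coded_segments if c == code}
    print(f"{code}: {n} segments across {len(participants)} participants")
```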
Aer themes are identied, discuss preliminary ndings and interpretations with
program stakeholders. Consider how lessons learned can be applied to various
program components, and to what extent qualitative and quantitative ndings are
consistent with each other.
ECONOMIC ANALYSIS
In addition to tracking the influence of an ECHO program on participants and the patients they serve, there is increasing interest in assessing whether there are cost savings associated with the program and, if so, whether those savings offset the cost of operation. These questions refer to the “business case” or the “return on investment” (ROI) of an ECHO program.
Making the Business Case for an ECHO Program
A useful operational definition of a “business case” for health care quality programs is provided by Leatherman et al.:
“A business case for a health care improvement intervention exists if the entity that invests in the intervention realizes a financial return on its investment in a reasonable time frame, using a reasonable rate of discounting. This may be realized as ‘bankable dollars’ (profit), a reduction in losses for a given program or population, or avoided costs. In addition, a business case may exist if the investing entity believes that a positive indirect effect on organizational function and sustainability will accrue within a reasonable time frame” (p. 18).16
Some ECHO programs have a clear social or economic case, meaning that they offer a benefit to society in general (e.g., healthier patients, happier health care providers) but lack a clear business case for the program’s sponsor. For example, a diabetes management ECHO program led by a hospital might result in fewer complications related to uncontrolled diabetes (foot amputation, neuropathy, retinopathy), which would likely reduce health care expenditures in the long term. However, those cost savings would not directly benefit the hospital; in fact, the hospital might lose money if it operates in a fee-for-service environment. Instead, savings from this program would likely accrue to a health plan many years after the initial investment was made, or to government if the patients affected also relied on publicly-funded health insurance (e.g., Medicare or Medicaid). An employer may also benefit if more effective diabetes management results in higher productivity. Although stakeholders (including health plans, hospitals, or other funders) consider a range of benefits (other than costs) when making decisions around Project ECHO, the mismatch in terms of who pays for, who administers, and who benefits from an ECHO program can lead to difficulties in establishing sustainable financing structures.
Dening Return on Investment (ROI)
One of the most eective ways to make a business case for a health care intervention
is to calculate the ROI of the program. ECHO stakeholders oen calculate the nancial
ROI of ECHO programs to assess whether the program is a good investment, whether
it should be continued, how it might become sustainable, and whether the model
should be adapted to address other health care shortages in the community.
ROI is a ratio of “return” to a given “investment” (see Equation 1).17 Return here refers to the monetary value of the benefits of the program, while investment refers to costs associated with delivering the program. Traditional ROI calculations are purely financial and only include return and investment from activities that can be quantified in dollar terms.
Equation 1: ROI = Return / Investment
In interventions focused on health care delivery and quality interventions (like most Project ECHO programs), the return in an ROI calculation typically refers to the net savings accrued from changes in health care utilization (e.g., fewer hospitalizations, fewer medications, fewer emergency room visits), whereas investment refers to the cost of delivering the program (see Equation 2).
Equation 2: ROI of ECHO program = Net savings from changes in health care utilization / Costs of delivering ECHO program
Net savings from changes in health care utilization can be estimated from data directly provided by program participants (e.g., via surveys or case presentations), or from administrative data on patients or others impacted by the program (e.g., health insurance claims data, electronic health records).
Calculating ROI enables you to determine whether spending money on a particular program will likely result in future savings. It also allows you to compare the relative value of several different programs. For instance, if a given dollar invested in program A has a larger return than the same dollar invested in program B, then the organization theoretically should invest in program A. Of course, this is only true if the ROI estimates include the most relevant, quantifiable benefits and costs. Non-monetary benefits or costs are also important factors that should be considered when deciding where to invest resources.
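As a minimal sketch of the arithmetic in Equation 2, with all figures hypothetical (in a real evaluation, the savings and cost inputs would come from your claims data, surveys, or program budget):

```python
# Hypothetical inputs to Equation 2.
averted_hospitalizations = 12
cost_per_hospitalization = 9_500   # e.g., from claims data or the literature
added_medication_costs = 20_000    # increased utilization offsets some savings

net_savings = (averted_hospitalizations * cost_per_hospitalization
               - added_medication_costs)   # the "return"
program_cost = 85_000              # staff, specialist time, technology ("investment")

roi = net_savings / program_cost
# Per this guide: ROI > 1 means net savings; between 0 and 1 means savings
# that do not cover program costs; negative means higher costs, not savings.
print(f"ROI = {roi:.2f}")
```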
Estimating Return
Expected benefits (and the associated dollar value) will vary depending on the ECHO program being evaluated; some programs may reduce total health care costs for patients with a particular condition (often through reductions in hospitalizations, emergency room visits, or travel costs for obtaining specialty care), while others may focus on preventing the development of a condition in the first place.
To estimate return, first enumerate all the benefits of a program (a list of common benefits identified for ECHO programs is available on Box.com). Then, systematically assess how they can be measured and valued for the ROI calculation. Monetary values associated with ECHO benefits can be determined through actual data (e.g., claims data on hospitalization costs) or through a literature review (e.g., the average cost for an inpatient hospitalization for a Medicare beneficiary, or the average cost of a particular course of treatment). In the end, your estimated return will be the sum of all the costs that were averted through the program.
Some benets of ECHO programs may take place far into the future, rather than
within the project period being studied. For example, an ECHO program targeting
diabetes may teach clinicians how to prevent and manage diabetes by helping
patients eat healthy, exercise, and adhere to their medications. In this case, increased
use of high-cost prescriptions may lead to increased health care costs during the
early years of the program. However, a reduction in diabetes complications (e.g., foot
amputations, blindness, neuropathy) in the long-term may be large enough to show
net savings. In this case, you may need to estimate future health care utilization and
costs to fully capture the return or value of the program.
When ECHO programs target conditions for which health care utilization and costs
must be estimated into future years, return estimates can be based on forecasts
or computer simulation models designed to capture disease progression (though
ination will need to be taken into account).
18, 19
For example, a forecasting or
computer simulation model might predict that 10 out of 1,000 patients with
diabetes will develop retinopathy in the next year. However, we might estimate that
the ECHO program will result in a 30 percent reduction in the number of patients
who develop retinopathy in the next year (meaning 3 cases of retinopathy were
averted by the ECHO program). In this case, we could estimate return (or net health
care savings) as follows:
Return = (number of averted retinopathy cases) × (average cost of care for one patient with retinopathy over a one-year period)
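As a rough sketch in Python, this calculation might look as follows. The 10-in-1,000 incidence and 30 percent reduction come from the hypothetical scenario above; the average annual cost of retinopathy care is an illustrative assumption, not a figure from this guide.

```python
# Sketch of the averted-case return estimate described above.
# The predicted case count and 30% reduction follow the hypothetical
# scenario in the text; the average annual cost is an assumption.

predicted_cases = 10          # retinopathy cases forecast without ECHO (per 1,000 patients)
relative_reduction = 0.30     # assumed program effect

averted_cases = predicted_cases * relative_reduction   # 3 cases averted

avg_annual_cost = 5_000.00    # assumed cost of one year of retinopathy care ($)

estimated_return = averted_cases * avg_annual_cost
print(f"Averted cases: {averted_cases:.0f}")
print(f"Estimated return (net health care savings): ${estimated_return:,.0f}")
```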
Estimates can be obtained from the results of existing models or from data reported in peer-reviewed journal articles or other reputable sources. It is also worth noting that, depending on the evaluation resources available, new models to estimate health care utilization and costs can be developed. In general, you will want to engage an experienced health economist or health services researcher in evaluations that use forecasting or modeling, as these analyses become complex quickly.
Some important benefits of programs like Project ECHO, such as improved provider satisfaction or fewer days of missed work, do not have an obvious monetary value. There are two ways to incorporate these benefits into your final ROI analysis. The first is to track non-monetary benefits and use them to evaluate whether your final ROI calculation is under- or overestimating return. Reporting non-monetary benefits along with the ROI results will provide a more detailed and nuanced picture of your program.
The second method of accounting for non-monetary program benefits is to examine the economic literature and/or work with an economist to assign a monetary value to a given benefit. For example, economists have estimated that depression costs society $44 billion annually due to lost productivity (meaning absences from work and lost productivity while at work).[20] However, monetizing benefits can lead to a more complex analysis; thus, it may be more feasible for lower-resourced programs to focus solely on those benefits with an obvious monetary value.
Estimating Investment
To estimate the cost of an ECHO program, add up all expenses related to running the program for a specific period of time. Relevant expenditures may include the cost of staff and specialist time dedicated to developing and delivering the program, the cost of time participants spend attending ECHO sessions, costs associated with performance monitoring and evaluation (e.g., contracting an outside evaluator or internal staff time), expenses related to hardware, software, supplies and overhead, and indirect costs.
Costs for program delivery should be estimated using the same timeframe that was used to estimate return. For example, if the ECHO clinic is delivered for six months, then the period to assess net savings could be six months after the program began
or aer the rst cohort completed the program. Of course, it is useful to take into
account the length of time required for the benets (return) to become visible in the
available data when choosing your timeframe.
Putting it all Together
Once you have estimated your return and investment figures, divide the estimated net savings by the estimated investment to obtain your ROI. The ROI can then be interpreted based on whether the value is negative or positive, and whether it is greater or less than one.
A negative ROI indicates that your program resulted in a loss of money (or higher
health care costs) rather than savings (or reduced health care costs).
A positive ROI that is greater than one indicates that there is a net savings
resulting from your program.
An ROI that is between zero and one indicates that the program saved money
(reduced health care costs), but not enough to cover the cost of delivering it; in
other words, the program costs more than it saves.
The nal ROI calculation can be interpreted as: For every dollar invested in the
program, and estimated $____ is (saved/lost). ROI can also be interpreted as
a percentage (i.e., the funder saw a __% return on an investment). Note that
stakeholders will have dierent thresholds around the level of ROI that is sucient to
convince them to invest in a program; a positive ROI is not always enough.
The hypothetical example below is somewhat simplistic, but it provides the basic information required to estimate the return on investment that might be expected from a program designed to reduce hospitalizations. The calculation could also be extended by adding changes in other health care expenditures (e.g., medication, primary care visits), lengthening the period during which net savings are calculated using forecasting or simulation, or using more conservative estimates of hospitalization costs (which would result in a more conservative ROI).
HYPOTHETICAL ROI EXAMPLE:
An ECHO program focused on improving access to care for mental health conditions analyzes claims data from a health plan and finds that, compared to beneficiaries being treated by non-ECHO providers, those treated by ECHO providers experience fewer emergency room visits and fewer hospitalizations, leading to lower costs associated with both services. To calculate their ROI, they conduct the following analysis:
Return (health care costs saved):
$200,000 (emergency room savings) + $400,000 (hospitalization savings) = $600,000
Total Investment:
$140,000 (staff time) + $30,000 (fringe) + $20,000 (indirect) + $10,000 (supplies) = $200,000
ROI:
$600,000 / $200,000 = 3
In this case, ROI is positive and equal to 3, which can be interpreted as: every $1 invested in this ECHO program generates $3 in return (health care savings). Sometimes ROI is reported as a net benefit; in this example, spending $1 generates an additional $2 after covering program costs.
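Expressed in a few lines of Python, the example's arithmetic and the interpretation rules from the previous section might look like this (a sketch only; the variable names are ours, and the thresholds mirror the text above):

```python
# Sketch of the hypothetical ROI example, applying the interpretation
# rules from the "Putting it all Together" section.

savings = 200_000 + 400_000                       # ER + hospitalization savings ($)
investment = 140_000 + 30_000 + 20_000 + 10_000   # staff, fringe, indirect, supplies ($)

roi = savings / investment     # 3.0
net_benefit = roi - 1          # dollars returned per $1 after covering costs

if roi < 0:
    verdict = "net loss"
elif roi < 1:
    verdict = "savings do not cover program costs"
else:
    verdict = "net savings"

print(f"ROI = {roi:.1f} ({verdict})")
print(f"Each $1 invested returns ${net_benefit:.0f} beyond program costs")
```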
Making Assumptions when Estimating Return on Investment
There are many assumptions that need to be made in ROI calculations, since not all
data are readily available. As was discussed above, you might make assumptions
around the average cost of a hospitalization, emergency room visit, or particular
course of treatment. You might also make assumptions regarding the calculation of
health care utilization and costs, comparison or control groups used, and timeline of
the program.
While assumptions are common in ROI calculations, you should develop a clear and transparent plan regarding the assumptions used in your calculations to ensure they are deemed appropriate by stakeholders. For example, a health plan may not be interested in returns that occur five years after the program is implemented if they know that most patients change health plans every three years. It is also helpful to conduct sensitivity analyses, which means running the same analyses using different assumptions (e.g., both an optimistic and a conservative estimate of cost savings) to understand how various assumptions and scenarios affect the ROI.
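As a sketch, a basic sensitivity analysis might simply recompute the ROI across a handful of savings scenarios; the scenario values below are illustrative assumptions, not data from this guide.

```python
# Sketch of a one-way sensitivity analysis on ROI.
# Scenario values are invented for illustration.

investment = 200_000  # program cost over the evaluation period ($)

scenarios = {
    "conservative": 250_000,   # low estimate of health care savings ($)
    "base case": 600_000,
    "optimistic": 900_000,
}

for name, savings in scenarios.items():
    print(f"{name:>12}: ROI = {savings / investment:.2f}")
```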
See Appendix E for additional resources on calculating return on investment.
TIPS AND TRICKS
ROI CALCULATORS
Some may nd online ROI calculators useful in providing a general
sense of how much value you are getting from a given investment in
an ECHO program.
Although they provide imperfect estimates, these calculators may
provide some preliminary results that can be compared to your own
ROI calculations that are based on more complete data.
For examples of relevant ROI calculators available online,
see Appendix E.
USING EVALUATION
FINDINGS
Using and disseminating findings are the final components of an evaluation. Evaluation findings can be used to:
Improve your program.
Results may indicate a need to modify program format, adapt your curriculum,
or implement strategies to increase program engagement and participation.
Demonstrate accountability.
Documentation of required activities and outputs is often a condition of grant funding. Process evaluations can be key components of these reports.
Build awareness and educate others.
As recognition of the Project ECHO model grows, so will interest in your findings. Evaluation results will likely be of interest to a variety of stakeholders across sectors, including those not directly involved in your program (e.g., the broader ECHO community, academia, professional societies, policy makers, funders and peers).
Engage new participants and stakeholders.
Demonstrating the benefits of your ECHO program can be key to increasing buy-in. Administrators of organizations such as FQHCs, ACOs, and hospital systems may be particularly interested in understanding the business or economic case for investing staff time in program participation.
Enhance program sustainability.
Demonstrating the ability of an ECHO program to achieve specific and clearly defined outcomes that are of interest to administrators, funders, policymakers, and others is essential to program sustainability. Both qualitative findings regarding improved patient and provider outcomes and a detailed analysis of ROI have been noted as important methods of "making the case" for ongoing support of ECHO programs.
Findings from an evaluation can be disseminated through presentations to stakeholders, reports or other publications. ECHO leaders have also indicated the importance of sharing findings through informal discussions with stakeholders
and with leaders in health care delivery and health policy. Different sources of data (qualitative and quantitative) will appeal to different stakeholders, and any reporting or presentation of information must take the audience, their priorities and their understanding of the material into account. An administrator at an FQHC might be most interested in professional satisfaction among providers, as her main goal in engaging with the program may have been to reduce physician turnover. A policymaker, on the other hand, may prefer a qualitative understanding of how the program impacts his constituents so that he can present a compelling case for funding to the broader community. Alternatively, the leader of an ACO may be singularly focused on achieving a sufficient level of savings in health care costs to offset the cost of administering the program.
When presenting quantitative findings, use easy-to-read charts, tables and graphs to make complex data analyses comprehensible. Present terms like ROI in plain language. Reports or presentations based on qualitative research should summarize findings and present illustrative quotes. Not only does sharing actual (de-identified) quotes offer a uniquely compelling perspective that cannot be captured via a summary, it also elevates the voices of relevant stakeholders and ensures they are part of the record when decisions around programming are made.
When sharing your evaluation results, consider which findings and communication styles will be most effective and relevant for your target audience. While publishing articles in peer-reviewed journals is an important and well-respected method of dissemination within the academic community, a shorter report that clearly and concisely describes findings might be more accessible to non-academic audiences, including many of your stakeholders. Writing blogs, posting on social media, and publishing articles in traditional news sources are also effective methods of communicating your findings to a broader audience, which is important in building interest in your program. Developing outreach materials containing clear, visual graphics that convey outcomes of interest to administrators (e.g., provider satisfaction or self-efficacy, reductions in cost) can be effective in garnering new interest in your program.
CONCLUSION
This guide was designed to support leaders of ECHO hubs with limited evaluation
resources in assessing the implementation, quality, outcomes and ROI of their
ECHO program. Every ECHO program is different, and each will have unique questions related to program outcomes, improvement, expansion, and sustainability. The aim of this guide is to provide ECHO programs with support around:
The justification for evaluation;
Development and implementation of an evaluation plan;
Nuances related to planning and conducting ECHO evaluations in real-world settings;
Evaluation approaches that are particularly useful for ECHO programs, especially related to data collection and analysis; and
Strategies for reporting and disseminating findings to stakeholders.
The rapid growth of Project ECHO suggests that many clinicians, funders,
policymakers, and health care administrators already see great promise in the model.
However, it is important that the evidence base grows along with implementation,
both for quality assurance purposes and to understand how and when the model
works and does not work. Incorporating evaluation into your work is an essential step
towards ensuring that health care resources are directed in a way that will provide
maximum benefit to patients, providers, and the broader health care system.
ACKNOWLEDGEMENTS
Support for this work was provided by the New York State Health Foundation
(NYSHealth) and the GE Foundation. The mission of NYSHealth is to expand health
insurance coverage, increase access to high-quality health care services, and improve
public and community health. The mission of the GE Foundation is to build capacity
and strengthen communities all over the world. The views presented here are those of
the authors and not necessarily those of the foundations or their directors, officers, or staff.
In addition to the New York State Health Foundation and the GE Foundation, the authors would like to thank the leaders and staff of Project ECHO programs and evaluations who volunteered to be interviewed for this guide, as well as the staff at the University of New Mexico's ECHO Institute for their support and guidance during its development. The information presented in this guide does not necessarily reflect the viewpoints of the individual interviewees or their organizations; any errors contained herein are the authors' own.
AUTHORS:
Elisa Fisher, MPH, MSW
Assistant Deputy Director, Population Health and Health Reform,
Center for Health Policy and Programs, The New York Academy of Medicine
Kumbie Madondo, PhD
Research Scientist,
Center for Health Innovation, The New York Academy of Medicine
Linda Weiss, PhD
Director,
Center for Evaluation and Applied Research, The New York Academy of Medicine
José A. Pagán, PhD
Director,
Center for Health Innovation, The New York Academy of Medicine
REFERENCES
1. Arora, S., Kalishman, S., Dion, D., Som, D., Thornton, K., Bankhurst, A., ... & Komaramy, M. (2011). Partnering urban academic medical centers and rural primary care clinicians to provide complex chronic disease care. Health Affairs, 30(6), 1176-1184.
2. University of New Mexico School of Medicine. (2016). Project ECHO: Model. Available at: http://echo.unm.edu/about-echo/model/. Last accessed: January 13, 2016.
3. Zhou, C., Crawford, A., Serhal, E., Kurdyak, P., & Sockalingam, S. (2016). The Impact of Project ECHO on Participant and Patient Outcomes: A Systematic Review. Academic Medicine, 91(10), 1439-1461.
4. Centers for Disease Control and Prevention. Types of Evaluation. Available at: https://www.cdc.gov/std/Program/pupestd/Types%20of%20Evaluation.pdf. Last accessed: December 2, 2016.
5. Centers for Disease Control and Prevention. Types of Evaluation. Available at: https://www.cdc.gov/std/Program/pupestd/Types%20of%20Evaluation.pdf. Last accessed: December 2, 2016.
6. Centers for Disease Control and Prevention. Types of Evaluation. Available at: https://www.cdc.gov/std/Program/pupestd/Types%20of%20Evaluation.pdf. Last accessed: December 2, 2016.
7. Centers for Disease Control and Prevention. Types of Evaluation. Available at: https://www.cdc.gov/std/Program/pupestd/Types%20of%20Evaluation.pdf. Last accessed: December 2, 2016.
8. Scriven, M. (2003/2004). Differences between evaluation and social science research. The Evaluation Exchange, Harvard Family Research Project, 9(4).
9. Moore, D. E., Green, J. S., & Gallis, H. A. (2009). Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. Journal of Continuing Education in the Health Professions, 29(1), 1-15.
10. Zhou, C., Crawford, A., Serhal, E., Kurdyak, P., & Sockalingam, S. (2016). The Impact of Project ECHO on Participant and Patient Outcomes: A Systematic Review. Academic Medicine, 91(10), 1439-1461.
11. Rogers, T., Chappelle, E. F., Wall, H. K., & Barron-Simpson, R. (2011). Using DHDSP Outcome Indicators for Policy and Systems Change for Program Planning and Evaluation. Atlanta, GA: Centers for Disease Control and Prevention. Available at: https://www.cdc.gov/dhdsp/programs/spha/evaluation_guides/docs/using_indicators_evaluation_guide.pdf
12. Matt, V., & Matthew, H. (2013). The retrospective chart review: important methodological considerations. Journal of Educational Evaluation for Health Professions, 10.
13. Abel Ickowicz, M. D. (2006). A methodology for conducting retrospective chart review research in child and adolescent psychiatry. J Can Acad Child Adolesc Psychiatry, 15(3), 127.
14. Bernard, H. R. (2011). Research Methods in Anthropology: Qualitative and Quantitative Approaches. Introduction to Qualitative and Quantitative Analysis. Rowman Altamira: Lanham, MD.
15. Saldana, J. (2009). The Coding Manual for Qualitative Researchers. Sage Publications: London. Available at: http://stevescollection.weebly.com/uploads/1/3/8/6/13866629/saldana_2009_the-coding-manual-for-qualitative-researchers.pdf. Last accessed: February 16, 2017.
16. Leatherman, S., Berwick, D., Iles, D., Lewin, L. S., Davidoff, F., Nolan, T., & Bisognano, M. (2003). The business case for quality: case studies and an analysis. Health Aff (Millwood), 22(2), 17-30.
17. NACDD (National Association of Chronic Disease Directors). (2009). A Practical Guide to ROI Analysis. Atlanta, GA: National Association of Chronic Disease Directors.
18. Li, Y., Kong, N., Lawley, M. A., & Pagán, J. A. (2014). Using systems science for population health management in primary care. J Prim Care Community Health, 5(4), 242-246.
19. Prezio, E. A., Pagán, J. A., Shuval, K., & Culica, D. (2014). The Community Diabetes Education (CoDE) program: cost-effectiveness and health outcomes. Am J Prev Med, 47(6), 771-779.
20. Stewart, W. F., Ricci, J. A., Chee, E., Hahn, S. R., & Morganstein, D. (2003). Cost of lost productive work time among US workers with depression. JAMA, 289(23), 3135-3144.
APPENDIX A
DATA COLLECTION
METHODS: EXAMPLES
FROM ECHO PROJECTS
It may be helpful to review evaluations conducted by other ECHO programs to
determine what data is relevant and realistic for you to collect and analyze in your
own program. The examples provided here demonstrate how various data sources
have been used by others to evaluate Project ECHO programs (or their components).
Note that this is not meant to be an exhaustive list of the many ways particular types
of data have been used; instead, the goal is to describe some concrete examples that can provide guidance in developing your own evaluation. See the ECHO Institute's
Box.com platform for sample evaluation materials (e.g., surveys, interview guides)
from other Project ECHO programs.
PRIMARY DATA SOURCES FOR PROJECT
ECHO EVALUATIONS
INTERVIEW: asking a series of open-ended questions to an individual
for the purpose of gathering detail-rich qualitative data.
Benets and challenges of using interviews include:
BENEFITS
Can be used for collecting nuanced data that
cannot be easily quantied
No baseline required; can be used for programs
that have already been implemented or have
been in existence for a long time.
Provides an opportunity to engage with
stakeholders
CHALLENGES
Can be time consuming, both in terms of
conducting the interviews and analyzing the data
Scheduling time to conduct the interview can be
a challenge, particularly for providers
Privacy may be of concern, especially when
interviewing patients
All data are self-reported and therefore
susceptible to bias
Interviews with ECHO participants are a fairly common method of gathering data for
both process and outcome evaluations. For example, an evaluation conducted by The
New York Academy of Medicine in partnership with the Project ECHO GEMH (Geriatric
Mental Health) team at the University of Rochester Medical Center utilized interviews
for quality improvement purposes and to assess changes in participating clinicians’
knowledge, attitudes and behaviors resulting from their program. Questions related
to process and performance monitoring examined participants’ perceptions of the
accessibility and quality of the program, including the technology platform, the
timing and length of the teleECHO clinics, and whether there were aspects of the
program that the participants would recommend changing (process evaluation).
Interview questions also examined participants' self-reported changes in knowledge, self-efficacy, treatment practices and professional satisfaction, and the spread of
knowledge from Project ECHO beyond themselves to their colleagues. Participants
were also asked to what extent they saw changes in patient health outcomes as a
result of lessons or recommendations from Project ECHO (outcome evaluation).
ECHO evaluations can also include interviews with patients. For instance, the
Ontario Chronic Pain ECHO Program is in the process of conducting interviews
with patients. They plan to collect data on patient satisfaction with the care they
receive, as well as patient-reported changes in pain and function levels, mood, sleep
and quality of life three months after their providers completed their participation
in the program. Interviews with patients are less common than interviews with
participating clinicians, due to the added time and cost of outreach and overcoming
privacy concerns.
FOCUS GROUP: a facilitated group discussion conducted with the
goal of gathering qualitative information from
several people (usually six to twelve people) at once.
Benets and challenges of using focus groups include:
BENEFITS
Fairly ecient way to collect qualitative data
from multiple people at once
Produces rich, nuanced data, as participants
bond with one another, and build o each others
comments
Useful method of assessing how program
participants prioritize ideas, or which ideas
generate the most enthusiasm or traction.
CHALLENGES
Facilitation requires training and experience
Limited generalizability beyond those engaged in
the group(s)
Data are time consuming to analyze
Individual personalities can influence group processes and perceptions, biasing the results
Several ECHO programs have incorporated focus groups examining participant experiences into their evaluations. For example, as part of an evaluation of an endocrinology ECHO program that trains community health workers in New Mexico, the University of New Mexico conducted four focus groups with a total of 21 program participants. The groups were conducted in person at a training session held for this particular ECHO program. During the groups, participants described their perceptions of how the program influenced their confidence and competency related to providing care to patients, as well as their access to the supportive resources they needed to do their work effectively.[1]
Similarly, the Missouri Telehealth Network (MTN) has been involved with evaluating
many of the ECHO programs taking place in their state (which range from autism to
endocrinology to dermatology), and they regularly incorporate focus groups into their
evaluation work. Through these groups, MTN explores participant satisfaction and
areas for program improvement. Facilitators ask participants to discuss their reasons
for participating in the selected ECHO program, whether the program is meeting
their needs, and where there is room for improvement. Because their participants are
dispersed across the state, they schedule the groups to take place immediately after
a teleECHO session (for only about 30 minutes) via teleconference, which enables
clinicians to participate remotely.
For more information on using focus groups for Project ECHO Evaluations, see
Appendix C.
OBSERVATION: a method of gathering data by watching and
documenting events or behavior that take place
during or in relation to a program.
Benets and challenges of using observations include:
BENEFITS
Facilitates increased understanding of program
operations
Requires minimal time or data collection burden
on participants
Oers an alternative to self-report, which may
be biased
CHALLENGES
Evaluator presence can lead those being observed to alter their behavior (termed the "Hawthorne effect")
Requires careful planning and note taking; otherwise, observations will lack structure and data will be unreliable and difficult to interpret
Can be time consuming and expensive, especially if the goal is observation of multiple sites or activities
Observations have proven useful for Project ECHO programs evaluating fidelity (adherence to the ECHO model) and implementation of the ECHO model. For instance, as part of a process evaluation of their program, ECHO Autism at the University of Missouri used a 25-item Facilitation Score Card to examine fidelity as part of a larger evaluation of their ECHO program. The scorecard examined key indicators of model adherence and facilitator engagement of participants. Observers of teleECHO sessions watched clinics and rated each indicator on a 5-point scale (1 = "strongly disagree" to 5 = "strongly agree"). The percent of items rated as "strongly agree" or "agree" out of the total number of items assessed was used to calculate a fidelity score for each clinic. Through observations, they found that 80% of their teleECHO sessions achieved fidelity to the model.[2] Assessing fidelity is an important quality assurance component of process evaluations; veering too far from the standard implementation of an evidence-based model means that the program being implemented is no longer evidence-based. Sample fidelity scorecards for the ECHO model are available on Project ECHO's Box.com.
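As a rough sketch, the scoring rule described above can be computed in a few lines; the item ratings below are invented for illustration, and the scorecard itself is available on Box.com as noted.

```python
# Sketch of the fidelity score described above: the share of scorecard
# items rated "agree" (4) or "strongly agree" (5) on a 5-point scale.
# These observer ratings are invented for illustration.

ratings = [5, 4, 4, 3, 5, 2, 4, 5, 4, 4]   # one session's item ratings

fidelity = sum(1 for r in ratings if r >= 4) / len(ratings)
print(f"Fidelity score for this session: {fidelity:.0%}")   # 80%
```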
SURVEY OR QUESTIONNAIRE: A series of questions asked in order to gather information from individuals, often about their personal characteristics or their knowledge, attitudes or behaviors.
Benets and challenges of using surveys or questionnaires include:
BENEFITS
Useful for collecting data from a large number
of respondents fairly quickly
Can be administered remotely (e.g., online,
mobile devices, telephone)
Data can be kept condential or anonymous,
which encourages respondents to be more
honest
Facilitates the collection of quantitative results
that can be tested for statistical signicance,
which may be prioritized by some stakeholders
CHALLENGES
May be difficult to obtain a sufficient number of responses
Generally not suited to obtaining information on "why" a particular outcome occurred, or to understanding novel or unexpected phenomena
Surveys are the most common method of collecting data for Project ECHO evaluations. A majority of the published studies on ECHO programs report on data collected via surveys; in addition, programs often conduct surveys as part of their performance monitoring plans.[3]
As part of their evaluation, a University of Chicago ECHO program targeting uncontrolled hypertension in FQHCs relied on several previously developed questionnaires. To assess changes in knowledge, the group used a pre-existing, validated survey assessing hypertension knowledge among primary care providers. After discussions with the researchers who created and validated the survey, the University of Chicago ECHO team adapted the questionnaire and eliminated unnecessary questions in order to reduce the burden on evaluation participants. The group also adapted a self-efficacy survey that was previously created by the ECHO Institute (focused on hepatitis C) to fit their needs in assessing changes related to managing hypertension.[4] Although on a different topic, the group found it was a useful starting point for developing their own survey, as it had been previously tested and used in similar evaluation studies.
The University of Ontario Chronic Pain ECHO program plans to utilize several pre-existing, validated questionnaires to survey patients who are treated by participating clinicians to examine changes in their health outcomes before and after the program. They intend to ask patients to complete the Brief Pain Inventory (BPI) to assess the severity of chronic pain and its impact on daily functioning,[5] the nine-item Patient Health Questionnaire (PHQ-9) to assess depression and depressive symptoms,[6] the Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36) to assess general patient health status, and the Outcome Patient Satisfaction questionnaire to assess patient satisfaction with care. It is important to note that the research team received funding specifically for this evaluation, and that patient surveys generally require more resources (staff, funding and time) than surveys of participating providers.
For more information on using surveys for Project ECHO evaluations, see Appendix B: Survey Toolkit. Additionally, an archive of survey questions that have been used in Project ECHO evaluations is available on Box.com.
SECONDARY DATA SOURCES FOR ECHO
EVALUATIONS
PROGRAM RECORDS: data available from program implementation
activities, including information on
participant engagement, program activities,
and program cost.
Benets and challenges of using program records include:
BENEFITS
Program sta can easily obtain program-specic
information without signicant additional work
Provide descriptive information program
activities, making them useful for performance
monitoring and process evaluation.
CHALLENGES
Some information is superficial, such as counts and lists
Because they are implementation-focused, may be limited with respect to measuring outcomes
If staff are not invested in careful documentation of activities, may contain missing or unreliable information
Several programs have published process evaluation data focused on participation
and engagement, which comes from program records. For example, in 2012, the
University of Washington reported process evaluation data on their ECHO program
for hepatitis C, such as the number of video sessions held, the number of program
participants, and the number of patients managed by participants.
Project ECHO AGE is focused on improving care for patients with behavioral problems related to dementia and/or delirium and is implemented by clinicians at Beth Israel Deaconess Medical Center and their partners. Project ECHO AGE utilized case presentation forms to understand the impact of their program on patient health. The forms, completed by participating clinicians in advance of presenting a case, collected patient demographics and medical history. Because a majority of cases were presented more than once, the team had access to follow-up data on patient health before and after their case was presented in an ECHO session. Through these data, researchers assessed 1) symptoms that led the provider to present the patient's case; 2) types of recommendations provided; 3) clinical outcomes post-case presentation; and 4) hospitalization and mortality post-case presentation.[7]
OTHER DATASETS: datasets collected by an outside entity for
purposes that are not directly related to your
particular program. These can range from health
insurance claims data, to mandated quality
metrics from regulatory agencies, to community,
city, or state-wide surveys.
Benets and challenges of using other, larger datasets include:
BENEFITS
Can be used for context and benchmarking
patient characteristics and outcomes
May contain patient level information, which
can otherwise be costly and dicult to collect,
particularly on a large scale
May be familiar to policymakers and
administrators, thereby increasing their
condence in evaluation ndings
Advance planning for collection of baseline
data is oen not required as data is continuously
collected
CHALLENGES
Information of interest may not be available in datasets
Access to the data may be costly and restricted; restrictions can take significant time to overcome and may limit the analyses you can conduct
Recent data may not be immediately available due to lags in data collection and processing
Datasets are usually large and may therefore require experts with a background in database management and statistics for analyses
Datasets are often de-identified prior to transfer, which makes it difficult to link changes in health outcomes to a specific program
ECHO programs have used a variety of external data sources to evaluate their programs. For example, in 2016, evaluators examined Beth Israel Deaconess Medical Center's ECHO-AGE program using the Minimum Data Set (MDS) 3.0, a clinical assessment completed repeatedly for all nursing home residents in the United States. The dataset provides information on a variety of quality indicators established by the Centers for Medicare and Medicaid Services. Using these data, the evaluation team assessed changes in antipsychotic prescriptions and use of physical restraints, change in need for assistance with activities of daily living, severe pain, pressure ulcers, severe weight loss, loss of bladder or bowel control, catheter insertion, urinary tract infections, depressive symptoms, and falls with major injury. Because the data are aggregated and identifiable at the nursing home level, evaluators could create two groups of facilities: an intervention group (consisting of those who participated in the program) and a control group (consisting of facilities that were similar to those that participated in terms of size, location, and other indicators, but that did not participate in the program).[8]
In 2012, the University of Chicago conducted a retrospective analysis of Medicaid claims data to examine changes in prescribing habits of participating clinicians before and after participation in their hypertension management ECHO program and their pediatric ADHD ECHO program. Using the same dataset, they were also able to compare participants' prescribing behavior to that of non-participating clinicians.[9]
In 2015, The New York Academy of Medicine used data from an insurance plan with significant coverage in upstate New York as part of their external evaluation of the University of Rochester Medical Center's Project ECHO GEMH (geriatric mental health) program. Using data aggregated at the practice level, they compared health care utilization and costs for patients with the targeted mental health conditions before and after the program was implemented. They also compared pre and post data for patients without GEMH conditions to assess spillover effects and general market trends.[10]
HEALTH RECORD REVIEWS (ELECTRONIC AND/OR CHART): data gathered from a review of patient health or medical records, which often includes diagnostic tests performed, treatment provided, patient symptomatology, and patient health outcomes.
BENEFITS
Provides concrete information on treatment practices and patient health outcomes before and after the intervention was implemented
Can be conducted retrospectively or prospectively
Access to patient-level data and outcomes, which can be difficult to otherwise obtain
CHALLENGES
Patient confidentiality regulations and protections can make it difficult to gain access to records
Data may be incomplete or difficult to interpret if data entry practices vary significantly
Difficult to pull information from charts in a consistent manner (in other words, may have poor inter-rater reliability)
As part of an early pilot evaluation, the ECHO team at the University of Chicago reviewed the records of approximately 20 patients whose providers had presented their cases in a Project ECHO program focused on controlling hypertension. To access these records, the team developed a protocol with detailed information on privacy protections for patients, and worked closely with administrators at participating federally qualified health centers (including medical and administrative leadership) to obtain their approval. The ECHO team then developed a chart review and extraction tool that they used to review the records. Researchers were able to compare blood pressure rates of case presentation patients to those of patients treated by hypertension specialists.
With funding from the Health Resources and Services Administration (HRSA),
the University of Missouri’s ECHO Autism program is conducting a large-scale
evaluation that relies on chart review to collect information on changes in provider
behavior resulting from program participation. Through chart review, evaluators will
extract information from 150 practices on adherence to best practices related to
developmental and autism screening, screening for co-morbidities among children
with autism, and medication monitoring for those children who are prescribed
atypical antipsychotic medications.
For additional resources on each of these data sources, see Appendix E:
Additional Resources.
REFERENCES
1. Colleran et al. (2012). Building Capacity to Reduce Disparities in Diabetes: Training Community Health Workers Using an Integrated Distance Learning Model. The Diabetes Educator, 38(3), 386-396.
2. Mazurek, M. O., Brown, R., Curran, A., & Sohl, K. (2016). ECHO Autism: A New Model for Training Primary Care Providers in Best-Practice Care for Children With Autism. Clinical Pediatrics, 0009922816648288.
3. Zhou, C., Crawford, A., Serhal, E., Kurdyak, P., & Sockalingam, S. (2016). The Impact of Project ECHO on Participant and Patient Outcomes: A Systematic Review. Academic Medicine, 91(10), 1439-1461.
4. Masi, C., Hamlish, T., Davis, A., Bordenave, K., Brown, S., Perea, B., ... & Johnson, D. (2012). Using an established telehealth model to train urban primary care providers on hypertension management. The Journal of Clinical Hypertension, 14(1), 45-50.
5. Bann, C., Dodd, S. L., Schein, J., Mendoza, T. R., & Cleeland, C. S. (2004). Validity of the brief pain inventory for use in documenting the outcomes of patients with noncancer pain. The Clinical Journal of Pain, 20(5), 309-318.
6. Kroenke, K., Spitzer, R. L., & Williams, J. B. (2001). The PHQ-9: validity of a brief depression severity measure. Journal of General Internal Medicine, 16(9), 606-613.
7. Catic, A. G., Mattison, M. L., Bakaev, I., Morgan, M., Monti, S. M., & Lipsitz, L. (2014). ECHO-AGE: an innovative model of geriatric care for long-term care residents with dementia and behavioral issues. Journal of the American Medical Directors Association, 15(12), 938-942.
8. Gordon, S. E., Dufour, A. B., Monti, S. M., Mattison, M. L., Catic, A. G., Thomas, C. P., & Lipsitz, L. A. (2016). Impact of a Videoconference Educational Intervention on Physical Restraint and Antipsychotic Use in Nursing Homes: Results From the ECHO-AGE Pilot Study. Journal of the American Medical Directors Association, 17(6), 553-556.
9. Masi et al. (2012). Using an Established Telehealth Model to Train Urban Primary Care Providers on Hypertension Management. Journal of Clinical Hypertension, 14(1), 45-50.
10. Fisher et al. (2016). Telementoring Primary Care Clinicians to Improve Geriatric Mental Health Care. Population Health Management.
APPENDIX B
SURVEY TOOLKIT
Surveys are one of the most common methods used by ECHO programs to collect
data for evaluations because they are typically an inexpensive method of gathering
data from a larger number of participants, and basic data analyses can be conducted
fairly quickly. Because of their popularity and utility for programs with limited
evaluation resources, this appendix provides additional detail on utilizing surveys effectively in Project ECHO evaluation.
THINGS TO CONSIDER WHEN WORKING
WITH SURVEYS
Before conducting a survey, ask yourself:
1. Who will you be asking to take the survey, and will enough people respond?
2. What information do you want to know?
3. Can you use an existing survey, or will you need to create a new one?
4. How can you encourage people to respond to your survey?
5. How will you administer the survey and collect the data?
6. How will you analyze the survey data?
Various considerations related to each of these questions are described below.
1. Who will you be asking to take the survey, and will enough people respond?
Project ECHO evaluations usually survey participating clinicians. Although obtaining patient-level information is often desirable, gaining access to patients who are impacted by the program is normally a challenge (and resource intensive) due to privacy concerns.
When surveying clinicians, consider the job responsibilities, training and educational background of participants. A survey for physicians will likely have different questions than a survey for health care administrators, which will have different questions than a survey for community health workers. If surveys are to be used with people in multiple roles, aim to make the questions broad enough that they are relevant to all survey respondents.
Note that surveys are best when collecting data from relatively large samples.
If you only expect to collect information from a few people, you might consider
interviews instead.
2. What information do you want to know?
Surveys are typically conducted to learn more about individuals’ knowledge,
attitudes and behavior (see Survey Design, below, for more information on each
of these domains).
When surveys are used with ECHO participants, they often ask questions assessing:
Opinions about the program itself
Changes in knowledge around best practices in patient care
Shifts in attitudes toward patients with particular conditions
Changes in confidence and self-efficacy, or the belief in one's ability to provide effective and high-quality care for patients with the target condition or diagnosis
Modifications to treatment practices related to caring for patients with the targeted condition or diagnosis.
Surveys of patients, on the other hand, might include questions related to:
Their level of satisfaction with the care they receive
Treatments they have been prescribed, and/or
Their current health status (or whether they experienced specific health outcomes).
Most surveys also ask participants a few questions about their background and/or demographics in order to be able to describe who is in the sample. Consider what information you will actually use (and how you will use it) before including questions in the survey. You might be interested in data related to a participating clinician's practice setting, educational background or training, or years of experience in the field; or, you might ask about a patient's age, gender, health status, etc.
3. Can you use an existing survey, or will you need to create a new one?
There are three types of surveys you might want to use in your ECHO evaluation:
i. A pre-existing, validated survey;
ii. A previously developed survey that has been pilot-tested but not validated; or
iii. A new survey developed specifically for your program.
Each type of survey has advantages and disadvantages.
i. Pre-existing, validated surveys
A validated survey is one that other researchers have tested and shown to measure what it claims to measure. Using such questionnaires may save time and resources, reducing the need to wordsmith questions and pilot test new instruments. They also allow you to compare your findings with those from other studies and may make it easier to publish your results. Examine the peer-reviewed literature to find validated surveys related to your field, or discuss with experts whether such surveys exist. See Appendix A for examples of ECHO evaluations that have used pre-existing, validated questionnaires.
Despite these benefits, working with validated surveys poses certain challenges:
Some validated surveys are proprietary and require you to pay to use them;
Many have strict rules that forbid even minor adaptations; and
Many are validated for only one population, which may be different from your population; validation in one population or for one condition does not mean the survey is validated for others.
ii. Previously developed survey (pilot-tested but not validated)
Several ECHO programs have developed surveys as part of their own evaluations. Prior usage means that the questions were pilot tested, which improves question clarity and helps reduce data irregularities. As a result, it may be easier to use or adapt a previously developed measure to fit the needs of your evaluation, rather than starting from scratch.
Although some previously developed questionnaires are available on Box.com, you may also want to contact ECHO programs that have done similar work, who may be willing to share surveys that they used in the past. They may also be able to share any lessons learned after they used the instruments, which can help you avoid
unexpected pitfalls. Still, you should always assess an instrument for quality and fit for your particular program before using it.
iii. New survey developed specifically for your program
In some cases, relevant surveys may not exist, or those that do may not capture the information that you feel is important. If this is the case, you may want to develop a survey that is specifically targeted to your program.
If you decide to develop your own survey, be sure to review some basic literature on survey development and to pilot test the instrument before administering it. This can improve data quality and reduce errors that result from unclear questions. See the Survey Design section, below, for more information, along with Appendix E for additional resources on survey development.
4. How can you encourage people to participate?
Surveys are only useful if you can obtain responses from a sufficient proportion of participants (known as the response rate). If a program reaches 50 providers but only 5 complete the survey (a 10% response rate), you will not be able to draw reliable conclusions from the data, as such a low response rate suggests that findings are not representative of the group at large. A response rate of 50% or higher is recommended, but many ECHO programs have found that achieving it can be a challenge. In general, programs with more engaged providers find it easier to achieve a high response rate.
In reality, people are busy and it can be difficult to achieve a response rate that allows you to be confident in your results. Some tips for improving response rates include:
Keep it short.
Participants do not want to complete long surveys; if they get bored or
frustrated, responses are likely to be incomplete.
Provide incentives.
Providing an incentive to participate, even if it is small, increases the likelihood that participants will respond to the survey. Some examples of suggested incentives include: gift cards, textbooks related to care for the targeted condition, and access to your institution's academic library.
Follow up.
People are busy and survey requests can often be overlooked. Be sure to plan time to follow up with participants multiple (i.e., three to five) times to encourage participation.
Administer surveys strategically.
Some ECHO programs have found that administering surveys electronically
during teleECHO clinic sessions can encourage participation and reduce the
time burden that surveys require. Others suggest holding a luncheon during
an in-person meeting when surveys can be distributed, or identifying an
“ECHO champion” at each participating site who will be responsible for
gathering responses from participants.
5. How will you administer the survey?
Surveys can be conducted electronically using a web-based platform, or via
handouts, telephone, or in-person interviews. There are pros and cons to each
method, and you should consider your resources as well as the needs of your
respondents when determining which you will use.
Administering surveys electronically reduces staff time needed for data entry and management, as platforms like SurveyMonkey or REDCap automatically create databases from the responses submitted (check with your IRB to make sure your software is compliant). Using these platforms also allows you to build in controls for data quality, such as skip patterns (e.g., if the answer is no, automatically skip to question 10) and validation rules (e.g., no negative numbers allowed for age). However, surveys requested via email are easily ignored, which can lead to poor response rates.
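A minimal sketch of these two controls, as they might be applied when checking a submitted response; the field names and rules are hypothetical, not taken from any particular platform.

```python
# Sketch of the two data-quality controls mentioned above.
# Field names and rules are hypothetical.

def validate(response: dict) -> list:
    errors = []
    # Validation rule: no negative numbers allowed for age.
    if response.get("age") is not None and response["age"] < 0:
        errors.append("age must be non-negative")
    # Skip pattern: if q9 is "no", the respondent skips ahead to q10,
    # so the intermediate item q9a should be blank.
    if response.get("q9") == "no" and response.get("q9a") is not None:
        errors.append("q9a should be skipped when q9 is 'no'")
    return errors

print(validate({"age": -3, "q9": "no", "q9a": "daily"}))
```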
Administering the survey over the phone or in person is best when literacy or comprehension is a concern, since the questions are read out loud and explanations for common questions can be provided. These methods, or a printed handout, are also preferred when respondents are less comfortable with computer software. However, they also require greater staff time for administration and data entry.
6. What kind of analysis will you be doing?
The types of questions you include in your survey and your evaluation design will dictate the type of analyses you can conduct. If you plan to compare data collected before the intervention ("pre" or baseline) to data collected after the intervention ("post" or follow-up), then you will want to make sure that both the baseline and follow-up surveys contain the exact same questions.
However, if you will only be collecting data after the intervention, you will want to create a reflective survey. A reflective survey asks participants to compare their current experiences (related to, for example, knowledge, self-efficacy, or treatment practices) to those they remember from before they began the program.
See Section 4.3: Selecting Evaluation Approaches for more information on
evaluation designs.
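As one illustration, matched baseline and follow-up scores are often compared with a paired test. The sketch below assumes each participant completed both surveys; the scores and the choice of a paired t-test are ours, not prescribed by this guide.

```python
# Sketch of a simple pre/post comparison for matched survey responses,
# using a paired t-test. The scores are invented for illustration.
from scipy import stats

pre = [62, 70, 55, 68, 73, 60, 66, 71]    # baseline knowledge scores
post = [75, 78, 61, 80, 79, 72, 70, 82]   # follow-up scores, same respondents

mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
result = stats.ttest_rel(post, pre)

print(f"Mean change: {mean_change:.1f} points")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```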
SURVEY DESIGN
Developing and administering your own survey enables you to collect information that is specific to your program and fits with your evaluation goals. Yet designing a survey is difficult: the types of questions asked and the way they are phrased can have a significant impact on data quality. Reviewing literature on best practices in survey design will support you in designing a reliable and valid survey. Additionally, some ECHO-specific considerations related to designing a survey to look at Project ECHO outcomes (i.e., knowledge, attitudes, behavior) are noted below.
Measuring Knowledge
Surveys can be a useful method of objectively evaluating changes in knowledge that result from your program. Such questions should cover topics that are specifically covered in your Project ECHO program. Unlike survey questions on attitudes or behaviors, knowledge questions generally have "right" answers. Knowledge surveys should be developed and administered with caution.
Avoid phrasing questions to sound "test-like." Unfortunately, fears related to performance could discourage those who think they will not do well or do not like being tested from participating.
Focus on key concepts discussed frequently throughout your ECHO program,
rather than detailed or minor lessons that were not discussed at length.
Include an option for “don’t know.” This can help reduce the “test-like”
feeling while also discouraging people from guessing, which can lead to poor
data quality.
Measuring Attitudes
Surveys are also useful for examining attitudes (especially of ECHO participants). Some attitudinal questions in ECHO evaluations focus on the perspectives of program participants on the quality, utility or relevance of the ECHO program. Others aim to assess the impact of the program on participants' professional satisfaction, perception of available professional support, or self-efficacy. Some ECHO evaluations have also examined whether participation in Project ECHO changed respondents' opinions of patients who have the targeted health condition or diagnosis (especially conditions that are often stigmatized, such as mental illness or substance use disorders).
Self-efficacy, or a person's confidence in his or her ability to successfully complete a specific task (in this case, provide effective patient care in line with best practices for the target condition), is considered a prerequisite to engaging in behavior change and is the participant outcome that has been most commonly examined in ECHO evaluations. If you are planning to assess self-efficacy, remember to seek examples from others, as many groups have spent a considerable amount of time developing self-efficacy questions for surveys.
Although most surveys are best administered using a pre-post design (usually considered more objective), a reflective survey may actually be a better way to measure change in self-efficacy. Experienced ECHO evaluators have found that most clinicians feel fairly confident in their ability to provide high-quality care before beginning a program, which is perhaps unsurprising since most have already been caring for patients with the targeted condition. However, after they participate, clinicians often report that the program increased their confidence. Therefore, self-efficacy questions on follow-up surveys should ask participants to retrospectively compare their current confidence levels to their confidence levels prior to the program.
EXAMPLE: MEASURING ATTITUDES
Please use the scale below to report how much you agree or disagree with the following statements:
My participation in Project ECHO has reduced my professional isolation.
1. Strongly disagree
2. Disagree
3. Neither agree nor disagree
4. Agree
5. Strongly agree
People with substance use disorders are not interested in quitting.
1. Strongly disagree
2. Disagree
3. Neither agree nor disagree
4. Agree
5. Strongly agree
Compared to 6 months ago, how confident are you in your ability to care for geriatric patients who have mental health conditions using behavioral interventions?
1. Less confident compared to 6 months ago
2. Equally confident compared to 6 months ago
3. More confident compared to 6 months ago
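As a small sketch, responses to a retrospective item like the one above can be tallied into category shares for reporting; the responses below are invented.

```python
# Sketch: tallying responses to a retrospective confidence item.
# The responses are invented for illustration.
from collections import Counter

responses = ["more", "more", "equally", "more", "less", "more", "equally"]
counts = Counter(responses)

for category in ("less", "equally", "more"):
    share = counts.get(category, 0) / len(responses)
    print(f"{category + ' confident':>18}: {share:.0%}")
```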
Measuring Behavior Change
Behavior change in Project ECHO programs typically refers to changes in treatment practices. For example, surveys might ask participants about changes in how they prescribe medications, how they screen for or diagnose a particular condition, or when they make referrals to external providers. They might also ask whether the clinician has implemented recommendations made by the specialist team or whether they plan to do so in the future. Questions about behavior change that has already taken place should be asked in the past tense and include a timeframe.
EXAMPLES:
MEASURING BEHAVIOR CHANGE
In the last 3 months, how frequently did you discuss advance directives with geriatric patients who came in for an appointment?
1. Never
2. Rarely
3. Sometimes
4. Oen
5. Always
MEASURING BEHAVIORAL INTENT
In the next 3 months, how likely are you to use new information you
learned in Project ECHO while treating a patient?
1. Extremely unlikely
2. Unlikely
3. Neutral/Don’t know
4. Likely
5. Extremely likely
In the case of ECHO, there may be a delay between the time clinicians learn a given lesson and when they are able to utilize the new information (because months might pass before the provider sees a patient with a particular condition). As a result, it may also be helpful to include questions about behavioral intent, meaning how the clinician intends to change his or her practice in the future. Studies have shown that changes in behavioral intent lead to changes in behavior.[1] Questions about behavioral intent should ask about the likelihood of a behavior and should also include a timeframe (e.g., in the next few months, how likely are you...).
DO'S AND DON'TS FOR DEVELOPING SURVEY QUESTIONS
DO: Keep it simple.
Aim to create clear, concise questions and avoid using jargon that
may be unfamiliar to respondents.
DON'T: Use double-barreled questions.
Do not include more than one idea in a single question. For instance, the example below inserts two ideas into one question when it asks about both improvements in treatment and the ability to manage patients.
DO: Use special formatting within questions.
Highlighting important parts of a question that might be easily
missed can help ensure it is interpreted correctly.
DON'T: Use a scale that is unclear.
When using scales, be sure that the differences between response categories are easily understood and the meaning of each category is clear. It is best, when possible, to use an existing scale (often called a Likert scale) that has been previously used in research. In the example below, the difference between "Agree a little bit" and "Agree somewhat" is unclear.
DO'S AND DON'TS FOR DEVELOPING SURVEY QUESTIONS (cont.)
EXAMPLE OF SURVEY QUESTION DON’TS
Project ECHO has improved the kind of treatments I provide to patients
with asthma and I am better able to manage patients who have a variety of
respiratory conditions.
a. Don’t agree
b. Agree a little bit
c. Agree somewhat
d. Agree
EXAMPLE OF SURVEY QUESTION DO’S
Project ECHO has improved my ability to treat patients with asthma.
a. Strongly disagree
b. Disagree
c. Neutral
d. Agree
e. Strongly agree
ADDITIONAL READING AND RESOURCES
In sum, surveys are useful tools to gather information for evaluations. For more
information on developing and utilizing surveys for evaluation purposes, see Appendix
B and Appendix E, as well as sample surveys developed by others on Box.com.
REFERENCES
1. Webb, T. L., & Sheeran, P. (2006). Does changing behavioral intentions engender behavior change? A meta-analysis of the experimental evidence. Psychological Bulletin, 132(2), 249.
APPENDIX C
FOCUS GROUP
TOOLKIT
Focus groups are a valuable way to collect qualitative data, allowing you to
incorporate the words, voices, and perspectives of participants into your evaluation.
They are ideal for exploring the perspectives and experiences of ECHO stakeholders,
enabling you to understand how or why a particular process worked (or did not work)
or a particular outcome occurred (or did not occur). Focus groups allow you not only
to understand the experiences of individuals, but also how those experiences relate
to those described by others. This appendix provides additional detail on focus groups
and how they can be useful in Project ECHO evaluations.
FOCUS GROUP OVERVIEW
Focus groups generally consist of six to twelve participants and usually last between
one and two hours. Although facilitators are present to help guide the conversation,
the goal of a focus group is to encourage participants to engage with each other. Not
only does this often lead to increased self-disclosure and data richness; interaction
between participants can also facilitate the discovery of unanticipated information and
themes, while providing an efficient method of collecting information from
multiple people at once.
THINGS TO CONSIDER WHEN PLANNING
FOCUS GROUPS:
1. Who will you recruit to participate?
2. What information do you want to know?
3. When and where will you host the focus group?
4. How many groups can you feasibly conduct?
5. How will you recruit a sucient number of people to participate?
6. Who will facilitate the group?
1. Who will you recruit to participate?
In ECHO evaluations, focus groups are most often held with clinicians (or others)
who participated in the program. They could also be conducted with health care
administrators or other stakeholders. For example, in order to explore the broader
impact of Project ECHO on a health care setting, you may want to conduct a focus
group with providers who did not participate in ECHO sessions or with patients who
get care at the site.
2. What information do you want to know?
To plan for a focus group, you will need to develop a focus group guide. The guide
should contain approximately 15 open-ended questions designed to elicit descriptive
responses from participants on the topics that interest you and your stakeholders.
Facilitators are not expected to follow the guide word-for-word; paraphrasing,
probing questions, and ad hoc additions are expected. For each question, follow-up
prompts should be prepared to support the facilitator in encouraging discussion and
eliciting the level of detail you are seeking from the groups.
Sample focus group questions might include:
Which lessons from teleECHO sessions have been most relevant to
your practice?
What can Project ECHO do to better engage community providers during
the teleECHO sessions?
How has participation in Project ECHO changed the treatment you
provide to patients?
What would you change to improve the program?
Note that it is helpful to begin the group with an introductory, easy question
(e.g., how did you rst hear about Project ECHO?) before moving into questions that
require deeper thought and a greater sense of trust. You can also use various voting
methods to prioritize ideas or understand the relative importance of a given topic.
However, keep in mind that a single focus group will only represent a small sample
of participants, so without conducting multiple groups, your ability to draw
conclusions is limited.
3. When and where will you host the focus group?
Ideally, focus groups should take place in-person, as face-to-face meetings support
the collaborative and interactive nature of these groups. The location and time
should be convenient for the participants, who are generally seated around a table to
encourage discussion. Evenings or weekends are often easiest for people to attend—
though lunchtime can work well for staff at a single site.
In-person groups may be feasible for some ECHO programs that take place within
one city or metropolitan area, or for programs that have an introductory training
session to welcome participants to the group. However, a majority of ECHO programs
invite participants from across a large region, and in-person focus groups are not
feasible. In these cases, some have found success holding focus groups via
videoconference (similar to the ECHO sessions themselves). Videoconferencing
reduces the travel time required and enables participants to more easily fit the group
into their busy schedules. Such groups are generally scheduled during lunch, or during
(or immediately after) an ECHO clinic.
4. How many groups can you feasibly conduct?
Best practices in focus group research indicate that you should continue conducting
groups until you reach “data saturation,” meaning the same themes arise in each
new group and no new themes are identified. However, depending on funding, the size
of the program and the number of willing participants, this may not be feasible.
For groups with limited resources that need to plan for a concrete number of groups,
experts recommend aiming to conduct multiple groups while taking into account
the size of the program. Larger programs should conduct a minimum of three focus
groups, as this allows for assessment of consistency across responses, but the
reality is that small programs may only have enough participants to hold one or two.
Some ECHO programs have conducted just one focus group and still found valuable
information. Regardless of the number of focus groups conducted, keep in mind
that findings are only representative of the group you interviewed and cannot be
attributed to, for example, all participants in a particular program.
5. How will you recruit a sufficient number of people to participate?
Given busy schedules, it can be difficult to plan a time when a sufficient number of
stakeholders can attend a focus group, and even more difficult to plan several of
them. In addition to choosing a convenient time and location, providing an incentive
can encourage people to participate—and can help to ensure those who sign up
actually attend the group. An incentive can be anything from cash to continuing
medical education (CME) credits to academic library access. Offering additional
perks, such as lunch or free transportation, is also helpful.
To recruit participants, send emails, make announcements or reach out directly
via phone. Explain the process and time involved, and ask that participants commit
to participating in advance. Call or email participants one or two days in advance
of the group to remind them of the time and location, and confirm that each still
plans to attend. Aim to recruit three to five more people than you want to attend.
If you are unable to recruit a large enough group, consider conducting individual
interviews instead.
6. Who will facilitate the group?
Focus groups require two facilitators: a primary facilitator who guides the discussion,
and a secondary facilitator who handles logistics, manages late-comers, takes
notes, and supports the primary facilitator in guiding the discussion. When resources
permit, aim to select a facilitator who is not intimately involved in administering the
program; doing so will promote honesty in responses and help participants feel more
comfortable giving negative feedback.
Facilitators should be very familiar with the focus group guide and the objectives of
the project so they know when to probe and when to encourage the conversation to
move on. They are also responsible for fostering a dynamic and rich discussion and
keeping the group on topic and on track. Common challenges faced by facilitators
include:
Managing group dynamics:
Some participants will want to dominate the conversation while others will
be quieter and less inclined to share. Facilitators should encourage all group
members to speak up and ensure that everyone remains respectful of each
other throughout the conversation.
Tracking time:
Engaged groups can discuss and debate a single topic at length; facilitators
must ensure that the group moves through the topic guide within the allotted
time frame without stifling the conversation.
Staying on-topic:
Group discussions can oen digress from the original topics; facilitators
must be able to steer the conversation back without appearing dismissive or
antagonistic.
Avoiding the general:
Participants will oen speak about topics in general terms, for example saying
something worked well or poorly, without providing an explanation; facilitators
must recognize when there are opportunities to ask follow-up questions and
elicit more detailed information and specic examples from participants.
Encouraging alternative viewpoints:
Participants may feel the need to agree with the dominant speaker or the group;
facilitators should be aware of this tendency, and make a point of asking if
others have alternative views. Asking the question can remind group members
that alternatives are valid and encourage people with different perspectives to
speak up.
Identifying inconsistencies:
Facilitators should be able to recognize inconsistencies in opinions among group
members and ask appropriate follow-up questions to clarify and ensure that the
data accurately represents participants’ perspectives.
Coping with unexpected issues:
Late-comers, unanticipated group size (too small or too large), and domineering
participants are just a few of the things that can disrupt a group, and facilitators
must be able to handle these challenges effectively.
Starting the group by providing discussion guidelines provides an opportunity to
describe focus group processes and the occasional need for facilitators to redirect
the conversation (for more detail on guidelines see the next section).
CONDUCTING A FOCUS GROUP
When participants arrive (or sign on) for a focus group, have your materials (and
food) prepared and ready. You will want to have prepared:
Informed consent or information sheets
A high-quality audio recorder
A brief demographic survey
Incentives and a form to record their receipt
WHY SURVEY MY FOCUS GROUP?
Although you may not need to collect identifying information, it is often
helpful to ask focus group participants to complete a brief survey. Surveys
allow you to quantify and report on relevant participant characteristics
(e.g., practice setting, educational training, demographics).
Aer forms (consent and survey) are completed, groups should begin with a brief
discussion of the purpose of the group, a reminder that they are being recorded, and a
review of basic guidelines. Common guidelines center around:
Condentiality.
Facilitators should ask group members not to share information outside of the
group, but should also note that condentiality cannot be guaranteed once
group members disperse. As such, group members should not share information
that they worry would be reported outside the group.
Reminders about patient information.
Facilitators should remind group members to avoid sharing identifiable
patient information.
Equal “air time”.
Reminding the group that everyone is encouraged to participate is important
in setting the stage for the group. It can also be helpful to explain
that the facilitator might interrupt if someone is dominating the conversation
and might “call on” those who have remained silent over the course of the
discussion.
Facilitator responsibility.
Facilitators can inform the group that they are responsible for keeping the
conversation on track and ensuring that there is sufficient time for all the topics
included in the guide. Therefore, they might have to interrupt participants if
a conversation is too lengthy or off topic.
Next, the lead facilitator poses questions from the focus group guide to the group,
asking questions in a manner that follows the natural flow of the conversation as
much as possible. If the conversation naturally moves to a topic that appears later
in the guide, the facilitator can adapt the sequence and come back to other questions
later. Ideally, the conversation feels like a natural discussion among group members,
with only the occasional interjection from facilitators. When successful, the data
elicited in focus groups can be valuable components of an evaluation, providing a
detailed understanding of the perspectives, experiences, opinions and priorities
of participants.
ADDITIONAL READING AND RESOURCES
In sum, focus groups can be a great way to collect qualitative data for ECHO
evaluations. For examples of ECHO programs that have used focus groups in their
evaluations, see Appendix A. For additional information on conducting focus groups
for evaluation purposes, see Appendix E.
APPENDIX D
GLOSSARY OF KEY TERMS
ACCOUNTABLE CARE
ORGANIZATION (ACO)
A group of health care professionals or organizations
who work together to provide coordinated care to
patients. They generally form value-based payment
arrangements with insurers (e.g., Medicare) in which
both parties agree that payment will be based on
specific quality metrics, rather than the number of
services provided.
ACTIVITIES Processes, techniques, tools, events, technology,
and actions performed by staff members or
partners of the program in order to achieve
established objectives.
ANONYMOUS Identiable information is not collected during the
data collection process, making it impossible to link
data with a specic individual.
BASELINE (“PRE” PERIOD) The period of time when data is collected prior to the
implementation of the program.
BIAS Lack of objectivity due to study design and/or the
subjective experiences, perspectives and prejudices
of the individuals participating in the study.
BUSINESS CASE An analysis of the benefits and costs of an
intervention from the perspective of the
organization investing in it.
CATEGORICAL VARIABLE Categorical variables are discrete or qualitative in
nature; these variables have preset, non-numerical
responses. These can be “nominal” variables,
meaning that responses are distinct categories
(e.g., gender, race/ethnicity, profession), or “ordinal”
variables, meaning that, although not numerical,
they can be ordered (e.g., responses that range from
agree to disagree).
CLOSED-ENDED
QUESTIONS
Closed-ended questions provide discrete, multiple-
choice answers that respondents can select.
CODEBOOK A codebook contains instructions for the
standardization of data elements and details how
the evaluation will utilize accumulated data as well
as ensure the alignment of said data with identied
evaluation indicators. While quantitative codes
name and describe each item, qualitative codes
categorize the data into themes.
COMPARISON GROUP A group comprised of individuals with
characteristics similar to those participating in
a program, but who are not enrolled. Data on
this group can be compared to data from the
intervention group in order to assess whether
changes observed in the intervention group can be
attributed to participation in the program.
CONFIDENTIAL Although identifiable data (i.e., name) is collected for
evaluation purposes, data is not shared or linked
directly to participants.
CONTEXTUAL FACTORS Contextual factors are elements of the program’s
surroundings that could impact program
participants. Those elements could be political,
social, economic or physical.
CONTINUOUS VARIABLE A variable that can be any numerical value within a
given range of values. For instance, body weight or
the average score on a test are continuous variables.
COST-EFFECTIVENESS An assessment of the ability of a program to achieve
results relative to the cost of implementation.
COVARIATE A covariate is a variable that might explain some
or all of the perceived changes in the dependent
variable or might be linked to the dependent variable.
DATA RELIABILITY When the measurements obtained from the data
remain consistent throughout the duration of the
program.
DATA SOURCES The source of information that will inform your
evaluation (e.g., surveys, focus groups, interviews,
observations, program records, etc.).
DESCRIPTIVE STATISTICS Statistics that describe the data, for example,
frequency counts, measures of central tendency
(means, medians and modes), measures of
dispersion (range, standard deviation), percentages
and rates.
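To make these measures concrete, below is a minimal, illustrative Python sketch
(the survey scores are hypothetical, not drawn from any ECHO program) that
computes each of the descriptive statistics above using only the standard library:

    import statistics

    # Hypothetical responses to a 5-point survey item
    scores = [3, 4, 4, 5, 2, 4, 3, 5, 4, 4]

    print("Mean:", statistics.mean(scores))      # average: sum divided by count
    print("Median:", statistics.median(scores))  # middle value when sorted
    print("Mode:", statistics.mode(scores))      # most frequently occurring value
    print("Range:", max(scores) - min(scores))   # maximum minus minimum
    print("Std dev:", statistics.stdev(scores))  # sample standard deviation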
DEPENDENT VARIABLE Can also be described as an outcome or effect.
Usually observed to see whether the program influenced
any changes the variable might have experienced.
ECONOMIC CASE An analysis of the benefits and costs of an
intervention that fall on patients, employers, and
society in general (rather than a specific funding
entity).
ECONOMIC EVALUATION Compares the expenses associated with
implementing and delivering the program to the
benefits or savings derived from it.
ELECTRONIC HEALTH
RECORD OR CHART
REVIEWS
A method of data collection that involves gathering
information from the health and medical records
of patients, which may include diagnostics,
treatments and health outcomes.
EVALUATION Figuring out how effective and efficient a program is
by systematically collecting and analyzing data in an
effort to continuously improve the program.
EVALUATION PLAN Specific explanation of the implementation process
of the evaluation as well as the program description,
evaluation goals, methods, and analysis plan.
EVALUATION QUESTIONS Questions to be investigated during the evaluation
process that were developed and refined through
collaboration with evaluation stakeholders.
FEDERALLY-QUALIFIED
HEALTH CENTER (FQHC)
A health center that qualies for enhanced
reimbursement from Medicare and Medicaid
under Section 330 of the Public Health Service
Act. In order to receive this designation, the health
center is required to 1) serve a community with
few health care resources (known as medically-
underserved, 2) oer care on a sliding scale, 3)
provide comprehensive care (either on-site or
through referral arrangements with other providers),
4) incorporate a quality assurance program and, 5)
have a board of directors.
FIDELITY Adherence to a program model and its core
components.
FOCUS GROUP A method of collecting qualitative data in which
several people come together for a facilitated group
discussion of their thoughts, opinions and
perspectives on a particular topic.
FOLLOW-UP (“POST”
PERIOD)
Period of data collection aer the implementation of
the program.
INDICATORS Indicators are markers of progress towards the
change you hope to achieve through your
program.
INFERENTIAL STATISTICS A set of analyses that can be used to assess the
existence of a relationship between variables, such
as a correlation, chi-square, t-test, or analysis of
variance (ANOVA).
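As a concrete illustration of one such test, the following minimal sketch runs a
paired t-test on hypothetical pre/post confidence ratings; it assumes the third-party
scipy package is installed, and the data are invented for demonstration only:

    from scipy import stats  # third-party package (pip install scipy)

    # Hypothetical paired ratings: each participant's self-reported confidence
    # before and after ECHO participation (same participant order in both lists)
    pre = [2, 3, 2, 4, 3, 2, 3, 3]
    post = [3, 4, 3, 4, 4, 3, 4, 4]

    # Paired t-test: is the average pre-to-post change larger than chance
    # alone would plausibly produce?
    result = stats.ttest_rel(pre, post)
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
    # A small p-value (commonly below 0.05) is conventionally interpreted as
    # statistically significant (see that glossary entry).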
INPUTS Resources like funding sources, partners, staff, or
program materials that are put into the program.
INSTITUTIONAL REVIEW
BOARDS
Institutional Review Boards (IRBs) are entities set
up to protect the rights and welfare of people who
participate in research. Evaluations of programs
involving Native Americans/Alaska Natives also
require permission from their tribal governments.
INTERMEDIATE OUTCOMES
OR SHORT-TERM
OUTCOMES
Changes or benets, usually within one to two years
of the immediate outcomes.
INTERVIEWS A method of data collection and qualitative research
that involves partially-structured interview guides.
LOGIC MODEL A logic model is a graphic representation of the
theory of change that illustrates the linkages among
program resources, activities, outputs, audiences,
and short-, intermediate-, and long-term outcomes
related to a specic problem or situation.
LONG-TERM OUTCOMES Lasting changes with organizational, community,
or systems-level benets (e.g., organizational
practices or policies, new or modied legislation,
improved social conditions). Sometimes, these
outcomes may be referred to as impact.
MEAN Equivalent to an average. Calculate a mean by
adding up values and dividing the sum of the values
by the total number of units in your sample.
MEDIAN The middle value in a data set; this means that half
the data are greater than the median and half are
less. One way to compute the median is to list all
scores in numerical order, and then locate the score
in the center of the sample. If there are two middle
scores, you need to average them to determine the
median.
MODE The most frequently occurring value in a dataset.
METHODOLOGY A set or system of methods and procedures that you
use to answer your evaluation questions.
MIXED METHODS STUDY Involves the intentional use of two or more different
kinds of data gathering and analysis tools—typically
a combination of qualitative (e.g., focus groups and
interviews) and quantitative (e.g., multiple choice
surveys and assessments)—in the same evaluation.
OBJECTIVES Statements of the results the program aims to
achieve that are specic and can be achieved
within the timeframe of the project. Objectives can
relate to activities required for effective program
implementation (process objectives) or to outcomes
that would be expected if the program were a
success (outcome objectives). Each program will
have multiple objectives.
OBSERVATIONS A method of gathering data by watching and
documenting events or behavior that take place
during or in relation to a program.
OPEN-ENDED QUESTIONS Questions that encourage responses that contain
detailed information, rather than a single-word
answer (e.g., “yes” or “no,” “good” or “bad”).
OUTCOMES Anticipated results of a program.
OUTCOME EVALUATION Evaluation that assesses whether the program
achieved its expected results within a given
timeframe.
OUTPUTS Direct and concrete results of the program’s
activities which are oen presented in the form
of documentation on the progress of activity
implementation.
PROCESS EVALUATION Evaluation that focuses on how a program is
implemented, including specic project activities,
the number and characteristics of participants, and
fidelity to the original program model.
QUALITATIVE DATA Information in the form of textual data, like interview
or focus group transcripts, narratives within
medical or program records, and open-ended survey
questions, which allows for more nuanced analysis.
QUANTITATIVE DATA Information that is numerical and that allows
for calculations and statistical analyses to be
conducted.
RANGE Describes the spread in your data (the difference
between the minimum and maximum).
RETURN ON INVESTMENT
(ROI)
Ratio of the monetary value of program benets to
the cost of implementing the program.
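For example, using hypothetical figures, a program that cost $100,000 to
implement and produced $150,000 in monetized benefits would have an ROI of
$150,000 / $100,000 = 1.5, or $1.50 returned for every $1.00 invested.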
SAMPLE The group of individuals from whom data for an
evaluation is gathered.
SELF-EFFICACY An individual’s beliefs about his/her own ability to
carry out an activity effectively.
SMART OBJECTIVES Specic, Measurable, Attainable, Realistic, and
Timely program objectives.
SHORT-TERM OUTCOMES Immediate changes or benets expected—usually
within one to two years—as a result of successful
implementation of the program.
SOCIAL CASE An analysis of the benefits of a program to society
without consideration of any associated costs.
STAKEHOLDER Any person, group or entity that has an interest
in the strategy, initiative, or program being
evaluated or in the results of your evaluation,
including program administrators, program staff,
program participants and their patients, funders,
policymakers, and others.
STANDARD DEVIATION A measure of spread from the mean or the variability
within a data set.
STATISTICALLY
SIGNIFICANT
A result nding that there is dierence between
two or more groups that is unlikely to be due to
random chance. Various statistical techniques are
used to determine whether a particular nding is
statistically signicant or not.
SURVEY OR
QUESTIONNAIRE
A series of questions asked in order to gather
information from individuals, often about their
personal characteristics or their knowledge,
attitudes or behaviors.
THEMES Themes are patterns that you find in your qualitative
data. The general rule is that a theme is formed
when there are three or more pieces of evidence
pointing to the same idea. For example, if three
interviewees felt the videoconferencing software
was difficult to use, that would be a theme.
TRIANGULATION Comparing and linking findings from multiple
(including quantitative and qualitative) sources.
VALIDITY How eectively an instrument measures the ideas
and concepts that it is supposed to measure.
APPENDIX E
ADDITIONAL RESOURCES
OVERVIEW
This resource guide includes additional resources on evaluation topics such as logic
models, data collection, analysis and reporting of results. It also includes links to data
tools and databases that may be relevant to the ECHO program.
Basic Evaluation
Better Evaluation (2013). Framework Overview. Retrieved from:
http://www.betterevaluation.org/en/plan
This website contains resources related to planning and executing
an evaluation using the Better Evaluation Rainbow Framework. It
covers topics ranging from defining what is to be evaluated and describing
activities, outcomes, impacts and contexts, to managing an evaluation,
understanding causality, synthesizing data from the evaluation, reporting
findings, and ensuring that evaluation results are used in the future.
Catholic Health Association of the United States. (2015). Evaluating your
community benefit impact. Retrieved from: https://www.chausa.org/
communitybenefit/evaluating-community-benefit-programs
This guide provides an overview of evaluation basics for hospitals
implementing programs that aim to improve the health of the
community they serve.
Centers for Disease Control and Prevention. (2011). Introduction to program
evaluation for public health programs: A self-study guide. Retrieved from:
http://www.cdc.gov/eval/guide/cdcevalmanual.pdf
This “how to” guide provides support for program managers of
community health interventions in planning, designing, implementing
and using evaluation. It provides a basic and well-respected evaluation
framework developed by the Centers for Disease Control.
Community Toolbox. (2016). Our Evaluation Model, Evaluating Comprehensive
Community Initiatives. Part J. Evaluating Community Programs and Initiatives.
(Chapters 36-39). Retrieved from: http://ctb.ku.edu/en/table-of-contents/
evaluate/evaluation/framework-for-evaluation/main
Discusses issues involved in, and recommendations for implementing,
evaluation of community initiatives. Sections also address developing
an evaluation plan, characteristics of a good evaluation and
considerations in choosing an evaluator.
Glenaric Ltd. (2007). Six steps to eective evaluation: A Handbook for
programme and project managers. Joint Information Systems Committee.
Retrieved from: http://www.jisc.ac.uk/media/documents/programmes/
reppres/evaluationhandbook.pdf
Provides steps on (1) identifying stakeholders, (2) describing the
program, (3) designing the evaluation, (4) gathering evidence, (5)
analyzing results, and (6) reporting findings.
Pawson, R. (2003). Nothing as practical as a good theory. Evaluation, 9,
471-490. doi: 10.1177/1356389003094007
This article, written for evaluation beginners, explains what evaluation
is. Methods of evaluation are discussed in great detail and are
supplemented with real examples.
Preskill, H & Jones, N. (2009). A Practical Guide for Engaging Stakeholders in
Developing Evaluation Questions. Robert Wood Johnson Foundation Evaluation
Series. Retrieved from: http://www.rwjf.org/pr/product.jsp?id=49951
Provides a ve-step process and worksheets for involving stakeholders
in developing evaluation questions.
Salazar, L. F., Crosby, R. A., & DiClemente, R. J. (2015). Research methods in
health promotion. John Wiley & Sons.
This textbook covers a broad range of methods for conducting
evaluation research of health programs.
W.K. Kellogg Foundation Evaluation Handbook (2010). Retrieved from:
http://www.wkkf.org/resource-directory/resource/2010/w-k-kellogg-
foundation-evaluation-handbook
Covers many evaluation topics, from evaluation planning through to
utilizing evaluation results. Spanish version also available.
Process Evaluation
Centers for Disease Control and Prevention. Developing Process Evaluation
Questions. Retrieved from: https://www.cdc.gov/healthyyouth/evaluation/pdf/
brief4.pdf
This short brief provides definitions and examples of process evaluation
questions.
Linnan, L., & Steckler, A. (2002). Process evaluation for public health
interventions and research. San Francisco: Jossey-Bass.
This book oers an overview of the history, purpose, strengths, and
limitations of process evaluation and includes illustrative case material
of the current state of the art in process evaluation.
Saunders, R.P., Evans, M.H. and Joshi, P. (2005). Developing a Process-
Evaluation Plan for Assessing Health Promotion Program Implementation:
A How-To Guide. SAGE Journal, 6(2) 1-1.
This article describes and illustrates the steps involved in developing a
process evaluation plan for any health promotion program.
Outcome Evaluation
Friedman, M. (2002). The Results and Performance Accountability
Implementation Guide. Retrieved from: http://www.raguide.org/
This guide includes tutorials, questions and answers, case studies and
links to other resources on performance outcomes.
Schalock, Robert L. (2001). Outcome-Based Evaluation, Second Edition.
Dordrecht, Netherlands: Kluwer Academic Publishers.
This textbook provides an in-depth discussion of outcome-based
research for 1) program evaluation, 2) effectiveness evaluation,
3) impact evaluation and 4) policy evaluation.
Strengthening Nonprots. (2010). A Capacity Builder’s Resource Library:
Measuring Outcomes. Retrieved from: http://strengtheningnonprots.org/
resources/guidebooks/MeasuringOutcomes.pdf
This manual provides a comprehensive discussion of developing and
implementing an outcome evaluation, along with a toolkit and resources
that provide additional guidance.
The Urban Institute Outcome Indicators Project Materials. Retrieved from:
http://www.urban.org/policy-centers/cross-center-initiatives/performance-
management-measurement/projects/nonprofit-organizations/projects-
focused-nonprofit-organizations/outcome-indicators-project
This set of materials provides support around the development of
performance and outcome indicators for common program areas, such
as health risk reduction, as well as a taxonomy (or listing) of outcomes
that are often relevant to nonprofit programs.
Economic Evaluation
Drummond MF, O’Brien B, Stoddart GL, Torrance GW. (1997). Methods for
the Economic Evaluation of Health Care Programs, 2nd ed. Oxford Medical
Publications, Oxford University Press, New York.
This book includes chapters on collecting and analyzing data, as well as
presenting and using the results of economic evaluation.
Economic Evaluation for Global Health Programs. Retrieved from: https://depts.
washington.edu/cfar/sites/default/files/uploads/01_Levin_Economic%20
Evaluation%20for%20Global%20Health%20Interventions%20CFAR%20
workshop%202013.pdf
This paper denes various methods of economic evaluation, the
operational steps for organizing them, and a strategic approach to
economic evaluation in the eld.
National Association of Chronic Disease Directors. (2009). A Practical Guide
to ROI Analysis. Atlanta, GA: National Association of Chronic Disease Directors.
Retrieved from: http://c.ymcdn.com/sites/www.chronicdisease.org/resource/
resmgr/services/roi-1.pdf
This guide provides public health professionals with the resources
and tools needed to understand the concepts and processes involved
in calculating return-on-investment (ROI), as well as other methods
to assess a program’s economic impact when ROI is not possible or
appropriate.
Sewell, M., and Marczak, M. Using Cost Analysis in Evaluation. Retrieved from:
http://ag.arizona.edu/sfcs/cyfernet/cyfar/Costben2.htm
This online article provides a basic overview of cost allocation,
cost-effectiveness analysis and cost-benefit analysis. It points out
the advantages and disadvantages of these approaches and provides
step-by-step instructions for each.
WHO Guide to Cost-Effectiveness Analysis. Retrieved from: http://www.who.
int/choice/publications/p_2003_generalised_cea.pdf
This guide aims to provide policymakers and researchers with a
clear understanding of the concepts and benefits of utilizing
cost-effectiveness analysis.
Project Objectives and Logic Models
Centers for Disease Control and Prevention, National Center for HIV/AIDS,
Viral Hepatitis, STD and TB Prevention, Division of STD Prevention. Developing
Measurable Program Goals and Objectives. https://www.cdc.gov/std/Program/
pupestd/Developing%20Program%20Goals%20and%20Objectives.pdf
This brief provides a basic overview of best practices in articulating
program goals and objectives for the purposes of evaluation.
Centers for Disease Control and Prevention, Division of Heart Disease and
Stroke Prevention. (n.d.). Evaluation guide: Developing and using a logic model.
Retrieved from: http://www.cdc.gov/dhdsp/programs/spha/evaluation_
guides/docs/logic_model.pdf
This series of guides provides support around effectively developing and
using logic models.
Innovation Network. (n.d.). Point K Tools: Logic model builder. Retrieved from:
https://www.innonet.org/news-insights/resources/point-k-logic-model-
builder/
This web-based workbook assists individuals in building a logic model
for their program.
Knowlton, L. W. & Phillips, C.C. (2012). The logic model guidebook: Better
strategies for great results, (Second edition). Los Angeles, CA: SAGE
Publications.
This guide provides users with practical support to develop and improve
logic models that reect knowledge, practice, and beliefs.
Program Development and Evaluation. (n.d.). Logic Model Materials. University
of Wisconsin – Extension. Retrieved from: http://www.uwex.edu/ces/pdande/
evaluation/evallogicmodel.html
Includes a number of resources related to developing logic models,
including “Enhancing Program Performance with Logic Models.”
Substance Abuse & Mental Health Services Administration, Center for
Substance Abuse Prevention. (2007). A Manual for Developing Competitive
SAMHSA Grant Applications. Retrieved from: http://store.samhsa.gov/product/
Developing-Competitive-SAMHSA-Grant-Applications/SMA07-4274
This manual contains easy-to-follow training materials to help
program sta develop and use logic models for program planning,
implementation and evaluation.
Sundra, D., Scherer, J., & Anderson, L. (2003). A guide on logic model
development for CDC’s Prevention Research Centers. Centers for Disease
Control and Prevention. Retrieved from: https://www.bja.gov/evaluation/guide/
documents/cdc-logic-model-development.pdf
This guide examines what a logic model is and the benets of using one.
Taylor-Powell, E. & Henert, E. (2008). Developing a logic model: Teaching
and training guide. Madison, WI: University of Wisconsin - Extension.
Retrieved from: https://fyi.uwex.edu/programdevelopment/files/2016/03/
lmguidecomplete.pdf
This guide explains what a logic model is, the benets of logic models,
and how to develop a logic model.
W.K. Kellogg Foundation. (2004). Logic model development guide. Retrieved
from: http://www.wkkf.org/knowledge-center/resources/2006/02/wk-
kellogg-foundation-logic-model-development-guide.aspx
This guide aims to give staff of nonprofits and community
members alike sufficient orientation to the underlying principles of
"logic modeling."
DATA SOURCES AND INSTRUMENTS
General
Reisman, J., Gienapp, A., & Stachowiak, S. (2007). A handbook of data collection
tools: A companion to “A guide to measuring advocacy and policy.” Annie
E. Casey Foundation. Retrieved from: http://orsimpact.com/wp-content/
uploads/2013/08/a_handbook_of_data_collection_tools.pdf
This guide provides users with examples of practical tools and
processes for collecting useful evaluation data.
Taylor-Powell, E & Steele, S. (1996). Collecting Evaluation Data: An Overview of
Sources and Methods. Retrieved from: https://learningstore.uwex.edu/Assets/
pdfs/G3658-04.pdf
This document discusses options for collecting information and reasons
for choosing various approaches.
Focus Groups
Centers for Disease Control and Prevention. (2008). Data Collection Methods
for Program Evaluation: Focus Groups. Evaluation Briefs, 13. Retrieved from:
https://www.cdc.gov/healthyyouth/evaluation/pdf/brief13.pdf
This brief provides an overview of utilizing focus groups for data
collection in evaluations, including guidelines around appropriate usage
and the advantages and disadvantages associated with their use.
Dick, B. (2010). Structured Focus Groups. Retrieved from: http://www.aral.com.
au/resources/focus.html
Provides step-by-step description of how to conduct a structured
focus group.
Krueger, R.A. (1998). Analyzing and Reporting Focus Group Results. Sage
Publications Focus Group Kit, #6.
This booklet presents advice on ways to summarize information
gathered from focus groups and present findings in ways that are
sensitive to audience needs.
Morgan, D. L. (1996). Focus groups as qualitative research. (Vol. 16). Sage
publications.
This book provides detail on best practices in using focus groups in
qualitative research.
Surveys
American Association for Public Opinion Research (AAPOR). Retrieved from:
http://www.aapor.org
This website provides information on best practices and standard
denitions for survey research. Includes links to other survey research
organizations.
Centers for Disease Control and Prevention. (2008). Data Collection Methods
for Program Evaluation: Questionnaires. Evaluation Briefs, 14. Retrieved from:
https://www.cdc.gov/healthyyouth/evaluation/pdf/brief14.pdf
This brief provides an overview of surveys (also referred to as
questionnaires) as a method of data collection. It includes guidelines
around how they can be used appropriately, along with a discussion
of the advantages and disadvantages associated with using them in
evaluations.
Centers for Disease Control and Prevention. (2010). Data Collection Methods
for Program Evaluation: Increasing Questionnaire Response Rates. Evaluation
Briefs, 21. Retrieved from: https://www.cdc.gov/healthyyouth/evaluation/pdf/
brief21.pdf
This brief oers a basic overview of survey (or questionnaire) response
rates and how they can be improved when using surveys/questionnaires
to collect data for program evaluations.
Dillman, D. A., Smyth, Jolene D., Christian, Leah M. (2009). Internet, Mail, and
Mixed-Mode Surveys: The Tailored Design Method, Third Edition. John Wiley &
Sons, Inc.
Recommended textbook on developing and implementing surveys.
Fink, A. (2003) The Survey Kit Series. Second Edition. Sage Publications.
This series of 10 booklets addresses basic content of survey
development and analysis in an easy to follow format. Also provides
useful information about some qualitative research techniques such as
interviews, focus groups, observational analysis, and content analysis.
Floyd J. Fowler Jr. (2002). Survey Research Methods, 5th edition. Sage Publishing.
This textbook provides information on sampling, sampling errors,
correcting for nonresponse, advantages of alternative approaches to
data collection, ethical issues in survey research, and advice to increase
the validity and reliability of interviews and mail surveys.
McDowell, I. (2006). Measuring Health: A Guide to Rating Scales and
Questionnaires. NY: Oxford University Press.
This guide provides in-depth reviews of over 100 of the leading health
measurement tools and serves as a guide for choosing among them.
Interviews
Better evaluation. Interviews. Retrieved from: http://www.betterevaluation.
org/en/evaluation-options/interviews
This website provides an overview of the dierent types of interviews
that are useful for evaluation, as well as resources for selecting the right
type of interview for your evaluation.
Centers for Disease Control and Prevention. (2009). Data Collection Methods
for Program Evaluation: Interviews. Evaluation Briefs, 17. Retrieved from: https://
www.cdc.gov/healthyyouth/evaluation/pdf/brief17.pdf
This brief provides an overview of utilizing interviews for data collection
in evaluations, including guidelines around appropriate usage and the
advantages and disadvantages of using the method.
United States General Accounting Oce. Program Evaluation and Methodology
Division. Using Structured Interviewing Techniques. Retrieved from: http://
www.gao.gov/products/PEMD-10.1.5
A report on designing and pre-testing interview approaches, training
interviewers, contacting persons to interview, conducting the interview
and analyzing the data, including analysis of open-ended questions.
Observations
Centers for Disease Control and Prevention. (2008). Data Collection Methods
for Program Evaluation: Observation. Evaluation Briefs, 16. Retrieved from:
https://www.cdc.gov/healthyyouth/evaluation/pdf/brief16.pdf
This brief provides an overview of observations as a method of data
collection in evaluations, including guidelines around appropriate usage
and the advantages and disadvantages of using the method.
Taylor-Powell, E., Steele, S. (1996). Collecting Evaluation Data: Direct
Observation. Program Development and Evaluation, University of Wisconsin-
Extension. Retrieved from: http://learningstore.uwex.edu/assets/pdfs/g3658-
5.pdf
This report describes the value of using direct observations in program
evaluations and provides an overview of the process of collecting data
using observations.
Document Review (e.g., electronic health records)
Centers for Disease Control and Prevention. (2009). Data Collection Methods
for Program Evaluation: Document Review. Evaluation Briefs, 18. Retrieved from:
https://www.cdc.gov/healthyyouth/evaluation/pdf/brief18.pdf
This brief provides an overview of collecting evaluation data from
existing documents, including when to conduct document reviews as
well as advantages and disadvantages of relying on existing documents
for evaluation data.
Gearing, R. E., Mian, I. A., Barber, J., & Ickowicz, A. (2006). A Methodology for
Conducting Retrospective Chart Review Research in Child and Adolescent
Psychiatry. Journal of the Canadian Academy of Child and Adolescent Psychiatry,
15(3), 126–134. Retrieved from: https://www.ncbi.nlm.nih.gov/pmc/articles/
PMC2277255/
This journal article describes a nine-step process for conducting chart
reviews. Although geared towards child and adolescent psychiatry
research, it is relevant to evaluation research on a range of conditions.
Vassar, M. & Holzmann, M. (2013). The retrospective chart review: important
methodological considerations. Journal of educational evaluation for
health professions, 10. Retrieved from: https://www.e-sciencecentral.org/
articles/?scid=SC000000493
This journal article reviews important methodological considerations for
conducting chart reviews in evaluation research.
QUALITATIVE DATA ANALYSIS
Centers for Disease Control and Prevention. (2009). Analyzing Qualitative Data
for Evaluation. Evaluation Briefs, 19. Retrieved from: https://www.cdc.gov/
healthyyouth/evaluation/pdf/brief19.pdf
This brief includes an overview of qualitative data. It discusses planning
for qualitative data analysis; methods of analyzing qualitative data; and
the advantages and disadvantages of using qualitative data in program
evaluations.
Corbin, J., & Strauss, A. (2008). Basics of Qualitative Research: Techniques and
Procedures for Developing Grounded Theory (3rd ed.). Thousand Oaks, CA: Sage.
This textbook provides information on qualitative data analysis, with a
focus on the commonly-used grounded theory approach.
The Pell Institute. (2017). Evaluation Toolkit: Analyzing Qualitative Data.
Retrieved from: http://toolkit.pellinstitute.org/evaluation-guide/analyze/
analyze-qualitative-data/
This toolkit oers tips on analyzing qualitative data.
Silverman, D. (2006). Interpreting qualitative data: Methods for analyzing talk,
text and interaction. Sage Publications.
This textbook oers users the kind of hands-on training in qualitative
research required to guide them through the process.
QUANTITATIVE DATA ANALYSIS
Centers for Disease Control and Prevention. (2009). Analyzing Quantitative
Data for Evaluation. Evaluation Briefs, 20. Retrieved from: https://www.cdc.gov/
healthyyouth/evaluation/pdf/brief20.pdf
This brief includes an overview of quantitative data. It discusses
planning for quantitative data analysis; methods of analyzing
quantitative data; and the advantages and disadvantages of using
quantitative data in program evaluations.
Pell Institute. (2017). Evaluation Toolkit. Analyzing Quantitative Data.
Retrieved from: http://toolkit.pellinstitute.org/evaluation-guide/analyze/
analyze-quantitative-data/
This toolkit oers tips on analyzing quantitative data.
StatSo Electronic Statistics Textbook (2010). Retrieved from: http://www.
statso.com/textbook/stathome.html
This text covers basic and more advanced statistical techniques,
including topics such as data mining. Includes a “statistical advisor” to
help you select appropriate approaches to use.
Trochim, W. (2006). The Web Center for Social Research Methods.
Retrieved from: http://www.socialresearchmethods.net
This website provides social survey research/evaluation advice. Click on
the “Selecting Statistics” icon for suggestions on selecting appropriate
statistical techniques. Click on “Knowledge Base” for information on
program evaluation and data analysis approaches. Content includes
foundations of research, sampling, measurement, evaluation design,
statistical analysis, and writing reports.
United States General Accounting Oce, Program Evaluation and Methodology
Division. (1992). Quantitative Data Analysis: An Introduction. Retrieved from:
http://www.gao.gov/products/GAO/PEMD-10.1.11
This report provides basic information on evaluation design and
methods of quantitative analysis to those without statistical expertise.
Topics include calculating descriptive statistics and associations among
variables, along with estimating population parameters, determining
causation and avoiding pitfalls in data analysis.
EVALUATION REPORTING AND DISSEMINATION
OF RESULTS
Canadian International Development Agency. (2002). How to Perform
Evaluations - Evaluation Reports. Retrieved from: http://www.oecd.org/
dataoecd/51/60/35138852.pdf
This brief provides tips and checklists for writing each section of an
evaluation report.
Centers for Disease Control and Prevention. (2013). Evaluation reporting: A guide
to help ensure use of evaluation findings. Retrieved from: http://www.cdc.gov/
dhdsp/docs/evaluation_reporting_guide.pdf
This guide covers: (1) key considerations for effectively reporting
evaluation findings; (2) essential elements for evaluation reporting; (3)
importance of dissemination; and (4) tools and resources.
Emery, A. (n.d.). Ann’s blog: Equipping you to collect, analyze, and visualize data
[Blog archives]. Retrieved from: http://annkemery.com/blog/
This blog, written by the chair of the American Evaluation Association’s
data visualization interest group, provides tips on presenting data in a
pictorial or graphical format.
Evergreen, S. (2013). Presenting data effectively: Communicating your findings
for maximum impact. Thousand Oaks, CA: SAGE Publications.
This book focuses on the guiding principles of presenting data in ways
that eectively engage and inform audiences.
Holm-Hansen, C. (2008). Communicating evaluation results. Wilder Research.
Retrieved from: http://www.wilder.org/wilder-research/publications/studies/
program%20evaluation%20and%20research%20tips/communicating%20
evaluation%20results%20-%20tips%20for%20conducting%20program%20
evaluation%20issue%2014,%20fact%20sheet.pdf
This publication oers tips on organizing and analyzing quantitative and
qualitative data, as well as tips for writing reports.
Minter, E., & Michaud, M. (2003). Using Graphics to Report Evaluation Results.
Program Development and Evaluation, University of Wisconsin-Extension.
Retrieved from: https://learningstore.uwex.edu/Assets/pdfs/G3658-13.pdf
This booklet provides a brief overview of how to choose among common
types of graphics and ensure that they accurately represent your data.
Torres, R. T., Preskill, H.S., & Piontek, M.E. (2004). Evaluation strategies for
communicating and reporting: Enhancing learning in organizations, (Second
edition). Thousand Oaks, CA: SAGE Publications.
This book includes worksheets and instructions for creating a detailed
communicating and reporting plan based on audience needs and
characteristics.
SECONDARY DATA SOURCES AND TOOLS
Data Tools
Centers for Disease Control and Prevention. Community Health Status
Indicators (CHSI). Retrieved from: http://wwwn.cdc.gov/communityhealth
This is an interactive web application that produces health profiles for all
3,143 counties in the United States. Each profile includes key indicators
of health outcomes.
Centers for Disease Control and Prevention. Wide-ranging Online Data for
Epidemiologic Research (WONDER). Retrieved from: http://wonder.cdc.gov
This system provides users access to statistical research data
published by CDC, as well as reference materials, reports, and guidelines
on health-related topics.
County Health Rankings. Retrieved from: http://www.countyhealthrankings.org/
This site provides access to 50 state reports, ranking each county
within the 50 states according to its health outcomes and the multiple
health factors that determine a county’s health.
Health Indicators Warehouse. Retrieved from: http://www.healthindicators.gov/
The purpose of the site is to (1) provide a single source for national,
state, and community health indicators; (2) meet needs of multiple
population health initiatives; (3) facilitate harmonization of indicators
across initiatives; and (4) link indicators with evidence-based
interventions.
ROI Calculators
Agency for Healthcare Research and Quality. Asthma Return-on-Investment
Calculator. Available at: https://www.ahrq.gov/cpi/centers/ockt/kt/tools/
asthroisumm.html
This web-based tool was designed to support public health
practitioners and policy makers in estimating the cost savings and
nancial benets of improving the quality of asthma care at the state
level. The tool focuses on educational programs targeting disease
management, and provides information on care utilization, cost, and
asthma prevalence.
American Medical Association. Diabetes Prevention Program Cost Saving
Calculator. Available at: https://ama-roi-calculator.appspot.com/
An online calculator designed to evaluate the ROI of programs that aim
to prevent diabetes in pre-diabetic patients over a three-year period.
America’s Health Insurance Plans. Making the Business Case for Smoking
Cessation. Available at: http://www.businesscaseroi.org/roi/apps/calculator/
calcintro.aspx
This online tool was designed to support health insurance plans and
employers in estimating the ROI related to encouraging smoking
cessation and providing coverage for related treatment to employees or
beneciaries.
Center for Health Care Strategies. Return on Investment Forecasting Calculator.
Available at: http://chcsroi.org
A web-based tool created to help Medicaid stakeholders identify the
cost-savings potential of various quality initiatives. The tool supports
users in completing a step-by-step process to calculate ROI forecasts
for Medicaid quality initiatives generally, as well as a separate tool
specifically for assessing ROI for new care delivery models that rely on
health or medical homes.
Ensuring Solutions to Alcohol Problems. The Substance Use Disorder Calculator.
Available at: http://www.alcoholcostcalculator.org/sub/
This online calculator can be used to examine current costs related to
alcohol and substance use disorders as well as cost savings associated
with reducing the number of untreated individuals.
Wellsteps Wellness Solutions. ROI Calculator. Available at: https://www.
wellsteps.com/roi/resources_tools_roi_cal_health.php
A web-based ROI calculator specifically designed to assess ROI for
worksite wellness programs that promote healthy lifestyles that result
in changes in wellbeing (e.g., obesity and smoking rates). The calculator
looks at the impact of changes in wellness on health care costs and
productivity.
Select Secondary Data Sources
All Payer Claims Database Council: State Summary Map. Retrieved from:
https://www.apcdcouncil.org/state/map
This website oers a state-by-state summary on the current status
and availability of large-scale databases that systematically collect
health care claims data from a variety of health care payers, oen
referred to as “all-payer claims databases.
American Community Survey. Retrieved from: https://www.census.gov/
programs-surveys/acs/
The American Community Survey is designed to support policy makers
and community leaders in understanding basic information on their
community. It contains information on occupations, educational
attainment, veteran status, housing, and more. Data is available at
the state and county levels, as well as for some metropolitan areas.
Area Health Resource Files (AHRF). Retrieved from: https://datawarehouse.
hrsa.gov/topics/ahrf.aspx
The AHRF is a county-specic health resources data le that contains a
wide variety of information, including but not limited to: health facilities,
health professions, health status, economic activity, and socioeconomic
and environmental characteristics. The dataset is useful for program
planners and policy makers in describing the health care context at the
county, state and national level, and can be downloaded free of charge.
Behavioral Risk Factor Surveillance System. (BRFSS). Retrieved from: http://
www.cdc.gov/brfss/data_documentation/
Data includes state and county level information on health risk
behaviors, preventive health practices, and health care access primarily
related to chronic disease and injury.
The Healthcare Cost and Utilization Project (HCUP). Retrieved from: https://
www.ahrq.gov/research/data/hcup/index.html
Includes the largest collection of longitudinal hospital care data in the
United States, with all-payer, encounter-level information beginning
in 1988. These databases enable research on a broad range of health
policy issues, including cost and quality of health services, medical
practice patterns, access to health care programs, and outcomes of
treatments at the national, state, and local levels.
The Medical Expenditure Panel Survey (MEPS). Retrieved from: https://meps.
ahrq.gov/mepsweb/
Provides data on families and individuals, their medical providers, and
employers across the United States. MEPS is a comprehensive source
of data on health care cost, utilization and insurance coverage. A limited
dataset is available for download; researchers can apply for access to
the restricted data at the Agency for Healthcare Research and Quality's
Data Center.
National Health & Nutrition Examination Survey (NHANES). Retrieved from:
https://www.cdc.gov/nchs/nhanes/
Captures data on a variety of health issues (e.g., dietary behavior, health
conditions such as diabetes, high blood pressure, high cholesterol
and depression, and environmental exposures); some cities and
municipalities also conduct the survey at the local level.
The National Health Interview Survey (NHIS; formerly the Integrated Health Interview Series, IHIS). Retrieved from: https://
ihis.ipums.org/ihis/index.shtml
Collects information on the health, health care access, and health
behaviors of the civilian, non-institutionalized U.S. population, with
digital data les available from 1963 to present. Users can create
custom NHIS data extracts for analysis.
The National Longitudinal Study of Adolescent to Adult Health (Add Health).
Retrieved from: http://www.cpc.unc.edu/projects/addhealth
Provides access to data from a longitudinal survey that began in 1994,
which collects data on respondents’ social, economic, psychological,
and physical well-being. The site also provides data on family,
neighborhood, community, school, friendships, peer groups, and
romantic relationships, providing unique opportunities to study how
social environments and behaviors in adolescence are linked to health
and achievement outcomes in young adulthood.
Nursing Home Compare. Retrieved from: https://data.medicare.gov/data/
nursing-home-compare
Contains quarterly data on specific quality measures from every
Medicare- and Medicaid-certified nursing home in the United States. Available for
download free-of-charge.
Youth Risk Behavior Surveillance System (YRBSS). Retrieved from: https://
www.cdc.gov/healthyyouth/data/yrbs/
Monitors six types of health-risk behaviors that contribute to the
leading causes of death and disability among youth and young adults,
including sexual behaviors, behaviors that contribute to injuries and
violence, alcohol and other drug use, tobacco use, unhealthy dietary
behaviors, and inadequate physical activity. Available at the state level
and for certain large, urban counties.
The New York Academy of Medicine
1216 Fifth Avenue | New York, NY 10029
212.822.7200 | NYAM.org
© 2017 The New York Academy of Medicine. All rights reserved.
About the Academy
The New York Academy of Medicine advances
solutions that promote the health and
well-being of people in cities worldwide.
Established in 1847, The New York Academy
of Medicine continues to address the health
challenges facing New York City and the
world’s rapidly growing urban populations.
We accomplish this through our Institute
for Urban Health, home of interdisciplinary
research, evaluation, policy and program
initiatives; our world-class historical medical
library and its public programming in history,
the humanities and the arts; and our Fellows
program, a network of more than 2,000
experts elected by their peers from across
the professions affecting health. Our current
priorities are healthy aging, disease prevention,
and eliminating health disparities.
The views presented in this publication are
those of the authors and not necessarily those
of The New York Academy of Medicine, or its
Trustees, Ocers or Sta.