Standard Definitions
Final Dispositions of Case Codes and Outcome Rates for Surveys
Revised 2023
List Samples
Address-Based (ABS) Samples
Phone Samples
Online Panel Surveys
Calculating Outcome Rates from Final Dispositions
2023
THE AMERICAN ASSOCIATION FOR PUBLIC OPINION RESEARCH
Table of Contents
About This Report ................................................................................................................................... 5
Background ............................................................................................................................................. 6
This report: .......................................................................................................................................... 7
Introduction ............................................................................................................................................ 8
Using the Updated Standard Definitions Guide .................................................................................... 8
Final Disposition Codes ...................................................................................................................... 10
Completed and Partial Questionnaires ............................................................................................... 10
Modifications of the Final Disposition Codes .................................................................................. 11
Temporary vs. Final Disposition Codes ........................................................................................... 12
Substitutions ..................................................................................................................................... 14
Proxies .............................................................................................................................................. 14
Complex designs ................................................................................................................................ 15
Section 1: List Samples .......................................................................................................................... 16
Table of disposition codes ................................................................................................................. 17
Table 1.1. Valid Eligible, No Interview (Non-response) Dispositions for List Samples ...................... 17
Table 1.2 Valid Unknown Eligibility, Non-Interview Dispositions for List Samples ........................... 19
Table 1.3. Not Eligible 4.0 for List Samples ..................................................................................... 21
1.1 Mail Surveys of Specifically Named Persons/Entities .............................................................. 21
Eligible, no returned questionnaire (non-response) ....................................................................... 22
Unknown eligibility, no returned questionnaire ............................................................................. 24
Not eligible .................................................................................................................................... 25
1.2 Email Surveys of Lists of Specifically Named Persons .............................................................. 26
Eligible, No Returned Questionnaire (Non-response) ..................................................................... 28
Unknown Eligibility, No Questionnaire Returned............................................................................ 29
Not Eligible .................................................................................................................................... 29
1.3 Phone Surveys of Lists of Specifically Named Persons ............................................................. 30
Eligible, No Interview (Non-response) ............................................................................................ 30
Unknown Eligibility, Non-Interview ................................................................................................ 30
Not Eligible .................................................................................................................................... 30
1.4 In-Person Surveys of Lists of Specifically-Named Persons/Entities .......................................... 31
Eligible, No Interview (Non-response) ............................................................................................ 31
Unknown Eligibility, Non-Interview ................................................................................................ 31
Not Eligible .................................................................................................................................... 31
1.5 Web-Push Surveys of Lists of Specifically Named Persons ....................................................... 31
Eligible, No Interview (Non-response) ............................................................................................ 32
Unknown Eligibility, Non-Interview ................................................................................................ 32
Not Eligible .................................................................................................................................... 32
1.6 SMS (Short Message Service) or “Text Message” Surveys ....................................................... 32
Eligible cases that are not interviewed (non-respondents) ............................................................. 33
Cases of unknown eligibility ........................................................................................................... 34
Not Eligible .................................................................................................................................... 34
Section 2: Address-Based Samples (ABS)................................................................................................ 35
Within-Unit Screening ....................................................................................................................... 36
Using appended supplemental contact information ........................................................................... 37
Appending a Name to Randomly Selected Addresses ......................................................................... 38
Table of disposition codes ................................................................................................................. 38
Table 2.1. Valid Eligible, No Interview (non-response) Dispositions for Samples of Unnamed Addresses .......................... 39
Table 2.2. Valid Unknown Eligibility, Non-Interview Dispositions for Samples of Unnamed Addresses .......................... 41
Table 2.3. Valid Not Eligible Dispositions for Samples of Unnamed Addresses ................................ 45
2.1 Mail/Web-push surveys of Randomly Selected Addresses ...................................................... 45
Eligible, No Interview (Non-response) ............................................................................................ 45
Unknown Eligibility, Non-Interview ................................................................................................ 49
Not Eligible .................................................................................................................................... 49
2.2 In-person surveys of Randomly Selected Addresses (ABS) ...................................................... 50
Eligible, No Interview (Non-response) ............................................................................................ 51
Unknown Eligibility, Non-Interview ................................................................................................ 51
Not Eligible .................................................................................................................................... 51
2.3 Email surveys of Randomly Selected Addresses ...................................................................... 51
2.4 Phone Surveys of Randomly-Selected Addresses .................................................................... 52
2.5 SMS (Short Message Service) or “Text Message” Surveys ....................................................... 53
Section 3: Phone Samples, Random-Digit Dial (RDD) .............................................................................. 55
Within-Unit Selection ........................................................................................................................ 56
Dual-frame (DFRDD) Samples ............................................................................................................ 57
Table of disposition codes ................................................................................................................. 58
Table 3.1. Valid Eligible, No Interview (non-response) Dispositions for RDD Samples ..................... 59
Table 3.2. Valid Unknown Eligibility, Non-Interview Dispositions for RDD Samples ......................... 61
Table 3.3. Valid Not Eligible Dispositions for RDD Samples ............................................................. 64
3.1 RDD Phone Surveys ................................................................................................................ 65
Eligible, No Interview (Non-response) ............................................................................................ 65
Unknown Eligibility, Non-Interview ................................................................................................ 67
Not Eligible .................................................................................................................................... 68
3.2 SMS/Text Messaging .............................................................................................................. 69
Eligible, No Interview (Non-response) ............................................................................................ 70
Unknown Eligibility, Non-Interview ................................................................................................ 70
Not Eligible .................................................................................................................................... 71
Section 4: Online Panel Surveys ............................................................................................................. 72
Probability-Based Internet Panels ...................................................................................................... 72
Table of disposition codes ................................................................................................................. 73
Table 4.1. Valid Eligible, No Interview (non-response) Dispositions for Samples from Online Probability Panels .......................... 74
Table 4.2. Valid Unknown Eligibility, Non-Interview Dispositions for Samples from Online Probability Panels .......................... 75
Table 4.3. Valid Not Eligible Dispositions for Samples from Online Probability Panels ..................... 77
Online Non-Probability Samples ........................................................................................................ 78
Section 5. Conclusion............................................................................................................................. 80
Section 6. References ............................................................................................................................ 81
Section 7. Calculating Outcome Rates from Final Disposition Distributions ............................................ 85
Calculating Outcome Rates from Final Disposition Distributions ........................................................ 85
Response Rates ................................................................................................................................. 85
Cooperation Rates ............................................................................................................................. 87
Refusal Rates ..................................................................................................................................... 87
Contact Rates .................................................................................................................................... 88
Some Complex Designs ...................................................................................................................... 88
Multistage Sample Designs ............................................................................................................ 88
Single-Stage Samples with Unequal Probabilities of Selection ........................................................ 89
Two-Phase Sample Designs ............................................................................................................ 90
About This Report
Standard Definitions has been a work in progress, with this being the tenth major edition. The American
Association for Public Opinion Research (AAPOR) plans to continue updating it going forward, adding
comparable definitions for other modes of data collection and making other refinements as appropriate.
AAPOR also will be working with other organizations to further the widespread adoption and utilization
of Standard Definitions. AAPOR has been asking academic journals to use AAPOR standards in
evaluating and publishing articles; several, including Public Opinion Quarterly and the International
Journal of Public Opinion Research, have agreed to do so.
The first edition (1998) was based on the work of a committee headed by Tom W. Smith. Other AAPOR
members who served on the committee include Barbara Bailar, Mick Couper, Donald Dillman, Robert M.
Groves, William D. Kalsbeek, Jack Ludwig, Peter V. Miller, Harry O’Neill, and Stanley Presser. The second
edition (2000) was edited by Rob Daves, who chaired a group that included Janice Ballou, Paul J.
Lavrakas, David Moore, and Smith. Lavrakas led the writing for the portions dealing with mail surveys of
specifically named persons and for the reorganization of the earlier edition. The group wishes to thank
Don Dillman and David Demers for their comments on a draft of this edition. The third edition (2004)
was edited by Smith, who chaired a committee of Daves, Lavrakas, Daniel M. Merkle, and Couper.
Groves and Mike Brick mainly contributed the new material on complex samples. The fourth edition was
edited by Smith, who chaired a committee of Daves, Lavrakas, Couper, Shap Wolf, and Nancy
Mathiowetz. The new material on Internet surveys was mainly contributed by a subcommittee chaired
by Couper, with Lavrakas, Smith, and Tracy Tuten Ryan as members.
The fifth edition was edited by Smith, who chaired the committee of Daves, Lavrakas, Couper, Mary
Losch, and J. Michael Brick. New material in the fifth edition largely relates to the handling of cell
phones in surveys. The sixth edition was edited by Smith, who chaired the committee of Daves,
Lavrakas, Couper, Reg Baker, and Jon Cohen. Lavrakas led the updating of the section on postal codes.
Changes mainly dealt with mixed-mode surveys and methods for estimating eligibility rates for unknown
cases. The seventh edition was edited by Smith, who chaired the committee of Daves, Lavrakas, Couper,
Timothy Johnson, and Richard Morin. Couper led the updating of the section on internet surveys, and
Sara Zuckerbraun drafted the section on establishment surveys. The eighth edition was edited by Smith,
who chaired the committee of Daves, Lavrakas, Couper, and Johnson. Sara Zuckerbraun and Katherine
Morton developed the revised section on establishment surveys. The section on dual-frame phone
surveys was prepared by a sub-committee headed by Daves, with Smith, David Dutwin, Mario Callegaro,
and Mansour Fahimi as members. The ninth edition was edited by Smith, who chaired the committee of
Daves, Lavrakas, Couper, Johnson, and Dutwin. The new section on mail surveys of unnamed persons was
prepared by a sub-committee headed by Dutwin with Couper, Daves, Johnson, Lavrakas, and Smith as
members.
This tenth edition was edited by Ned English, with significant contributions by Ashley Kirzinger, Ashley
Amaya, Cameron McPhee, Jenny Marlar, Mickey Jackson, Jennifer Berktold, and Amanda Nagle. Amaya
and McPhee led the revision and update of dispositions for this new version and drove much of the
restructuring. Nagle, McPhee, and P.J. Lugtig led the new paper on calculating e, which will appear
separately. Additional support for this edition was provided by Kristen Olson, Ashley Hyon, Ben Phillips,
Stephen Immerwahr, and Clifford Young. We also removed the section on establishment surveys, which
needs updating and will be included in an addendum.
The tenth edition represents a wholesale reorganization of Standard Definitions, structured by frame
rather than mode as in previous versions to allow greater clarity and flexibility for users. We feel this
organization is more consistent with current survey designs and methodologies, specifically multi-mode
data collection, as modes can be appropriate for various frames and vice-versa. We also have a new
discussion of multi-mode designs and material on SMS (text) contact.
How to cite this report
This report was developed for AAPOR as a service to public opinion research and the survey research
industry, so please feel free to cite it. AAPOR requests that you use the following citation:
The American Association for Public Opinion Research. 2023. Standard Definitions:
Final Dispositions of Case Codes and Outcome Rates for Surveys. 10th edition. AAPOR.
Background
Survey researchers have needed comprehensive and reliable diagnostic tools to understand the
components of total survey error. Some components, such as margin of sampling error, are relatively
easily calculated and familiar to many who use survey research. Other components, such as the
influence of question-wording on responses, are more difficult to ascertain. Groves (1989) catalogues
error into three other major potential areas where it can occur in sample surveys. One is coverage,
where error can result if some members of the population under study do not have a known nonzero
chance of being included in the sample. Another is measurement effect, such as when the instrument
or items on the instrument are constructed in such a way as to produce unreliable or invalid data. The
third is nonresponse error, where nonrespondents in the sample that researchers initially drew differ
from respondents in ways that are germane to the survey's objectives.
Often it is assumed, correctly or not, that the lower the response rate, the more question there is
about the validity of the sample. At the same time, the survey research industry has seen wholesale
declines in response rate in recent decades across mode and design (Curtin et al. 2005, Dutwin and
Lavrakas 2015, Brick and Williams 2013, de Leeuw et al. 2002). Although response rate information
alone is insufficient for determining how much nonresponse error exists in a survey, or even whether it
exists, calculating the rates is a critical first step to understanding the presence of this component of
potential survey error. By knowing the disposition of every element drawn in a survey sample,
researchers can assess whether their sample might contain nonresponse error and the potential reasons
for that error. Defining final disposition codes and calculating study outcome rates is the topic for this
report.
With this report, AAPOR offers an updated tool that can be used as a guide to quantifying one important
aspect of a survey’s quality. It is a comprehensive, well-delineated way of describing the final
disposition of cases and calculating outcome rates for surveys conducted using samples selected from a
variety of frames (list frames, RDD phone frames, address-based frames, as well as online sample
frames) and for data collected through multiple modes, including web, phone, paper-and-pencil, and in-
person. These modes may be used alone or in combination.
The AAPOR Council stresses that all disclosure elements, not just selected ones, are important to
evaluate a survey. The Council has cautioned that there is no single number or measure that reflects the
total quality of a sample survey. As such, the information in this report should be used to report
outcome rates. Researchers will meet AAPOR's Standards for Minimal Disclosure requirements (Part III
of the Code of Professional Ethics and Practices) if they report final disposition codes as they are
outlined in this report, along with the other disclosure items. AAPOR's statement on reporting final
disposition codes and outcome rates can be found at the back of this booklet.
With this 10th edition, AAPOR hopes to continue the standardization of the codes researchers use to
catalogue the dispositions of sampled cases and their outcome rates. This objective requires a common
language and definitions the research industry can share. AAPOR urges all practitioners to use these
codes in all reports of survey methods, no matter if the project is proprietary work for private sector
clients or a public, government, or academic survey. This will enable researchers to find common
ground to compare the outcome rates for different surveys.
As observed by Tom Smith in the Ninth Edition, Linnaeus noted that “method [is] the soul of science.”
There have been earlier attempts at methodically defining response rates and disposition categories.
One of the best attempts is the 1982 Special Report on the Definition of Response Rates, issued by the
Council of American Survey Research Organizations (CASRO). The AAPOR members who wrote the
current report extended the 1982 CASRO report, building on its formulas and definitions of disposition
categories.
This report:
• Has separate sections by frame, defined as list samples, address-based samples (ABS), phone samples, and other situations.
• Contains an updated, detailed, and comprehensive set of definitions for the four major types of survey case dispositions: interviews, non-respondents, cases of unknown eligibility, and cases ineligible to be interviewed.
• Contains tables delineating final disposition codes.
• Provides operational definitions and formulas for calculating response rates, cooperation rates, refusal rates, and contact rates (a brief computational sketch follows this list). The full set of definitions and formulas can be found at the end of the report. Here are some basic definitions that the report details:
  o Response rates - The number of complete interviews with reporting units divided by the number of eligible reporting units in the sample. The report provides six definitions of response rates, ranging from the definition that yields the lowest rate to the definition that yields the highest rate, depending on how partial interviews are considered and how cases of unknown eligibility are handled.
  o Cooperation rates - The proportion of all cases interviewed out of all eligible units ever contacted. The report provides four definitions of cooperation rates, ranging from a minimum or lowest rate to a maximum or highest rate.
  o Refusal rates - The proportion of all cases in which a housing unit or the selected respondent refuses to be interviewed or breaks off an interview out of all potentially eligible cases. The report provides three definitions of refusal rates, which differ in how they treat dispositions of cases of unknown eligibility.
  o Contact rates - The proportion of all cases in which the survey reached some responsible housing unit member. The rates here are household-level rates. They are based on contact with households, including respondents, rather than contacts with respondents only. Respondent-level contact rates could also be calculated using only contact with and refusals from known, eligible respondents.
• Demonstrates how to calculate response rates for dual-frame RDD samples that require multiple estimates for cases of unknown eligibility, or e.
• Provides an updated bibliography for researchers who want to understand better the influences of non-random error (bias) in surveys.
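To make these definitions concrete, the sketch below computes several of the rates from a final disposition distribution, following the formulas detailed in the rate-calculation section at the end of this report. It is a minimal illustration written in Python purely for exposition: the variable names (I, P, R, NC, O, UH, UO), the estimated-eligibility factor e, and the example counts are illustrative assumptions, not prescribed notation.

def outcome_rates(I, P, R, NC, O, UH, UO, e=1.0):
    """Illustrative outcome rates computed from final disposition counts.

    I  = complete interviews        P  = partial interviews
    R  = refusals and break-offs    NC = non-contacts
    O  = other eligible non-interviews
    UH, UO = cases of unknown eligibility
    e  = estimated proportion of unknown-eligibility cases that are eligible
    """
    eligible = I + P + R + NC + O        # cases known to be eligible
    unknown = UH + UO                    # cases of unknown eligibility
    return {
        # Response rates: completes (and partials) over eligible cases plus
        # all, an e-weighted share of, or none of the unknown-eligibility cases.
        "RR1": I / (eligible + unknown),
        "RR2": (I + P) / (eligible + unknown),
        "RR3": I / (eligible + e * unknown),
        "RR4": (I + P) / (eligible + e * unknown),
        "RR5": I / eligible,
        "RR6": (I + P) / eligible,
        # Minimum cooperation rate: interviews over eligible units ever contacted.
        "COOP1": I / (I + P + R + O),
        # Refusal rate: refusals over all potentially eligible cases.
        "REF1": R / (eligible + unknown),
        # Contact rate: cases with any contact over all potentially eligible cases.
        "CON1": (I + P + R + O) / (eligible + unknown),
    }

# Hypothetical disposition counts for a small study, with e estimated at 0.40.
print(outcome_rates(I=550, P=50, R=200, NC=150, O=25, UH=60, UO=15, e=0.40))

Whichever rates are reported, researchers should state which definitions were used and, where relevant, how e was estimated.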
Introduction
Using the Updated Standard Definitions Guide
There are a few topics that users should bear in mind when reading Standard Definitions for the first
time, which we summarize below. First, this tenth version of Standard Definitions has been re-organized
by frame rather than by data collection mode as in previous versions. This change is intended to reflect
how researchers design surveys in the present day and to better accommodate multi-mode studies. An
important theme throughout this report is that case dispositions are tied to the frame from which a case
is sampled: some dispositions can be applied consistently across frames, while others are appropriate
only for a specific frame, as reflected in the tables presented at the start of each section.
Researchers should also be aware that the salience of a disposition can vary depending on frame,
especially with respect to eligibility status.
Second, we acknowledge the proliferation of multi-mode designs in the survey industry over the past
decade. Multi-mode, or mixed-mode, designs can consist of surveys in which separate samples are each
measured with different modes, a unified sample in which multiple modes are used for individual cases,
or a combination of both. As noted by an AAPOR task force, many
large-scale surveys have been transitioning from phone to combinations of multiple modes for
recruitment and survey administration, where phone may be only one of a number of modes that are
used, if at all (AAPOR, 2019). Multi-mode designs are utilized in surveys for a number of reasons: 1)
improving coverage; 2) increasing response rates and reducing non-response error; 3) reducing costs;
and 4) improving measurement (de Leeuw, 2018).
One example of a multi-mode survey of specifically named persons would be a survey of the AAPOR
membership, where members receive a postcard invitation with a link to an online survey, and
nonrespondents are subsequently contacted with a paper-and-pencil mail survey or by a live phone
interviewer. This type of web-push survey (see Section 1.5) is one example of a multi-mode survey, but
multi-mode surveys can be much more complex in their design. Multi-mode surveys can be sequential,
where different modes are offered to respondents in sequence, or concurrent, where respondents are
offered a choice of data collection mode. Other multi-mode surveys are multi-frame surveys. For
example, a study may combine an address-based sample of unnamed individuals (see Section 2) with
supplemental phone or online samples in an attempt to reach essential subgroups. Sampled units from
each frame may be contacted via various modes (e.g., mail, phone, email). The assignment of disposition
codes for these samples will vary by sampling frame. Users of the Guide should refer to different
sections if a multi-mode design utilizes multiple frames (e.g., ABS and RDD). Multi-mode designs that
use the same frame can refer to the mode references in the same section.
With respect to the AAPOR response rate calculator, users should make determinations about case
eligibility based on the sample frame from which a case is sampled. Among those known to be eligible
(disposition codes starting 1.0 or 2.0), specific interview sub-disposition codes will often be determined
by the contact or data collection mode(s). Disposition codes related to unknown eligibility or
participant ineligibility (3.0, 4.0) will often be determined by considerations related to the sample frame.
Following the example of the multi-mode survey of AAPOR members, all interview disposition codes
would draw from those discussed in the section on list samples (Section 1). The multiple modes of
contact and response (i.e., mail, web, and phone) will determine the subcodes used to classify cases into
different subsets of respondents, eligible nonrespondents, and ineligibles. However, whether the case
information leads to the classification of a sampled case as eligible (1.0 and 2.0), ineligible (4.0), or
unknown (3.0) will be based on the fact that a list of named individuals was used as the sample frame.
This is an important distinction that we address throughout the document. For example, a notification
that a piece of mail could not be delivered to a particular sampled unit on a list frame indicates a
locating issue. In contrast, a unit sampled from an address-based frame producing this notification
would likely be considered an ineligible sampled unit. Consequently, we expect appropriate disposition
categories and their outcomes to vary depending on frame.
For suggestions on keeping track of cases across modes, see Chearo and Van Haitsma (2010).
Third, there are many schemes for classifying the final disposition of cases in a survey. Previous
Standard Definitions committees reviewed more than two dozen classifications and found no two
exactly alike. They distinguished between seven and 28 basic categories. Many codes were unique to a
particular study and categories often were neither clearly defined nor comparable across surveys.[1]

[1] Examples of some published classifications can be found in Hidiroglou et al., 1993; Frey, 1989; Lavrakas, 1993; Lessler and Kalsbeek, 1992; Massey, 1995; and Wiseman and McDonald, 1978 and 1980.
To limit the complexity of final disposition codes as much as possible and to allow the comparable
reporting of final dispositions and consistent calculation of outcome rates, AAPOR proposes a
standardized classification system for final disposition of sample cases, and a series of formulas that use
these codes to define and calculate the various rates.
A detailed report of the final disposition status of all sampled cases in a survey is vital for documenting a
survey’s performance and determining various outcome rates. Such a record is as important as detailed
business ledgers are to a bank or business. In recognition of this premise, the reports on the final
disposition of cases are often referred to as accounting tables (Frankel, 1983; Madow et al., 1983). They
are as essential to a well-documented survey as the former are to a well-organized business.[2]

[2] The AAPOR statement on "best practices" (AAPOR, 1997, p. 9) calls for the disclosure of the "size of samples and sample disposition, the results of sample implementation, including a full accounting of the final outcome of all sample cases: e.g., total number of sample elements contacted, those not assigned or reached, refusals, terminations, non-eligibles, and completed interviews or questionnaires …"
Assigning disposition codes to cases in-field is not entirely straightforward. Cases may be contacted at
multiple points through potentially different contact modes, yielding different outcomes (e.g., a survey
sequentially attains three dispositions for a single sample element: a non-contact, a soft refusal, and
another non-contact). Researchers should assign the “highest” disposition to cases, reflecting the most
information obtained thus far. A “soft refusal” establishes that a sampling unit contains a household,
while a “non-contact” provides less information about the sampled unit. Following the four disposition
categories described below, category 1 (completes) is the highest disposition, followed by category 4
(ineligible cases), category 2 (eligible cases that are not interviewed), and finally category 3 (cases of
unknown eligibility). So, in our simplified example, we would assign the case in question a final
disposition of “soft refusal” if data collection ended, as that provides more information than the most
recent “non-contact” disposition.
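As a minimal sketch of this “highest disposition” rule, the Python fragment below keeps the temporary disposition whose category carries the most information, with a refusal outranking a non-contact within the eligible non-interview category and the most recent attempt breaking any remaining ties. The specific codes in the example (2.11 for a refusal; 3.12 standing in for a generic unknown-eligibility non-contact) and the tiny ranking table are illustrative assumptions, not a complete implementation of the rules discussed in this report.

def rank(code):
    """Lower rank = more informative. Category 1 > 4 > 2 > 3, and within the
    eligible non-interview category a refusal or break-off (2.1x) outranks a
    non-contact (2.2x), per the hierarchy discussed in the text."""
    category = int(code.split(".")[0])
    base = {1: 0, 4: 1, 2: 2, 3: 4}[category]
    if code.startswith("2.2"):
        base += 1        # non-contacts carry less information than refusals
    return base

def final_disposition(temporary_codes):
    """Most informative disposition from a case's attempt history; the most
    recent attempt breaks any remaining ties."""
    best = None
    for code in temporary_codes:               # iterate in attempt order
        if best is None or rank(code) <= rank(best):
            best = code
    return best

# The example from the text: a non-contact, a soft refusal, then another
# non-contact still ends the field period classified as a refusal.
print(final_disposition(["3.12", "2.11", "3.12"]))   # -> "2.11"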
Final Disposition Codes
Survey cases can be divided into four main categories:
Category 1: Completed interviews;
Category 2: Eligible cases that are not interviewed (non-respondents);
Category 3: Cases of unknown eligibility; and
Category 4: Cases that are not eligible.
The following text and the tables at the end of this report are organized to reflect these four categories.
Although these classifications could be refined further (and some examples of further sub-categories are
mentioned in the text), they are meant to be mutually exclusive and exhaustive in that all possible final
dispositions should fit under one and only one of these categories.
The first of the following sections covers list samples, regardless of the mode used to contact them. We
discuss designs based on web-push surveys, phone surveys, mail surveys, in-person surveys, and multi-
mode surveys of specifically named people. We also consider text or SMS surveys of specifically named
people, such as those drawn from list- and registry-based designs.
The second section deals with address-based sampling (or ABS) designs, employing any modes
mentioned in Section 1 for list samples.
The third section handles phone samples, specifically random-digit-dial (or RDD), with the fourth section
covering online panel surveys.
The four individual frame-oriented sections contain some redundancy, which is intentional, so
researchers interested only in one frame or mode can learn about the disposition codes for that frame
and mode without reading the sections dealing with others.
Completed and Partial Questionnaires
The definition of completed interviews is consistent across modes, so we address them in the
introduction. In any mode we can consider multiple levels of completion of the instrument, as described
in Table 1, specifically completes and partials. The distinction between completes and partials is the
proportion of questions answered by a respondent and should be defined by the researcher before data
collection using justifiable criteria. At one extreme, a respondent may provide an answer to each item.
But some respondents will get partway through the questionnaire and then, for various reasons, fail to
ever complete it, but still provide sufficient information as to be a “partial” rather than a “breakoff,” the
latter category being a type of refusal. In any case, a survey must provide a clear definition of these
statuses. Researchers may choose to report response rates with and without partials in the numerator,
for example, to illustrate the importance of partials and their definition to a given study.
Table 1. Valid Interview Dispositions across Modes
Description
Value
Notes & Examples
Interview
1.0
A priori definitions are required to determine whether a case is a complete
or partial interview (or a breakoff). Three widely used standards for
defining these three statuses are:
a) the proportion of all applicable questions answered,
b) the proportion of all applicable questions asked, and
c) the proportion of crucial or essential questions answered (Frankel,
1983).
The above standards could be used in combination.
Complete
1.1
Example A: More than 80% of questions answered
Example B: More than 80% of questions asked
Example C: 100% of crucial or essential questions answered
Complete by Proxy
1.11
Partial
1.2
Example A: 50%-80% of questions answered
Example B: 50%-80% of questions asked
Example C: 50%-99% of crucial or essential questions answered
Partial by Proxy
1.21
How these types of incomplete cases are classified depends on the objectives of the survey, the relative
importance of various questions in the instrument, and the particular design of the survey (whether, for
example, it is permitted to skip items without providing an answer). The sections of this document
covering the different modes of data collection for each frame discuss the decision rules for classifying
cases as complete versus partial versus break-off, so that discussion is not repeated here. The break-off
category could be further differentiated by the sections or even items at which the break-off occurred,
depending on the importance of those sections to the survey.
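One way to operationalize the Example A thresholds from Table 1 is sketched below: a case is classified by the proportion of applicable questions answered, with more than 80% treated as complete, 50%-80% as partial, and anything less as a break-off. The cut-points echo the table's examples but remain a priori, study-specific decisions; the function name and its inputs are illustrative assumptions.

def classify_interview(n_answered, n_applicable,
                       complete_cutoff=0.80, partial_cutoff=0.50):
    """Classify a case as complete (1.1), partial (1.2), or break-off (2.12)
    using Example A from Table 1: proportion of applicable questions answered.
    Cut-points are a priori, study-specific decisions."""
    if n_applicable == 0:
        raise ValueError("No applicable questions; classification is undefined.")
    proportion = n_answered / n_applicable
    if proportion > complete_cutoff:
        return "1.1 complete"
    if proportion >= partial_cutoff:
        return "1.2 partial"
    return "2.12 break-off"

# A respondent who answered 42 of 60 applicable items (70%) would be a partial.
print(classify_interview(42, 60))   # -> "1.2 partial"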
Modifications of the Final Disposition Codes
It is permissible to collapse categories if this does not compromise the calculation of outcome rates. For
example, refusals and break-offs can be reported together as 2.10 rather than separately as 2.11 and 2.12,
and the various other dispositions (2.31-2.37) can be reported as a generic other (2.3). Simplifications are permissible when they do not
obscure any of the standard rates delineated below. For example, no outcome rates depend on the
distinctions among non-contacts (2.21-2.27), so only the summary code 2.20 could be used if surveys
wanted to keep the number of categories limited. Simplified categories do not redefine classes or
remove the need for clear definitions of sub-classes not separately reported (e.g., break-offs).
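Where detailed sub-codes are not needed in reporting, they can be rolled up to their summary codes in this way. The fragment below is a minimal sketch of such a roll-up; the handful of codes shown is only an illustrative subset of the tables in this report.

def collapse(code):
    """Collapse a detailed disposition code (e.g., "2.11" or "2.27") to its
    summary code (e.g., "2.10" or "2.20") by keeping only the first sub-digit.
    Such simplification must not obscure any of the standard outcome rates."""
    major, minor = code.split(".")
    return f"{major}.{minor[0]}0"

# Refusals (2.11) and break-offs (2.12) both report as 2.10; the various
# non-contacts (2.21-2.27) all report as 2.20.
for detailed in ["2.11", "2.12", "2.21", "2.27"]:
    print(detailed, "->", collapse(detailed))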
As indicated above, more refined codes may be useful in general and for special studies. These should
consist of sub-codes under the categories listed in the relevant tables in each section. If researchers
want categories that cut across codes in the tables, they should record those categories as part of a
separate classification system or distinguish them as sub-codes under two or more of the codes already
provided. For example, one could subdivide refusals into a) refusals by the respondent; b) broken
appointments to avoid an interview; c) refusals by other household members; and d) refusals by a
household member when the respondent is unknown. These refusal distinctions can be especially
valuable when a survey deploys a “refusal conversion” process (Lavrakas, 1993). It is important to note
that while it is possible to subdivide a category in this way, it is not possible to define categories that
cross the four main categories listed under "Final Disposition Codes" above.
Temporary vs. Final Disposition Codes
Several disposition classifications used within the industry may include codes that more appropriately
reflect a temporary case status. Examples include:
• Maximum call limit met,
• Call back, respondent selected,
• Call back, respondent not selected,
• No call back by date of collection cut-off, and
• Broken appointments.
These and other temporary dispositions often are peculiar to individual CATI systems and survey
operations and are not necessarily dealt with here. These temporary, attempt-specific codes should be
replaced with final disposition codes listed in the tables in each section when final dispositions are
determined at the end of the survey.
In converting temporary codes into final disposition codes, one first must use appropriate temporary
codes. Temporary disposition codes should reflect the outcome of specific contact attempts before the
case is finalized. Many organizations mix disposition codes with what can be called action codes. Action
codes do not indicate the result of a contact attempt but what the status of the case is after a particular
attempt and what steps are to be taken next. Examples of these are:
• Maximum Number of Attempts
• General Callback
• Supervisor Review
In each case, these codes fail to indicate the outcome of the last contact attempt but instead suggest
the next required action (respectively, no further calls, callback, and supervisor to decide on the next
step). While action codes are important from a survey management point of view, they should not be
used as contact-specific, temporary disposition codes. Action codes are generally based on summaries
of the status of cases across attempts to date. In effect, they consider the case history to date, indicate
the summary status, and usually also the next step. These should also be distinguished from final codes,
representing the most informative status of a case at the close of data collection.
Typically, one will need to select a final disposition code from the often numerous and varied temporary
disposition codes. In considering the conversion of temporary to final disposition codes, one must
consider the best information from all contact attempts and how that information connects to the
frame from which the sample was selected. Temporary disposition codes may lead to different final
status codes for different frames. For example, a phone disposition indicating that a business has been
reached instead of a residential household would likely be coded with a final disposition of ineligible
(4.51) for an RDD phone survey. Still, due to the uncertainty introduced by phone matching, it may lead
to an unknown eligibility status of 3.1263 for an ABS survey that includes phone contacts. In deciding
between various possibly contradictory outcomes, four factors need to be considered: 1) status day, 2)
uncertainty of information, 3) hierarchy of disposition codes, and 4) the frame from which the sample
was selected.[3]

[3] For a discussion of assigning codes, see McCarty, Christopher, "Differences in Response Rates Using Most Recent Versus Final Dispositions in Phone Surveys," Public Opinion Quarterly, 67 (2003), 396-406.
First, when eligibility is based on criteria that can change over time, it is necessary to choose a date on
which eligibility is determined, referred to as the “status day.” For example, suppose that the target
population is 18 to 65. Suppose further that when initial contact is made with a respondent, they are 65
and thus qualify for the study; an appointment is made to complete an interview later. If by that date,
the respondent has turned 66, the status date should determine their classification; specifically, the
respondent would remain eligible if they turned 66 after the status date but would be classified as
ineligible if they turned 66 before the status date. Similar considerations would apply if a person was
initially confirmed as eligible and selected to complete an interview but passed away before an
interview could be completed.
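A small sketch of the status-day rule in the age example above: eligibility is fixed by the respondent's status on the chosen status day, not by their status at the time of a later interview attempt. The 18-to-65 bounds mirror the example in the text; the function names, dates, and birth date are illustrative assumptions.

from datetime import date

def age_on(status_day: date, birth_date: date) -> int:
    """Whole years of age as of the status day."""
    had_birthday = (status_day.month, status_day.day) >= (birth_date.month, birth_date.day)
    return status_day.year - birth_date.year - (0 if had_birthday else 1)

def eligible_on_status_day(status_day: date, birth_date: date,
                           min_age: int = 18, max_age: int = 65) -> bool:
    """Eligibility is determined by the status day, even if the respondent's
    age changes before the interview can be completed."""
    return min_age <= age_on(status_day, birth_date) <= max_age

# A respondent who is 65 on the status day remains eligible even if they turn
# 66 before the interview; had the status day fallen after the birthday, the
# case would be classified as ineligible.
print(eligible_on_status_day(date(2023, 3, 1), date(1957, 6, 15)))   # True (age 65)
print(eligible_on_status_day(date(2023, 7, 1), date(1957, 6, 15)))   # False (age 66)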
Second, information on a case may be uncertain due to contradictory information across or within
attempts. For example, one neighbor reported that a residence is vacant versus other evidence that it
may be occupied, or one mailing coming back as undeliverable with a U.S. Postal Service (USPS) “vacant”
code, while another yields a refusal. Uncertainty can also stem from a lack of sufficient information to
determine eligibility, for example, whether the sample unit contains a member of the target population. If the definitive situation for
a case cannot be determined, one should take the conservative approach of assuming the case is eligible
or possibly eligible rather than not eligible.
Next, there is a hierarchy of disposition codes in which certain temporary codes take precedence over
others. If no final disposition code is clearly assigned (e.g., completed case, all attempts coded as
refusals), the outcome of the last attempt involving contact with a sampled household or respondent will
determine the final disposition code.
Following the logic of the some-contact-over-other-outcome rule means that once there was a refusal,
the case would ultimately be classified as a refusal unless: a) the case was converted into an interview or
b) definitive information was obtained later that the case was not eligible (e.g., did not meet screening
criteria). For example, repeated no answers after a refusal would not lead to the case being classified as
no contact, nor would a subsequent disconnected phone number justify it being considered a non-
working number.
Likewise, in converting temporary codes into final codes, a case that involved an appointment that did
not end as an interview might be classified as a final refusal, even if a refusal was never explicitly given,
depending on circumstances. Unless there is specific evidence to suggest otherwise, it is recommended
that such cases be classified as a refusal.
If no final disposition code is clearly assigned and there is no contact of any kind on any attempt,
precedence should be given to the outcome providing the most information about the case. If there are
different non-human-contact outcomes and none are more informative than the others, one would
generally base the final disposition code on the last contact. For example, in a case sampled from a
phone number frame consisting of a combination of ring-no-answer, busy-signal, and answering-machine
outcomes, the final code would be answering machine (3.123 for RDD, or 2.22 if working with a named
list sample and the name is confirmed in the outgoing message) rather than one of the other
disposition codes.
Of course, when applying these hierarchy rules, one must also follow the status day and uncertainty
guidelines discussed above.
Finally, as noted above, the frame from which a sampled unit is selected must be considered in the
assignment of final disposition codes. When possible, disposition codes must provide information about
the sampled unit, regardless of the contact mode for a given attempt. For example, the sampled unit will
be a phone number for an RDD frame. In contrast, the unit sampled from an ABS frame will be an
address or housing unit, which differs from a case sampled from a list or registry-based sample, which
may be a specifically named individual. A mailed survey to an ABS unit that is returned with information
that the household resident is deceased should not be coded as 4.11 (deceased) if there is a chance that
the housing unit is occupied by another individual since the sampling unit is the address, not a specific
person. However, the same type of mailed contact attempt would be classified as 4.11 if that specific
individual was sampled from a list (e.g., from a company’s employee list). Similarly, a soft refusal
provided to a phone number matched to an ABS sample that was not verified to be at the expected
address would be considered unknown eligibility (3.126). The same information provided in the RDD
context would be classified as a refusal (2.10) if no other eligibility criteria are required for that study.
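These frame-dependent contrasts can be represented as a small lookup keyed by frame and field outcome, as sketched below. The code assignments shown are taken from the examples in this paragraph (4.11 for a deceased named individual on a list frame, 3.126 for an unverified soft refusal on an ABS-matched number, and 2.10 for the same refusal in an RDD context); the data structure, the outcome labels, and the "review" placeholder are illustrative assumptions, not part of the standard.

# Final-code assignment depends on the frame, not just on the contact outcome.
# Keys are (frame, outcome); values are final dispositions drawn from the
# examples in the text. Outcomes not listed would need case-by-case review.
FINAL_CODE_BY_FRAME = {
    ("list", "mail returned - addressee deceased"): "4.11 (not eligible: named person deceased)",
    ("abs",  "mail returned - resident deceased"):  "review: the address may still contain an eligible unit",
    ("abs",  "soft refusal on unverified matched phone number"): "3.126 (unknown eligibility)",
    ("rdd",  "soft refusal"): "2.10 (refusal)",
}

def assign_final_code(frame: str, outcome: str) -> str:
    return FINAL_CODE_BY_FRAME.get((frame, outcome), "unresolved: apply frame-specific rules")

print(assign_final_code("abs", "soft refusal on unverified matched phone number"))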
Substitutions
Any substitution of sampled cases, replacing an originally-sampled unit with another, must be reported.
The main issue with substitution is that it violates probability sampling, as the probability of selection for
the substitute will be unknown. First, whatever substitution rules were used must be documented.
Second, the number and nature of the substitutions must be reported. These should distinguish and
cover both between- and within-household substitutions. Third, all replaced cases must be accounted
for in the final disposition codes. For example, suppose a household refuses, no one is reached at an initial
substitute household, and an interview is completed at a second substitute household. The total
number of cases would increase by two, and the three cases would be listed as one refusal, one no-one-
at-residence, and one interview. In addition, these cases should be listed in separate reports on
substitutions.
Similarly, within-household substitution would have to report the dropped and added cases and
separately document procedures for substitutions and number of substitutions. We recommend
calculating response rates with and without substitutes to show the importance of substitution to your
study. Respondent selection procedures must be clearly defined and strictly followed. Any variation
from these protocols likely constitutes a substitution and should be documented.
Proxies
A proxy is an individual who reports on behalf of an originally sampled person. This person might be a
sampled person's household member or a non-member (e.g., a caregiver). Any use of proxies must be
reported.
First, rules on the use of proxies must be reported. Second, the nature and circumstances of proxies
must be recorded, and any data file should distinguish proxy cases from respondent interviews. Third,
complete and partial interviews must be sub-divided into respondents (1.1 or 1.2) or proxies (1.11 or
1.21; see Table 1) in the final disposition code. In the case of household informant surveys in which a) one person
reports on and for all household members and b) any responsible person in the household may be the
informant, this needs to be clearly documented, and the data file should indicate who the informant
was. In the final disposition codes and any rates calculated from these codes, researchers need to state
clearly that these are statistics for household informants. Rates based on household informants must be
explicitly and clearly distinguished from those based on a randomly chosen respondent or someone
fulfilling some special household status (e.g., head of household, chief shopper, etc.). When household
and respondent-level statistics are collected, final dispositions for both households and respondents
should be reported.
Complex designs
Complex surveys such as multi-wave longitudinal designs, surveys with multi-stage sampling, and
surveys that use a listing from a previous survey as a sample frame must report disposition codes and
outcome rates for each separate component and cumulatively. For example, a three-wave longitudinal
survey should report the disposition codes and related rates for the third wave (second reinterview) and
the cumulative dispositions and outcome rates across the three waves. Similarly, a survey such as the
National Survey of College Graduates (NSCG), which was based on a sample of respondents from the
American Community Survey (ACS), should report on both the outcomes from the NSCG field efforts and
incorporate results from the earlier ACS effort (i.e., accounting for nonresponse cases from both NSCG
and ACS).
Many other complex designs exist, such as samples with unequal probabilities of selection or designs
conducted in two or more stages, where non-respondents are subsampled in later stages. These
scenarios may require the calculation of weighted response rates. See the discussion in the "Some
Complex Designs" section for more details about these calculations.
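As a minimal, assumption-laden sketch of what such a weighted rate looks like, the fragment below replaces case counts with sums of base weights (for example, inverse probabilities of selection). The full treatment, including which rate definitions to weight and how, is in the "Some Complex Designs" discussion referenced above; the record layout, weights, and category coding here are hypothetical.

def weighted_rr1(cases):
    """Weighted analogue of a minimum response rate: weighted completes over
    weighted eligible plus unknown-eligibility cases.

    `cases` is an iterable of (category, weight) pairs, where category is the
    leading disposition digit (1, 2, 3, or 4) and weight is the case's base
    weight (e.g., the inverse of its selection probability). Hypothetical layout.
    """
    completes = sum(w for c, w in cases if c == 1)
    eligible_or_unknown = sum(w for c, w in cases if c in (1, 2, 3))
    return completes / eligible_or_unknown

# Two strata sampled at different rates, hence different base weights.
sample = [(1, 10.0), (2, 10.0), (3, 10.0), (1, 2.0), (1, 2.0), (4, 2.0)]
print(round(weighted_rr1(sample), 3))   # roughly 0.412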
Section 1: List Samples
This first section assumes a frame that lists specifically-named persons, with or without the ancillary
information necessary to collect data. Such people or list members could have associated physical
addresses, phone numbers (landline and/or mobile), and email addresses. It is possible to use
commercial vendors to match any contact information that may be missing or out-of-date. We assume
only that our frame is a list of individuals, and it would be possible to acquire or match the necessary
information to conduct data collection. One example of list samples is registry-based surveys or RBS.
RBS surveys include all surveys in which a random sample is drawn from units on a registration-based
list. Examples of RBS designs include sampling from the United States voter files (list of registered
voters) and market research among individuals subscribed to a particular service.
Importantly, this section assumes that once contact with the named respondent is made, some
screening would be needed to confirm that they are still eligible for inclusion. For a survey of registered
voters drawn from voting records, the eligibility rules could require that sampled voters still reside at
their indicated address, in the same state or community, and/or are still registered to vote. For this
reason, a failure to receive any reply to the survey would place them in the unknown eligibility category,
since it could not be confirmed that they meet these criteria. Similarly, various postal return codes that
failed to establish whether the person still lives at the mailed address would continue to leave eligibility
unknown.
When screening is required to confirm eligibility, care must be taken in determining whether a sampled
unit should be assigned an eligible nonrespondent or an unknown eligibility code. Cases for which the
respondent is contacted, but it is unknown whether they are eligible, usually occur because of a failure
to complete a needed screener. Even if this failure were the result of (for example) a “refusal,” a
breakoff, or the return of a blank questionnaire, it would be assigned to one of these eligible
nonresponse codes only if eligibility were otherwise confirmed or could be inferred; otherwise, it should be
assigned a code of “No screener completed” (unknown eligibility). If useful for operational reasons,
researchers could create sub-codes that delineate the reason for the non-completion of the screener.
In some surveys, however, screening may not be necessary; it may be possible to assume that all
persons on the list are eligible unless otherwise determined. In such situations, the concept of unknown
eligibility does not apply, and the dispositions identified here as unknown eligibility codes should instead
be classified as eligible nonrespondent codes. Two examples of scenarios in which this treatment could
be appropriate include:
1. A sample of company employees drawn from an employee list that is known to be complete,
accurate, and up-to-date.
2. The second phase of a two-phase survey, in which the sampling frame is a list of persons who
were rostered and confirmed to be eligible in the first phase. Generally, the first phase in two-
phase surveys should follow the standards described in Section 2 for address-based samples
(ABS) or Section 3 for phone (RDD) samples. The second phase should follow the standards
described here for list samples, with all sampled units presumed to be eligible unless
determined otherwise. Additional discussion of two-phase surveys is provided in Section 2 on
ABS surveys.
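The classification logic for list samples described above can be compressed into a short decision sketch: when no screening is required (or eligibility can otherwise be confirmed or inferred), non-responding cases are eligible nonrespondents; when a needed screener was never completed and eligibility cannot be inferred, the case is of unknown eligibility. The function and its arguments are an illustrative simplification, not an official algorithm.

from typing import Optional

def classify_nonrespondent(screening_required: bool,
                           eligible: Optional[bool]) -> str:
    """Broad category for a sampled list member who did not complete the survey.

    `eligible` is True/False when eligibility was confirmed or ruled out (for
    example, by a completed screener or other frame information) and None when
    it could not be determined.
    """
    if not screening_required:
        # All list members presumed eligible unless determined otherwise.
        return "not eligible (4.0)" if eligible is False else "eligible, no interview (2.0)"
    if eligible is None:
        return "unknown eligibility, e.g., no screener completed (3.0)"
    return "eligible, no interview (2.0)" if eligible else "not eligible (4.0)"

# Registered-voter study that must confirm the person still lives at the sampled
# address: no reply of any kind leaves eligibility unknown.
print(classify_nonrespondent(screening_required=True, eligible=None))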
In all cases, it is important that sampling and eligibility criteria and assumptions be decided upon
explicitly and precisely when the survey is designed. In these and other instances, the rules of eligibility
and the assumptions about eligibility will vary with the sample design and study objectives. The same
return codes may properly be assigned to different final dispositions in two studies based on different
eligibility assumptions, as in the examples above. Researchers must clearly describe their sample design
and study objectives and explicitly state and justify their assumptions about the eligibility of cases in
their sample to properly inform others how the case dispositions are defined.
Throughout this section, Standard Definitions explicitly uses the language employed by the USPS to
account for all USPS dispositions in which mail is not delivered to an addressee. Researchers operating in
other countries should treat these classifications as instructive and naturally will have to use their own
postal service codes. Non-USPS codes should follow the Standard Definitions’ logic and intent, as
illustrated by the USPS codes.
Table of disposition codes
Tables 1.1, 1.2, and 1.3 provide eligible nonresponse, unknown eligibility, and ineligible codes
(respectively) applicable when sampling from a list of individuals or households. Refer to the
Introduction to this report for a discussion of general principles related to the identification of (fully or
partially) completed surveys, which apply regardless of frame. Note that in all the subsequent tables, a
single asterisk identifies a new disposition code; a disposition changed from the prior version of the
AAPOR Standard Definitions is indicated by two asterisks.
Table 1.1. Valid Eligible, No Interview (Non-response) Dispositions for List Samples
Description
Value
Notes & Examples
Eligible, Non-Interview
2.0
To be considered in this category, a case must first have been
determined to be eligible.
Example: An individual who states, ‘I do not want to participate’ before
confirming that you have reached a household and/or other eligibility
criteria should not be classified as an eligible refusal (2.10). See the
discussion about “Unknown Eligibility”.
Refusal and break-off
2.10
Refusal
2.11
Household-level (or proxy) Refusal
2.111
A member of the household of the named sample member has
declined to interview for the entire household.
Another individual from the named entity explicitly refuses to allow
participation. No screening or confirmed eligibility is required
Parent or Guardian Explicit Refusal
2.1111*
The parent or guardian of the named minor respondent refuses to
allow participation
Known Respondent Refusal
2.112
The named respondent or entity directly refuses to participate
Logged on to survey, did not
complete any items
2.1121
Web-only
Email read receipt confirmation,
refusal
2.1122
Other Implicit Refusal
2.113
Blank questionnaire returned
(mailed survey)
2.1131*
No additional screening required
Named respondent set appointment
but did not keep it (phone or in-
person)
2.1132*
No additional screening required
Opted out of communications (SMS
or Email)
2.1133*
Break-off
2.12
The named respondent began the interview, web survey, or
questionnaire but opted to terminate it (or returned it with too many
missing items) before completing enough of it to be considered a
partial complete (see Introduction for guidance on classification of
partial interviews).
Non-contact
2.20
Named respondent never
available during field period
2.21**
Must confirm named respondent has been reached at address or
phone number.
If email contact, email is confirmed eligible and attached to named
respondent
Phone answering device (Phone)
2.22
No contact has been made with a human, but a phone answering
device (e.g., voicemail or answering machine) is reached that includes a
message confirming it is the number for the named sample member.
This code is only used if all sample members are eligible (i.e., no
additional screening is necessary).
Example: “You have reached John Smith. Please leave a message”.
Answering machine - no message
left (phone)
2.221
Answering machine - message left
(phone)
2.222
The interviewer left a message, alerting the household that it was
sampled for a survey, that an interviewer will call back, or with
instructions on how a respondent could call back.
Other non-contact
2.23
No additional screening is necessary
Quota filled (in released replicate4)
2.231*
No one reached at housing unit (in-
person)
2.24
No screening required for eligibility
Inability to gain access to sampled
housing unit (in-person)
2.241*
Completed questionnaire, but not
returned during field period
2.27
Other
2.3
Deceased respondent
2.31
Named respondent is deceased
Must be able to determine that named respondent was eligible on the
survey status date and died subsequently
Physically or mentally
unable/incompetent 
2.32
The named respondent’s physical and/or mental status makes them
unable to do an interview. This includes both permanent conditions
(e.g., senility) and temporary conditions (e.g., pneumonia) that
prevailed whenever attempts were made to conduct an interview.
With a temporary condition, the respondent could be interviewed if re-
contacted later in the field period
4 A replicate may be defined as a subsample from the same population as the overall sample, designed under the same conditions.
Language or Technical Barrier
2.33
Household-level language problem
2.331
No one in the household speaks a language in which the interview is
offered (no screening required)
Respondent language problem
2.332
The named respondent does not speak a language in which the
interview is offered (no screening or respondent eligibility confirmed).
No interviewer available for needed
language/Wrong language
questionnaire
2.333
The language spoken in the household or by the respondent is offered.
However, an interviewer with appropriate language skills cannot be
assigned to the household/respondent at the time of contact (no
screening or respondent eligibility confirmed).
Inadequate audio quality or literacy
issues 
2.34
No screener or eligibility confirmed
Location/Activity not allowing
interview
2.35
Example: cell phone reached while person is driving (no screening
required, or eligibility confirmed)
Someone other than respondent
completes questionnaire or
interview
2.36
Someone other than respondent completes questionnaire or
interview: full questionnaire completed
2.361
Someone other than respondent completes questionnaire or
interview: partial questionnaire completed
2.362
Wrong number
2.37
Eligibility of named person confirmed but the number dialed is
incorrect for the named person
Miscellaneous, non-interview
2.90
Miscellaneous (eligibility confirmed)
Examples: vows of silence, lost records, faked cases invalidated later on
Table 1.2 Valid Unknown Eligibility, Non-Interview Dispositions for List Samples
Description
Value
Notes & Examples
Unknown Eligibility, Non-Interview
3.0
No Screener Completed, Unknown
3.20
No screener completed, unknown if sampled person is eligible
respondent
Refusals where screening is required
Undeliverable or unanswered where screening is required
Unreachable/screener not
completed
3.21
SEE APPENDIX FOR LIST OF POSSIBLE USPS CODES
USPS Category: Refused by
addressee (Mailed survey)
3.211
USPS Category: Refused by Addressee [REF] (screener required)
USPS Category: Return to Sender
(Mailed survey)
3.212
USPS category: Returned to Sender due to Various USPS Violations by
Addressee (screener required)
USPS Category: Cannot be delivered
(Mailed survey)
3.213
USPS Category: Cannot be Delivered [IA] (screener required)
USPS Category: Cannot be delivered
(Mailed survey)
3.214
Mail returned with Forwarding Information
NOTE: This can only be a final disposition for listed sample if a screener
is required
Unreachable (Phone)
3.215**
Unreachable, unknown if connected to named sampled
individual/entity/household (Screener required)
Always busy (Phone)
3.2151**
Screener required
No answer (Phone)
3.2152**
Screener required
Phone answering device (Phone)
3.2153**
Phone answering device (unknown if named respondent & screener
required)
The phone number connected to an answering device (e.g., voicemail
or answering machine), but the automated message did not
conclusively indicate whether the number is for the specifically named
individual or household.
Telecommunication technological
barriers (Phone)
3.2154**
Telecommunication technological barriers, e.g., call-blocking (unknown
if named respondent & screener required)
Call-screening, call-blocking, or other telecommunication technologies
that create barriers to getting through to a number
Technical phone problems (Phone)
3.2155**
Technical phone problems (unknown if named respondent & screener
required)
Examples: phone circuit overloads, bad phone lines, phone company
equipment switching problems, phone out of range (AAPOR Cell Phone
Task Force, 2008 & 2010b; Callegaro et al., 2007).
Ambiguous operator’s message
(Phone)
3.2156**
Ambiguous operator’s message (unknown if named respondent &
screener required)
An ambiguous operator’s message does not make clear whether the
number is associated with a household. This problem is more common
with cell phone numbers since there are both a wide variety of
company-specific codes used, and these codes are often unclear
(AAPOR Cell Phone Task Force, 2010b).
Non-working/ disconnected number
3.216*
Includes Fax/Data line (Unknown if named respondent & screener
required)
Interviewer unable to reach housing
unit (In-person)
3.217*
Includes situations where it is unsafe for an interviewer to attempt to
reach a housing unit (screener required)
Interviewer unable to locate housing
unit/address (In-person)
3.218*
Screener required
Invitation returned (Email or SMS
survey)
3.219*
Email/SMS invitation returned undelivered (screener required)
Message blocked by carrier (SMS
survey)
3.2191*
Carrier blocked message from being delivered
Message failed to send (SMS survey)
3.2192*
Screener required
Device unreachable (SMS)
3.2193*
Screener required
Device not supported (SMS)
3.2194*
Device does not support SMS (screener required)
Device powered off (SMS)
3.2195*
Screener required
Unknown error (SMS)
3.2196*
Screener required
Nothing ever returned
3.22
Nothing ever returned (screener required)
Not attempted or worked
3.23
No invitation sent
Questionnaire never mailed
No contact attempt made
Address not visited
Note, all cases in unassigned replicates (i.e., replicates in which no
contact has been attempted for any case in the replicate) should be
considered ineligible (Code 4), but once interviewers attempt to
contact any number in a given replicate, all cases in the replicate have
to be individually accounted for.
Other
3.90
This should only be used for highly unusual cases in which the eligibility
of the number is undetermined and does not clearly fit into one of the
above designations.
Example: High levels of item nonresponse in the screening interview
prevents eligibility determination.
Returned from an unsampled email
address (e-mail)
3.91
Screener required
Table 1.3. Not Eligible 4.0 for List Samples
Description
Value
Notes & Examples
Not Eligible
4.0
Selected Respondent Screened Out
of Sample
4.10
The named sample entity is reached but is determined to be ineligible
based on screening criteria.
Deceased
4.11*
Named respondent is deceased prior to survey start (status date)
Quota Filled
4.80
Ineligible because the quota for the case’s subgroup was filled and that restriction was
defined before the case’s replicate was released (see Section 1.1)
Duplicate listing
4.81
Other
4.90
*New disposition code
**Updated disposition code
1.1 Mail Surveys of Specifically Named Persons/Entities
This section describes surveys that recruit respondents via mail in which the sampling unit is a
specifically named person, household, or other entity who is sent a self-administered questionnaire
(SAQ). Surveys using mail to contact participants vary greatly in the populations they cover and the
nature and quality of the sample frames from which their samples are drawn. As described in the frame-
level introduction, the named entity is the appropriate respondent. An example might be a sample of
registered voters residing in a particular community drawn from voting records for which mailing
addresses are available on, or can be appended to, the frame. In other words, the assumption is that
the target population is synonymous with the sampling frame and thus is defined as those persons on
the list with a valid mailing address. Different assumptions need to be made, and different rates apply
in the case of mixed-mode (e.g., email, phone, and mail push-to-web) designs. Web-push surveys are
covered in Section 1.5.
For mailed surveys sent to named sample units and other modes of contact discussed below, it is
important to remember that eligibility for the survey is linked to the sampled (listed) individual or entity
and not the contact information provided. For example, consider a survey of currently enrolled college
students at a particular university drawn from the registrar’s records or a study of professional
organization members pulled from the organizational directory. The records may include students who
have graduated, dropped out, transferred, or are no longer affiliated with the organization. Information
indicating that the sampled individual does not live at the provided mailing address does not determine
the sampled person’s final eligibility, as the address on the list could be incorrect or outdated. The
individual may have moved or changed how they receive their mail but could still be an enrolled, eligible
student or association member. Similarly, a failure to receive a reply to the survey invitation would place
them in the unknown eligibility category since it could not be confirmed whether they were still active
students/members.
Conversely, if a listed, sampled individual is reachable at a particular address, this does not necessarily
indicate the person’s eligibility. Often, screening is required to determine eligibility. Depending on the
quality of the list, different assumptions can be made about eligibility. For example, if it is known that
the list is accurate and current, it can be assumed that all those who receive no response are eligible
sample persons who must be treated as non-respondents. As with the other modes of data collection
described in this document, appropriate assumptions about eligibility may depend upon details of the
sample design and the state of the sampling frame or list. Researchers thus must clearly describe their
sample design and explicitly state and justify their assumptions about the eligibility of cases in the
sample to properly inform others of how the case dispositions are defined and applied. AAPOR has
prepared a document describing how to estimate the status of cases with unknown eligibility, known as
the parameter e (https://www.aapor.org/AAPOR_Main/media/MainSiteFiles/ERATE09.pdf), with an
updated version planned for 2023.
As noted above, the discussion of completed interviews for mail surveys is similar to that for other
modes, so one may refer to Table 1 in the Introduction section for the list of dispositions.
Eligible, no returned questionnaire (non-response)
Eligible cases for which no interview is obtained consist of four types of non-response: a) refusals and
break-offs (2.1); b) non-contacts (2.2); c) others (2.3); and d) miscellaneous (2.9), as summarized in Table
1.1 in Section 1.
Refusals and break-offs consist of cases in which some contact has been made with the specifically
named person or with the housing/business unit in which this person is/was known to reside/work, and
the person or another responsible household/business member has declined to have the questionnaire
completed and returned (2.11); or a questionnaire is returned with too few items completed to be
considered a partial complete, with some notification that the respondent refuses to complete it further
(2.12; see the Introduction section on what constitutes a break-off vs. a partial questionnaire).5
Further useful distinctions include a) who refused, i.e., the named person (2.112) vs. another person
(2.1111); b) the point within the questionnaire of refusal/termination; and c) the reason for
refusal/break-off. In mail surveys, entirely blank questionnaires are sometimes mailed back in the return
envelope without explaining why the questionnaire was returned blank. Unless there is good reason to
do otherwise, this should be treated as an “implicit refusal” (2.113). In some instances, when a
noncontingent cash incentive was mailed to the respondent, the incentive was mailed back along with
the blank questionnaire. Researchers may want to create a unique disposition code to differentiate
these from the outcome in which no incentive was returned.
5 “Responsible household members” should be clearly defined. For example, the Current Population Survey considers any household member 14 years of age or older as qualifying to be a household informant.
Known non-contacts in mail surveys of specifically named persons include cases in which researchers
receive notification that a respondent was unavailable to complete the questionnaire during the field
period (2.21).6 This would include instances where the sampled unit is an entity other than a person
(e.g., a named household or a business), in which the sampled unit itself is confirmed eligible but no
person is available to respond on behalf of the sampled unit (e.g., no responsible household member
available).7 There also may be instances in which the questionnaire was completed and mailed back too
late after the field period has ended to be eligible for inclusion (2.27), thus making this a “non-
interview.”
A related situation occurs in surveys that employ quotas when returned questionnaires are not treated
as part of the final dataset because the quota, or target number of completes, for a specific subgroup
has already been filled (2.231). The guiding principle when applying quotas is that eligibility criteria
must be established when a unit is released for data collection and should not change based on how
long it takes a unit to respond. Accordingly, eligible units excluded from the final dataset solely because of
a late response (whether “late” means after the end of the field period or after a quota was filled) are
properly coded as eligible nonrespondents, not ineligible cases.
Code 2.231 should be used when a unit meets the survey’s eligibility criteria and would have been
included in the final dataset had it responded before the quota was met. Applying a
quota this way is akin to ending the field period early for subgroups whose quota has been filled. This
differs from a situation in which a sample replicate is released to only accept responses from particular
subgroups to meet quotas for those subgroups. In such situations, respondents from that replicate who
are outside of the target subgroup(s) for the replicate would be assigned code 4.1 (ineligible selected
respondent screened out of sample) because they do not meet the eligibility criteria for the replicate for
which they were sampled. Consider the scenario where a survey sets separate quotas for Black and
Hispanic respondents. If the survey used only one sample release and stopped accepting responses from
Hispanic respondents after their quota was met, any Hispanic responses after this point would be
assigned code 2.231 (an eligible, non-interview code) because they were eligible at the time of sample
release. In contrast, if the survey met the Hispanic quota in the first sample release and then released a
second replicate for which only Black respondents were eligible (to meet their quota), Hispanic
respondents to the second replicate would be assigned code 4.1 (ineligible) while Hispanic respondents
in the first replicate who completed after the quota was met would be set to 2.231. In all cases, what the
quotas are and how they are to be filled must be clearly defined, and whether survey responses
received after quotas have been met are accepted and included in the final data set should be clarified
in survey documentation.
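To make the replicate-based quota rule concrete, the following Python sketch illustrates the decision just described. It is illustrative only: the function name, its inputs, and the record layout are hypothetical, and the codes are drawn from Tables 1.1 and 1.3.

    # Illustrative sketch (not part of the standard): assigning a disposition to a
    # returned questionnaire that is affected by quotas. Inputs are hypothetical.
    def quota_disposition(case_subgroup, replicate_eligible_subgroups, quota_already_filled):
        # case_subgroup: subgroup of the responding unit (e.g., "Hispanic").
        # replicate_eligible_subgroups: subgroups eligible in the replicate from which
        #     the case was sampled, fixed at the time of sample release.
        # quota_already_filled: True if the subgroup's quota was met before the return arrived.
        if case_subgroup not in replicate_eligible_subgroups:
            return "4.10"   # ineligible: outside the subgroups for which the replicate was released
        if quota_already_filled:
            return "2.231"  # eligible non-interview: quota filled in a released replicate
        return "1.x"        # not affected by the quota; assign a complete/partial code from Table 1

    # Hispanic respondent in a replicate released for all subgroups, returning after the quota was met:
    print(quota_disposition("Hispanic", {"Black", "Hispanic"}, quota_already_filled=True))  # 2.231
    # Hispanic respondent in a later replicate released only for Black respondents:
    print(quota_disposition("Hispanic", {"Black"}, quota_already_filled=True))              # 4.10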
Other cases (2.3) represent instances in which the respondent is eligible and does not refuse the
interview, but no interview is obtainable because of: a) deaths, including cases in which the USPS
identifies the addressee to be “Deceased” (2.31); b) the respondent is physically or mentally unable to
do the questionnaire (2.32); c) language barriers (2.33); d) literacy problems (2.34); e) location does not
permit participation (2.35); or f) completion by the “wrong” respondent (2.36).
6 Further distinctions could separate cases involving temporary absences (e.g., family away on vacation for two weeks) from other reasons for non-contact.
7 “Responsible household members” should be clearly defined. For example, the Current Population Survey considers any household member 14 years of age or older as qualifying to be a household informant.
As noted above, whether death makes a case a non-respondent or an ineligible respondent depends on
fieldwork timing. If a person were alive and selected as the respondent on the survey’s status date but died
before a questionnaire was completed, the case would be classified as a non-response due to death
(2.31).
Eligible respondents who are physically or mentally unable to complete the questionnaire (2.32) would
include both permanent conditions (e.g., senility, blindness, paralysis) and temporary conditions (e.g.,
pneumonia, drunkenness) that prevailed throughout the field period. With a temporary condition, it is
possible that the respondent could/would complete the questionnaire if re-contacted later in the field
period or if the field period were later extended.
Language barriers (2.33) include cases in which the respondent does not read a language in which the
questionnaire is printed (2.331).8 It would also include instances in which a questionnaire printed in a
language the respondent can read is never sent to the respondent (2.332). In contrast, literacy problems
(2.34) would apply to cases in which the specifically named person could speak the language in which
the questionnaire was printed but could not read it well enough to comprehend the meaning of the
questions.
When the researcher learns that someone other than the named entity that was sampled (or a qualified
proxy, if proxy responses are permitted) completed the questionnaire, the unit should be classified as an
eligible nonresponse (2.36). In this scenario, the researcher could choose to re-approach the sampled
unit to gain cooperation from the correct person. In this case, what happens during that subsequent
effort would determine the final outcome.
In mail surveys of named persons, particularly ones in which mail is the only contact mode, this
subset of dispositions (Other, the 2.3 series) would typically occur only if the researchers received
unsolicited information about the respondent that allowed for such classification of the final disposition.
The miscellaneous designation (2.9) would include cases involving some combination of other reasons
(2.3) or special circumstances (e.g., lost records or falsified cases invalidated upon review).
Unknown eligibility, no returned questionnaire
As shown in Tables 1.2 and 1.3 above, cases of unknown eligibility and no interview (3.0) include
situations in which it is unknown whether the selected list member is eligible based on screening
criteria, as nothing is ever returned for various reasons (3.20). In practice, it is more common to assume
eligibility in list samples than for general population samples, as reflected by higher assumed values of e.
Researchers using a specialized carrier like FedEx or UPS may have additional information on delivery
status that may be considered in assigning codes appropriately, i.e., it may be clearer if a household
resident refuses a delivery than with standard mail.
Situations in which the mailing reached the address, but it is unknown whether the specifically named
person is present at the address include instances that the U.S. Postal Service (USPS) labels “refused
by addressee” (3.211). There are many circumstances, denoted by various USPS codes, in which a
8 Language cases can be counted as not eligible (4.1) if the survey is defined as only covering those who read certain languages.
For example, until 2006 the General Social Survey defined its target population as English-speaking adults living in households in
the United States (Davis, Smith, and Marsden, 2007). Whenever language problems are treated as part of 4.1 instead of 2.33,
this must be explicitly stated.
mailing cannot be delivered to the address and/or the named person. Therefore, the person’s eligibility
cannot be confirmed, as listed in Table 1.2 above. As discussed previously, for listed samples of named
individuals, these cases should be coded as Unknown Eligibility unless there is reason to believe the list
frame is accurate and no additional screening is required. The remaining set of 3.21 codes represents
those cases that cannot be delivered for various reasons. A more comprehensive set of USPS returned
mail designations is provided in the Appendix.
Various undeliverable codes denote some problem with the address preventing the USPS (or other
carriers) from delivering the mailing. These generally fall under code 3.213 and include situations
where an “illegible” or “insufficient” address is provided that cannot be read by the USPS; the mailing is
deemed to contain unmailable contents; there is an absence of a proper mail receptacle at the address
for the USPS to leave mail; a postal box is closed, e.g. for nonpayment of rent; there is a dispute over
which party has the right to delivery; there is a USPS suspension of mail to the address; or there is an
inadequate address for a commercial mail receiving agency. These include instances where the USPS
tries, but cannot find, the “known” addressee at the designated address.
There also are cases in which the USPS does not attempt delivery because of a determination that no
such address exists (3.213). This subcategory may be due to there being “no such number,” “no such
postal office” in a state, “no such street,” or a vacant address. There are also cases in which the USPS
will not deliver mail to certain addressees because they have committed USPS violations (3.212); the
USPS does not deliver these mailings and returns them to the sender as undeliverable due to “USPS
violations by addressee.”
A separate group of dispositions where the researcher is left not knowing if the addressee is eligible
occurs when some information indicates that the named entity is not physically present at the address
to which the survey invitation was mailed (3.214). Final dispositions in this group should be classified as
unknown eligibility unless the survey’s eligibility rules require residence and/or physical presence at the
specified address. In this case, some or all of them would be classified as ineligible. These include the
USPS categories “temporarily away, holding period expired,” which indicates that the respondent still
resides at the address but is temporarily away with no current holding order, and “moved, left no
address,” which indicates that the respondent no longer resides at the address but did not file a change-
of-address order. In other cases, the mail is returned undelivered but has forwarding information; the
mail may be either unopened or opened. Ultimately, whether these dispositions are temporary or final
depends on the researcher’s choice to re-send the mailing to the corrected address.
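For operational processing of returned mail, the groupings above can be captured in a simple lookup, as in the Python sketch below. The reason strings are paraphrases of the categories discussed in this section, not official USPS codes, and the default shown is only one reasonable choice; the Appendix gives the full list of USPS return designations.

    # Illustrative mapping (paraphrased reasons, not official USPS codes) from a
    # returned-mail reason to the unknown-eligibility dispositions in Table 1.2.
    USPS_RETURN_TO_DISPOSITION = {
        "refused by addressee": "3.211",
        "returned for usps violations by addressee": "3.212",
        "illegible or insufficient address": "3.213",
        "no mail receptacle": "3.213",
        "no such number, street, or post office": "3.213",
        "vacant": "3.213",
        "temporarily away, holding period expired": "3.214",
        "moved, left no address": "3.214",
        "returned with forwarding information": "3.214",
    }

    def classify_usps_return(reason: str) -> str:
        # Fall back to the generic "no screener completed" code when the reason is unrecognized.
        return USPS_RETURN_TO_DISPOSITION.get(reason.lower(), "3.20")

    print(classify_usps_return("Vacant"))  # 3.213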
Commonly, the mail is delivered, but eligibility cannot be confirmed because the required screener was
not completed (3.22).
Final unknown eligibility categories include cases that are not attempted or worked for whatever reason
(3.23); or other miscellaneous types of nonresponse with unknown eligibility (3.9).
Not eligible
Table 1.3 above summarizes ineligible cases for mail surveys of list-based samples. In mail surveys of
specifically named persons that require the addressee to complete a screener to determine eligibility,
researchers may have sampled cases that later are determined not to be eligible. For example, there
may be cases in the sampling frame that are no longer registered as university students or whose
association membership has lapsed. Category 4.10 is thus reserved for cases screened out using
information obtained in the questionnaire or other means.
As noted previously, there may be instances in which living at a specific address or within a small
geographic area is what “qualifies” a person for eligibility. If that named person no longer lives at the
address for which he or she was sampled, it may make the person ineligible and s/he is out of the
sample (4.1). However, this is study-specific and often does not automatically make a sampled entity
“ineligible.”
Depending on fieldwork timing, death (as indicated by the USPS “deceased” code or other information
obtained by the researcher) may make a case either an ineligible respondent or an eligible
nonrespondent. Surveys have to define a date on which eligibility status is determined. This would
usually be either the first day of the field period or the first day a particular case was mailed the
questionnaire. If it can be determined that the respondent died before the status date, the case would
be classified as ineligible due to death (4.11). Otherwise, the case should be classified as a nonresponse
due to death (2.31).
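A minimal Python sketch of this status-date rule follows; the function name and example dates are hypothetical, and it assumes both the death date and the survey's status date can be established.

    from datetime import date

    # Illustrative sketch of the status-date rule for deaths.
    def death_disposition(death_date: date, status_date: date) -> str:
        if death_date < status_date:
            return "4.11"  # not eligible: deceased before the status date
        return "2.31"      # eligible non-interview: died on or after the status date

    print(death_disposition(date(2023, 3, 1), status_date=date(2023, 4, 1)))  # 4.11
    print(death_disposition(date(2023, 5, 1), status_date=date(2023, 4, 1)))  # 2.31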
In mail surveys that employ a quota, there will be cases in which returned questionnaires are not
treated as part of the final dataset because the quota for their subgroup of respondents has already
been filled (e.g., responses from women when a gender quota is used and the female target has already
been met) (4.80). What the quotas are and how they are to be filled must be clearly defined, as
discussed above. The key distinction between being an eligible non-interview and being ineligible is
determined by the criteria associated with the case’s replicate. Cases should only be coded as ineligible
because their quota was filled if they come from sample replicates where that distinction was made
before sample release.
Another type of “ineligibility” occurs in mail surveys, especially those that use a large “mailing list” as
the sampling frame. This happens when duplicate listings are sampled, that is, when the same individual
inadvertently appears more than once in the sampling frame and the duplication is recognized only after
the respondent has returned the mailings. For example, when a respondent mails back a completed
questionnaire and a blank one with a note that s/he received two questionnaires, all but one of the
mailings should be treated as not eligible due to duplicate listings (4.81).
Finally, additional reasons for non-eligibility can be coded under Other (4.90).
In all cases of final disposition codes involving ineligibility, definite evidence of the status is needed.
When in doubt, a case should be presumed to be eligible or possibly eligible rather than ineligible unless
there is unambiguous evidence leading to the latter classification.
1.2 Email Surveys of Lists of Specifically Named Persons
Like surveys using postal mail to contact sampled individuals or entities, surveys using email to contact
participants also vary greatly in the populations they cover and the nature and quality of the sample
frames from which their samples are drawn. In this case, we assume our frame is a list of individuals
with emails attached. Many types of Internet surveys do not involve probability sampling, however.
These include opt-in or access panels (see AAPOR, 2010a), unrestricted self-selected surveys (for a
review, see Couper, 2000), or online surveys or access panels (see AAPOR 2023). The AAPOR Task Force
report on opt-in or access panels (2010a) provides a detailed discussion of the inferential issues related
to non-probability panels and specifically recommends that researchers avoid non-probability online
panels when planning to estimate population values accurately. For non-probability samples, response
rate calculations make little sense, given the broader inferential concerns and the inability to determine
a denominator (cf. Callegaro and DiSogra, 2008). We do not cover non-probability sources in our
response rate calculations; rather, we cover only probability-based designs. The 2023 AAPOR Task Force Report on
Data Quality from Online Samples (AAPOR 2023) discusses alternative metrics for evaluating data quality
and the risk of bias from such sources. For email surveys of specifically named persons, in particular
sampling frames of individuals with emails for all members, one can establish parallels with the
discussion of mail surveys of specifically named persons from a list provided in section 1.1.
In other words, the assumption is that the target population is synonymous with the sampling frame and
thus is defined as those on the list with Internet access and a working email address. Different
assumptions need to be made, and different rates apply in the case of mixed-mode (e.g., mail and email)
designs. For instance, in the case of mailed invitations to a web survey, such as when mail addresses but
not email addresses are available, a hybrid combination of the categories in the previous tables may
apply. Web-push surveys are covered in Section 1.5.
Tables 1.1 through 1.3 address surveys of specifically named persons. In this case, it is assumed that the
request or invitation to participate in the survey is sent electronically. This frame also assumes that only
the named person is the appropriate (i.e., eligible) respondent and that some confirmation is needed
that the named respondent is reached at the sampled email address and/or otherwise still eligible for
inclusion.
As in the case of mail surveys, an email invitation may be returned as undeliverable, not because the
sampled person is no longer eligible, but because the email address that appears on the list is incorrect
or outdated. Following the example provided in 1.1, consider an email list of university students or
professional association members. Some persons on the list may no longer be registered as students or
members of the association but still have other valid email addresses unknown to the researcher.
Others may still be students or members in good standing, but they have changed email addresses.
Compared to the accuracy of a regular mail address and the effect that accuracy has on delivery to the
intended recipient, email addresses are much less tolerant of errors. Whereas a postal employee often
can and will “make sense” of inaccuracies in a standard mailing address, there currently is no process on
the Internet that strives to match email addresses with spelling errors to the most likely recipient. Email
may experience a greater degree of “churn” or changes in address than regular mail; hence, one cannot
simply assume that such cases are ineligible. Thus, an undelivered email message essentially would
place such cases in the unknown eligibility category. Of course, such persons’ eligibility could be verified
by other means.
Furthermore, unlike regular mail, email addresses tend to be associated with an individual rather than a
household or business. Therefore, if the email is not read by the targeted person (for reasons of change
of employment, death, illness, etc.), it is less likely to be opened and read by another person than is a
regularly mailed questionnaire sent to the same sampled respondent. This means the researcher may be
less likely to learn of email messages sent to someone no longer at that address. Similarly, email
messages may not be read or returned for technical reasons. Email return receipt, a service in which the
sender is provided proof of delivery, may be unreliable depending on the domain, so surveys conducted
over the Internet (as opposed to an Intranet) are likely to include email addresses for which the delivery
status is unknown. In addition, email may be successfully delivered to the email box but never seen by
the addressee because of spam filters, inboxes that are too full, or other technical reasons.
Depending on the quality of the list, different assumptions can be made about eligibility. For example, if
it is known that the list is both accurate and current, it can be assumed that all those from whom one
receives no response are eligible sample persons who therefore must be treated as nonrespondents. As
with the other modes of data collection described in this document, appropriate assumptions about
eligibility may depend upon details of the sample design and the state of the sampling frame or list.
Email surveys provide imperfect information about their delivery and receipt, similar to physical mail.
Once a sampled person reads the email and clicks on the URL to start the survey, the researcher may
know much more about the later stages of the questionnaire completion process than in traditional mail
surveys. Such information may vary depending on the particular design of the email survey. For
example, surveys that use a paging design, breaking the survey into groups of items that are submitted
in turn to the Web server, can identify the point at which a respondent decided to terminate the survey,
and break-offs can be identified in similar ways to interviewer-administered surveys. In addition, if a
respondent submits the questionnaire to the Web server, even without answering all questions, the system can
capture the incomplete information.
In summary, break-offs can be identified by the item or point in the questionnaire at which the survey
instrument is terminated. In contrast, partials are identified by the number or proportion of questions
answered. Rules similar to those used in mail surveys to distinguish among complete interviews, partials,
and break-offs can be used for push-to-web and other Internet surveys.
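As one illustration of such rules, the short Python sketch below classifies a return by the proportion of applicable items answered. The 50% and 80% cut points are examples only, not prescribed values; researchers must define, justify, and report their own thresholds, as discussed in the Introduction.

    # Illustrative thresholds only; researchers define and disclose their own.
    def classify_return(items_answered: int, items_applicable: int) -> str:
        proportion = items_answered / items_applicable
        if proportion < 0.50:
            return "break-off (2.12)"
        if proportion < 0.80:
            return "partial interview"
        return "complete interview"

    print(classify_return(12, 40))  # break-off (2.12)
    print(classify_return(28, 40))  # partial interview
    print(classify_return(38, 40))  # complete interview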
Again, clear descriptions of the decisions made and justification for the classification used are needed
for others to understand the outcome of the email or Web data collection effort.
As noted above, the discussion of completed interviews for email surveys is similar to that for other
modes. Therefore, one may refer to Table 1 in the introduction section for the list of completed
interview dispositions.
Eligible, No Returned Questionnaire (Non-response)
Eligible cases for which no interview is obtained consist of the same four types of non-response
discussed in Section 1.1 above: a) refusals and break-offs (2.10); b) non-contacts (2.20); c) others (2.30);
and d) miscellaneous (2.90) and should be coded similarly; see Tables 1.1 through 1.3.
Refusals and break-offs consist of cases in which some contact has been made with the emailed person,
and the sampled person or email recipient (e.g., in cases where another household member or
parent/guardian is contacted) declined to complete the questionnaire or otherwise indicated a refusal
(2.11), or a questionnaire is only partially completed with some notification that the respondent refuses
to complete it further (2.12).
Eligible non-contacts in web surveys of specifically named persons include cases where researchers have
received notification that a respondent was unavailable to complete the questionnaire during the field
period (2.21).9
There also may be instances in which the questionnaire was completed and submitted
9 Further distinctions could separate cases involving temporary absences (e.g., family away on vacation for two weeks) from other reasons for non-contact.
too late after the field period has ended to be eligible for inclusion (2.27), thus making this a “non-
interview.”
As with mailed surveys of specifically named people, other cases (2.30) represent instances in which the
respondent is eligible and does not refuse the interview, but no interview is obtainable because of: a)
mortality (2.31); b) the respondent is physically or mentally unable to do the questionnaire (2.32); c)
language barriers (2.33); d) literacy problems (2.34); or e) the incorrect person responding (2.36).
In email surveys of specifically named persons, particularly ones in which email is the only contact mode,
the subset of dispositions (particularly noncontact, 2.20, and Other, 2.30) would occur only if the
researchers received unsolicited information about the respondent that allowed for such classification
of the final disposition or the list was known to be accurate and include only eligible respondents.
However, in most instances, one would assume that no information would be returned, leading to the
case being classified as an “unknown eligibility” disposition.
The miscellaneous designation (2.90) is uncommon and would include cases involving some combination
of other reasons or special circumstances (e.g., lost records or faked cases invalidated later on).
Unknown Eligibility, No Questionnaire Returned
Cases of unknown eligibility and no completed questionnaire for email surveys (3.0) primarily include
situations in which the invitation or request was not delivered for a variety of reasons (3.20) or other
unknown eligibility situations (3.90).
Whether and how information comes back to the researcher about an email that is not delivered varies
across different email systems and servers. Due to such wide variations and rapid changes in email
technology, a detailed breakdown of codes to parallel the USPS categories in Table 1.2 would not be
reliable. For this reason, the subcategories of unknown eligibles (3.0) are left deliberately broad.
Depending on the particular circumstances of their study, some researchers may have more information
about what happened to the outgoing email message. In such cases, providing more detailed
dispositions under the 3.0 category umbrella is appropriate.
Cases in which the email invitation generates a response that indicates the invitation was returned
generically as undelivered and a screener is required are classified under 3.219.10
Cases that are not
attempted or worked may be classified under 3.23. Finally, category 3.90 is reserved for other
miscellaneous types of nonresponse with unknown eligibility.
Not Eligible
Not eligible cases for web surveys of specifically named persons contacted using email include: a) the
named person is found to be ineligible due to screening information returned to the researchers and is
thus out-of-sample (4.10); b) the respondent is found to be deceased before the start of data collection
(4.11); c) situations in which quotas have been filled (4.80); d) duplicate listings (4.81); and e) situations
10 More detailed automated returns may include enough information to further evaluate a different disposition. For example, a
researcher may be using a list of company emails to conduct an employee survey. An IT-generated auto-reply may note that a
given email is no longer valid; the individual is no longer an employee. In this situation, the case may be coded as ineligible
(4.90).
where automated reply messages provide sufficient information to classify the sampled person as
ineligible (4.90). See Section 1.1 for detailed descriptions of these scenarios and their relevant codes.
As with other modes, definitive evidence of the status is needed in all cases concerning final disposition
codes involving ineligibility. When in doubt, a case should be presumed to be eligible or possibly eligible
rather than ineligible unless there is clear evidence leading to the latter classification.
1.3 Phone Surveys of Lists of Specifically Named Persons
This section covers surveys based on sampling frames of specifically named people or households where
sample members are contacted via the phone. Phone surveys include those conducted via landlines, cell
phones, or a combination. Interviews conducted using text messages or SMS surveys will be covered
separately in Section 1.6. Standard Definitions uses Census definitions for households, group quarters,
and other related entities.
This section assumes that within-household selection procedures are not relevant because particular
individuals or households of particular individuals are sampled, and no further selection is necessary.
The discussion of completed interviews for phone surveys is similar to that for other modes, so one may
refer to Table 1 on page 11 for the list of dispositions.
Eligible, No Interview (Non-response)
Eligible cases for which no interview is obtained consist of four types of non-response:
a) refusals and break-offs (2.10); b) non-contacts (2.20); c) others (2.30); and d) miscellaneous (2.90).
Please refer to Table 1.1 above for more details. However, note that to be considered in one of these
categories, cases must first have been determined to be eligible. This determination may be made before
sampling if it is determined that the sample frame (list) is complete, accurate, and up-to-date and no
additional eligibility screening is required.
Unknown Eligibility, Non-Interview
Dispositions related to Unknown Eligibility are summarized in Table 1.2 above. Cases of unknown
eligibility and no interview (3.0) include situations in which the sampled entity is unreachable at the
listed phone number (3.21) and those in which the named sample unit is reached, but it is unknown
whether they are eligible based on screening criteria (3.20). In several situations, it is impossible to
determine if a phone number is for the named individual, and therefore the screener cannot be
completed (3.215). Because several of these statuses often are temporary problems, it is advised that
these numbers be redialed occasionally within the field period before assigning a final disposition of
unknown eligibility.
Not Eligible
Table 1.3 above summarizes ineligible cases for named or list samples. Ineligible cases for named
samples primarily consist of two scenarios: a sample member is deemed ineligible based on a screener
(4.1), or a sample member is ineligible for specific replicates because a quota was filled.
Surveys with frames of named individuals tend to have fewer ineligible codes because the person or
sampled entity is often assumed to be potentially eligible, even when the contact method creates a
barrier to contact.
1.4 In-Person Surveys of Lists of Specifically-Named Persons/Entities
This section applies to surveys that recruit respondents in person in which the sampling unit is a
specifically named person, household, or other entity.
In such surveys, the named entity is the appropriate respondent, as described in the frame-level
introduction.
The discussion of completed interviews for in-person surveys is similar to that for other modes, and so
one may refer to Table 1 in the introduction section for the list of dispositions.
Additionally, many of the same situations can apply to in-person surveys as do for mail surveys, so one
may refer to Section 1.1 and Tables 1.1 through 1.3 for a comprehensive discussion of the application of
the various codes. The subsections below provide additional information relevant to in-person surveys
of named individuals.
Eligible, No Interview (Non-response)
Eligible cases for which no interview is obtained consist of: a) refusals and break-offs (2.10); b) non-
contacts (2.20); c) others (2.30); and d) miscellaneous situations (2.90), as summarized in
Table 1.1 in Section 1.
In situations where screening is not required to determine eligibility, cases in which no one was reached
at their housing unit should be given code 2.24. Specific cases in which a data collector was unable to
gain access to a housing unit should be coded as 2.241.
Unknown Eligibility, Non-Interview
As shown in Table 1.2 in Section 1, cases of unknown eligibility and no interview (3.0) include situations
in which it is unknown whether the selected list member is eligible based on screening criteria and
screening is not completed for various reasons. Situations where a screener is required but not
completed can be assigned disposition 3.21. If an interviewer cannot reach the housing unit, including if it
is unsafe, that case should be given disposition 3.217. Housing units that cannot be located may
be assigned 3.218. Cases that were not attempted or worked but should have been may be given
disposition 3.23. Finally, other situations can be given disposition 3.90.
Not Eligible
Table 1.3 in Section 1 summarizes ineligible cases for face-to-face surveys of list samples. Face-to-face
surveys are typically more straightforward than other modes because interviewers can finalize more
dispositions in the field, including those that are not eligible.
1.5 Web-Push Surveys of Lists of Specifically Named Persons
This section covers surveys based on sampling frames of specifically named people or households where
sample members are initially contacted via one mode (e.g., mail, text message) but complete the
survey online. In this way, participants are “pushed” from a sampling frame oriented in one mode to
data collection via the web, which is a different mode (“web-push”). An example of a web-push survey
of specifically named persons would be a survey of AAPOR membership, where members receive a
postcard invitation with a link to a web survey.
This section assumes that within-household selection procedures are irrelevant because particular
individuals are sampled, and no within-household selection is necessary.
Web-push surveys differ from other types of surveys of specifically-named persons because of their
hybrid approach. In calculating response rates, disposition codes related to participant ineligibility or
unknown eligibility (3.0, 4.0) should be determined by considerations related to the sample frame.
However, among those who are eligible, interview disposition codes (1.0, 2.0) should be determined by
data collection mode. Following the example of a survey of AAPOR members, participant eligibility
disposition codes would draw from mail surveys; however, interview disposition codes would draw from
those related to web surveys.
Eligible, No Interview (Non-response)
As with the modes previously discussed, eligible cases for which no interview is obtained consist of: a)
refusals and break-offs (2.10); b) non-contacts (2.20); c) others (2.30); and d) miscellaneous
(2.90). Please refer to Table 1.1 in Section 1 for more details. However, note that to be considered in
one of these categories, cases must first have been determined to be eligible.
Surveys have to define a date on which eligibility status is determined. This usually would be either the
first day of the field period or the first day a particular case was fielded. Thus, for example, if a person
were selected as a sample member and alive on the first day of data collection but died before an
interview was completed, the case would be classified as a non-response due to death (2.31). If the
individual died before the eligibility date, they would be considered ineligible.
Unknown Eligibility, Non-Interview
As shown in Table 1.2, cases of unknown eligibility and no interview (3.0) include situations in which it is
unknown whether the selected list member is eligible based on screening criteria, as nothing is ever
returned for various reasons (3.20). Codes under 3.21 reflect various USPS return codes that may be
appended to a returned mailing. Researchers using a specialized carrier like FedEx or UPS may have
additional information on delivery status that may be considered in assigning codes appropriately, i.e., it
may be clearer if a household resident refuses a delivery than with standard mail.
Not Eligible
As shown in Table 1.3 in the introduction of the “List Samples” section, ineligible cases for named
samples primarily contacted via web push consist of two scenarios: a sample member is
deemed ineligible based on a screener (4.1) or a sample member is ineligible for specific replicates
because a quota was filled. Surveys with frames of named individuals have fewer ineligible codes
because the person is often assumed to be potentially eligible, even when the contact method creates a
barrier to contact.
1.6 SMS (Short Message Service) or “Text Message” Surveys
An SMS survey is one in which the primary mode of contact is a text message sent to a mobile phone
number. This section will use the terms “text message” and “SMS” interchangeably. In an SMS survey,
the text message serves as the survey invitation, and respondents may be asked to answer questions via
back-and-forth messaging or a link to a web survey. In the back-and-forth scenario, respondents are sent
a question via SMS, reply to the question via SMS, and are then sent the next question via SMS. This
continues until all questions have been asked.
SMS may also be used as part of a design incorporating multiple contact modes. Examples include using
SMS as a second form of contact for a web survey or using SMS to send prenotification messages for a
phone or ABS survey. The SMS frame may be constructed from “named” individuals, such as a customer
list or study participants recruited via another frame/mode and who provided SMS contact information.
This section considers “named” respondents. For “unnamed” respondents, refer to the “unnamed”
sections.
In cases where SMS is used as a secondary form of contact for listed samples of named individuals,
readers should reference the section above for the primary mode of contact.
In some cases, research is subject to the requirements of the United States Telephone Consumer Protection Act11
(TCPA), and it may be necessary to obtain consent from respondents to send them a text
message (if technology that complies with TCPA is not used). If a consent stage is required, the
researcher(s) should also calculate and disclose the response rate for the consent stage.
As with other modes of contact, dispositions for SMS surveys can be divided into interviews, eligible
cases that are not interviewed, cases of unknown eligibility, and cases that are not eligible. SMS
technology platforms can provide the disposition of each message sent, although the available
disposition information can vary greatly by provider. In the case of an SMS invitation with a push to a
web survey, the final disposition is typically the outcome of the SMS invitation or final reminder (if a
series of reminder messages are sent). For back-and-forth messaging, each message sent will have a
disposition, and temporary and final disposition codes can be assigned based on the series of messages
(see the section on temporary and final disposition codes for more information).
In many studies using SMS, the named individual may have provided the number for the text messages
and provided consent to send the message. In this case, the respondent may have been pre-screened
and has known eligibility, and there may not be SMS dispositions that belong in the unknown eligibility
or ineligible categories. The researcher should be transparent with the calculation method and disclose
how respondents were categorized into eligible, unknown, and ineligible.
As noted above, the disposition of completed interviews for SMS surveys is similar to that for other
modes, and therefore one may refer to Table 1 in the introduction section for the list of completed
interview dispositions. The sections below discuss codes unique to SMS.
Eligible cases that are not interviewed (non-respondents)
Eligible cases that are not interviewed include refusals (2.11), partial completes with insufficient data
(2.12), and cases of known eligibility where contact with the respondent was not made (2.20). In the
case of SMS, a refusal may come as a request to opt out of future messages. Most text messages include
an option for the respondent to text “STOP” (or some equivalent phrase), which opts them out of future
messages.
11 https://www.fcc.gov/sites/default/files/tcpa-rules.pdf
Cases of unknown eligibility
There are two primary scenarios for unknown eligibility: cases where the SMS is successfully delivered
but there is no response, and cases where the message is not successfully delivered. If the frame has
been pre-screened for eligibility and sample members have provided a valid phone number for SMS
contact, many of these cases may belong in the “eligible non-interview” category.
Messages may be confirmed as successfully delivered by the SMS provider, but eligibility has not yet
been determined (3.20).
Messages may also go undelivered. Reasons for an undeliverable message include cases that are
returned undeliverable (3.219). This could be generic, or more specific information may be known: a
message blocked by the carrier (3.2191), a message that failed to send (3.2192), reaching a device that
does not support SMS (3.2194), having an otherwise unreachable device (3.2193), or a device that is
powered off (3.2195). Messages may also be sent to disconnected or non-working numbers (3.216).
The SMS provider may use categories slightly different from the specific examples provided here but
should be able to provide details to classify why the message could not be delivered.
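One way to operationalize this is a lookup from the provider's delivery status to the dispositions above, as in the Python sketch below. The status labels are hypothetical and vary by platform, and, as noted earlier, a pre-screened frame may warrant eligible non-interview codes rather than unknown-eligibility codes.

    # Illustrative mapping from hypothetical SMS provider statuses to dispositions in Table 1.2.
    SMS_STATUS_TO_DISPOSITION = {
        "delivered, no reply": "3.20",    # delivered, eligibility not yet determined
        "undelivered": "3.219",           # generic undeliverable invitation
        "blocked by carrier": "3.2191",
        "failed to send": "3.2192",
        "device unreachable": "3.2193",
        "device not supported": "3.2194",
        "device powered off": "3.2195",
        "unknown error": "3.2196",
        "disconnected or non-working number": "3.216",
    }

    def classify_sms_status(provider_status: str) -> str:
        # Default to the broad unknown-eligibility code for statuses not listed above.
        return SMS_STATUS_TO_DISPOSITION.get(provider_status.lower(), "3.20")

    print(classify_sms_status("Blocked by carrier"))  # 3.2191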
Not Eligible
Ineligible cases include respondents who do not qualify for the survey, such as respondents who screen
out of the survey (4.1) or who are ineligible because quotas were filled, with the criteria defined by replicate (4.8).
Section 2: Address-Based Samples (ABS)
This second section assumes a frame of randomly-selected addresses, a common example of which is
address-based sample designs. Such frames are compatible with both single and multi-mode designs.
Because the frame consists of randomly selected addresses, however, it is assumed that no ancillary
information beyond the address itself is necessary to collect data.
This section covers all surveys in which the original sampling unit is the address of a residence or a
business, that is, the entity at a specific location. We assume only that the frame is a list of addresses
and that it may or may not be possible to acquire or match the necessary information to conduct data
collection beyond mail or face-to-face.
In practice, for samples drawn from such frames, the mail is often the primary mode of contact; a
hardcopy questionnaire, an invitation to complete an online questionnaire (referred to as “web-push” or
push-to-web), or both may be provided via the mail. Moreover, single-mode face-to-face surveys often
have advance letters as a first contact. Therefore, this section begins (Section 2.1) with a detailed
discussion of disposition codes applicable to mailed invitations or instruments, most of which also apply
to other modes.
This section then discusses additional disposition codes unique to in-person surveys (Section 2.2),
another relatively common primary mode for samples of unnamed addresses.
Sometimes, sample members may also be contacted using data matched from other sources (such as a
matched phone number). Disposition codes specific to these “secondary” modes are discussed in
Sections 2.3 through 2.6.
A common example of a survey of unnamed persons would be a survey that uses an address-based
sampling (ABS) frame built from the USPS’s Delivery Sequence File. In such designs, a sampled unit’s
eligibility can be decomposed into two considerations (ABS Task Force, 2016):
1. Whether the address itself is eligible, that is, whether the address exists and is occupied by a
household.
2. Whether the household at the address is eligible, that is, whether (conditional on the address
being eligible) the household contains at least one person in the survey’s target population.
Only the first consideration may be relevant in practice for general-population studies in which the
target population consists of all households. For studies of specific subpopulations, both considerations
are relevant. In either case, failure to receive a reply to the survey questionnaire would place an address
into the "Unknown Eligibility" category since it cannot be confirmed that the address was an occupied
dwelling unit. Similarly, various postal return codes that failed to establish whether any eligible person
lives at the mailed address would leave the unit’s eligibility unknown.
Because it is common for a substantial number of cases to have unknown eligibility after address-based
surveys of unnamed persons, we recommend that the value of e (i.e., the estimated eligibility rate) be
computed carefully, with consideration of a series of factors such as vacancy rates, rural delivery, non-
residential addresses, etc., plus an adjustment for whatever is known about the addresses in the sample.12 Until a method is found that produces a more reliable estimation of e, researchers must be guided by the best available scientific information on what share of the unknown cases is eligible. For example, a study may have realized a mail-returned-undeliverable rate of 5%, but information from the American Community Survey indicates that the unoccupied household rate is closer to 10%. In such an instance, it would be reasonable to use an e of 10% as long as the assumptions are clearly stated. However, it is important to emphasize that researchers should not intentionally select a proportion for e to boost the response rate.
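As an illustration of how a documented value of e enters an outcome-rate calculation, the following sketch applies an assumed e to a response rate of the AAPOR RR3 form (complete interviews divided by eligible cases plus e times the unknown-eligibility cases). All counts are hypothetical; the point is only that the chosen e, its source, and the rate's sensitivity to it should be reported.

```python
# Illustrative sketch, not a prescription. All counts are hypothetical. e is the
# estimated share of unknown-eligibility cases that are eligible; it should be chosen
# from the best available information and reported with its justification.
def rr3_style_rate(I, P, R, NC, O, unknown, e):
    """Completes divided by (eligible cases + e * unknown-eligibility cases), in the RR3 form."""
    return I / ((I + P) + (R + NC + O) + e * unknown)

counts = dict(I=800, P=50, R=300, NC=400, O=50, unknown=2400)
for e in (0.80, 0.90, 1.00):  # show how sensitive the reported rate is to the assumed e
    print(f"e = {e:.2f} -> response rate = {rr3_style_rate(e=e, **counts):.3f}")
```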
Within-Unit Screening
This section assumes that, within each sampled address, some form of within-unit respondent selection
or screening will be used to determine if there is at least one eligible respondent to complete the survey
questionnaire. For example, the Kish method or some form of the so-called birthday method might be used to randomly (or pseudo-randomly) sample a respondent among all eligible persons residing there. Alternatively, a purposively determined respondent might be designated by their role within the unit (e.g., a parent or guardian of any children in the household, the person most knowledgeable of the household’s expenses, the accountant for the business, or the secretary-treasurer of a club or other voluntary organization). Of course, other selection procedures, such as including all eligible persons, might also be employed.
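As a rough illustration of two of the selection approaches mentioned above, the sketch below draws a respondent from a household roster either at random or by a "next birthday" rule. The roster structure and field names are hypothetical, not a prescribed data format.

```python
# A rough sketch only; the roster structure and field names are hypothetical.
import random

def select_random_eligible(roster):
    """Simple (pseudo-)random selection among rostered eligible persons."""
    eligible = [p for p in roster if p["eligible"]]
    return random.choice(eligible) if eligible else None

def select_next_birthday(roster):
    """'Next birthday' variant: the eligible person whose birthday comes soonest."""
    eligible = [p for p in roster if p["eligible"]]
    return min(eligible, key=lambda p: p["days_until_birthday"]) if eligible else None

roster = [
    {"name": "A", "eligible": True,  "days_until_birthday": 40},
    {"name": "B", "eligible": True,  "days_until_birthday": 200},
    {"name": "C", "eligible": False, "days_until_birthday": 10},
]
print(select_random_eligible(roster)["name"], select_next_birthday(roster)["name"])
```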
Importantly, this section assumes that the survey is implemented in a single phase. Namely, any
screening and within-unit selection is implemented in the same questionnaire as the primary interview,
and a single disposition code applies to both the screener and the main interview.
When screening and the main interview are implemented in a single phase, care must be taken in
determining whether a sampled unit should be assigned an eligible nonrespondent or an unknown
eligibility code. Cases for which there is a household at the sampled address, but it is unknown whether
an eligible respondent usually crops up because of a failure to complete a needed screener. Even if this
failure clearly were the result of (for example) a “refusal” or a breakoff, it would only be assigned to one
of these eligible nonresponse codes if the existence of an eligible respondent were known or could be
inferred (e.g., the target population includes all households). Otherwise, it should be assigned “No
screener completed” (unknown eligibility, code 3.21). If useful for operational reasons, researchers
could create sub-codes that delineate the reason for the non-completion of the screener.
A two-phase design is an alternative approach, particularly common for mailed surveys targeting
subpopulations. In such a design, sampled addresses are first mailed a screening questionnaire in which
they are asked to enumerate the residents of the address and provide the necessary information to
determine whether each person is eligible. For households that return the screener and list at least one
eligible person, one or more eligible members are selected for the main interview, which is then mailed
out separately.
In a two-phase design, dispositions are assigned separately for the screening and main interview phases.
Different standards should be applied to the two phases.
12 For a comprehensive discussion of the calculation of e, please see A Revised Review of Methods to Estimate the Status of Cases with Unknown Eligibility, Second Edition (forthcoming).
- The first (screening) phase should follow the standards described in this section for surveys of unnamed persons at addresses. At this phase, the target population for the screener is all households; therefore, all addresses occupied by a household are eligible, regardless of whether the household includes a person in the target subpopulation. Consequently, screener refusals would be coded as eligible nonrespondents to the first phase.
- The second (main interview) phase should follow the standards described in Section 1 on surveys of named persons. This is because, at this phase, the sampling frame is the completed screening roster (limited to eligible persons).
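A minimal sketch of this bookkeeping, under the assumptions just described, is shown below: each sampled address carries a screener-phase disposition, and only screened-in cases carry a main-interview disposition, so that outcome rates can be reported separately for the two phases. The specific code values used are examples only.

```python
# Minimal bookkeeping sketch for a two-phase design; code values are examples only
# (1.x = complete/partial, 2.x = eligible non-interview, 3.x = unknown eligibility).
from dataclasses import dataclass
from typing import Optional

@dataclass
class TwoPhaseCase:
    screener: str                 # disposition under this section's rules (unnamed addresses)
    main: Optional[str] = None    # disposition under Section 1 rules, only if screened in as eligible

cases = [
    TwoPhaseCase("1.1", "1.1"),   # screener returned, eligible person listed, main interview completed
    TwoPhaseCase("1.1", "2.11"),  # screened in, but refused the main interview
    TwoPhaseCase("2.11"),         # screener refusal: eligible nonrespondent to the first phase
    TwoPhaseCase("3.19"),         # nothing ever returned: unknown eligibility for the first phase
]

screener_completes = sum(c.screener.startswith("1.") for c in cases)
main_completes = sum(1 for c in cases if c.main and c.main.startswith("1."))
print(screener_completes, main_completes)  # outcome rates are then reported separately by phase
```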
Using appended supplemental contact information
In unnamed samples of randomly selected addresses, units are sometimes contacted by modes other
than mail or in person. This could include contact by email, phone, and SMS/text. Such contacts require
appending additional information, namely an email or phone number, to sampled addresses from
external sources such as commercial databases. Appended email addresses and/or phone numbers are
typically available only for a portion of sampled addresses. Therefore, in surveys of randomly selected
addresses, these are typically used as secondary contact modes (e.g., for nonresponse follow-up
contacts) rather than primary contact modes (which would typically be mail, including push-to-Web, or
in-person).
It is also important to note that email, phone, or SMS may be used as a mode of contact, but the
sampling unit remains the physical address. In an unnamed sample of randomly selected addresses, the
goal of data collection is to obtain a response from the physical address to which the phone number or
email address was matched, not the phone number or email address per se. Furthermore, the accuracy
of appended phone numbers or email addresses may vary. In some cases, the appended phone number
or email address may not actually be associated with the sampled address. For example, an appended
phone number could be the cell phone number of a person who no longer lives at the address.
With this in mind, if an attempt is made to contact an address via an appended phone number or email
address, the screening for persons contacted by these modes should include confirmation that the
person lives at the sampled address. If the contacted person does not live at the sampled address, or it cannot be determined whether they do, the proper classification is unknown eligibility (specifically one of the sub-codes under 3.126), since no information about the sampled address has been obtained. In either case, the unknown eligibility code should take precedence over any other disposition unless some other information about the address’s eligibility status has been obtained. If such screening is omitted because of cost considerations, survey organizations should be aware that this may introduce an unknown amount of error in assigning disposition codes and, therefore, into procedures that rely on accurate disposition information, such as response rate calculations and weighting.
These considerations also imply that, as discussed below, some phone and email dispositions classified
as eligible nonresponse or ineligible in an RDD or named-person sample are appropriately classified as
unknown eligibility in an unnamed sample of randomly selected addresses. For example, it would not be
appropriate to classify a business number as ineligible in an ABS design unless it was confirmed that the
sampled address was a business address.
Appending a Name to Randomly Selected Addresses
Usually, when conducting a study of unnamed households by mail, a generic salutation such as “Postal
Customer” or “[CITY] Resident” is used in the address. However, researchers sometimes append a name
(individual or family) to a sample of addresses by merging addresses to a commercial database. In these
cases, using the appended name in addressing the mailing envelope or package is considered a “tool” of unknown reliability to try to reach and gain cooperation at the address, not a means to select a specific respondent a priori. That is, the person or household named in the address is not themselves the
sampled unit, but it is hoped that including a name will increase the probability of receiving a response
from the actual sampled unit (the address). In such situations, the standards for surveys of unnamed
persons are most appropriate, even though a name is included on the contact materials.
It is important to note that appending a name to the envelope may result in unintended consequences
in a survey of unnamed persons. Utilizing a name may result in the sampled address being circumvented
if the mail is redirected to a new address to which the person whose name was appended has moved.
The USPS will typically direct the mailing to the named person even if they no longer reside at the
address on the mailing. Thus, researchers may have unknowingly sidestepped their goal of sampling an address and administering a screener for within-address selection.
The same postal return codes may properly be assigned to different final dispositions in two studies
based on different eligibility assumptions as in the examples above. In these and other instances, the
rules of eligibility and the assumptions about eligibility will vary with the study design. Because the
nature of surveys that sample and recruit respondents via the mail is quite variable, researchers must
clearly describe their study and its sample design and explicitly state and justify their assumptions about
the eligibility of the units in their initially designated sample to properly inform others of how the final
unit dispositions are determined.
Throughout this section and in the tables, Standard Definitions explicitly uses the language employed by
the USPS to account for USPS dispositions in which mail is not delivered to an address. Researchers
operating in other countries or utilizing non-USPS mailers (e.g., Federal Express) should treat these
classifications as illustrative and naturally will have to use their own postal service’s codes. Non-USPS
codes should follow the Standard Definitions’ logic and intent, as illustrated by the USPS codes.
Table of disposition codes
Tables 2.1, 2.2, and 2.3 provide eligible nonresponse, unknown eligibility, and ineligible codes
(respectively) that are applicable when sampling randomly selected addresses. As in earlier sections, a
single asterisk identifies a new disposition code; a disposition that has been changed from the prior
version of the AAPOR Standard Definitions is indicated by two asterisks. Please refer to the Introduction
of this report for a discussion of general principles related to identifying (fully or partially) completed
surveys, which apply regardless of frame.
Since mail is usually the primary mode of contact when sampling randomly selected addresses, the
definition section begins with dispositions that apply to surveys conducted through the mail (via a hard-
copy questionnaire and/or push-to-web mailings). Additional subsections then provide more
information specific to interviews conducted in person or via a secondary mode relying on appended information such as matched phone numbers (either live-interviewer or SMS/text-based surveys), most of which are also applicable when using other modes with an address-based frame.
Table 2.1. Valid Eligible, No Interview (non-response) Dispositions for Samples of Unnamed Addresses
Description
Value
Notes & Examples
Eligible, Non-interview
2.0
To use any of these codes, the sampled address must have been
confirmed to be an occupied residence and (if further screening for
eligibility is required) to contain at least one eligible person. If
contacting via an appended phone number or email address, it
must be confirmed that the individual reached lives at the sampled
address.
Example: If an individual is reached by phone and states "I do not
want to participate" before confirming that they live at the
sampled address and meet the eligibility criteria (if any), the
address should not be classified as an eligible refusal (2.10). See
Table 2.2 for guidance on classification of these types of cases.
Refusal and break-off
2.10
Some contact has been made with the household, and they have
refused to participate or have broken-off.
Refusal
2.11
Household-level (or proxy) Refusal
2.111
A member of the household has declined to do the interview for
the entire household.
Another individual explicitly refuses to allow participation.
Parent or guardian refusal
2.1111*
The parent or guardian of named minor respondent refuses to
allow participation
Known respondent refusal
2.112
Selected respondent or entity directly refuses to participate.
Logged on to web survey, did not
complete any items (appended e-
mail)
2.1121
If contacting via an appended email address, this code is unlikely to
be used with ABS surveys since this would require confirmation
that the email address was associated with the sampled address.
Such cases should typically be classified as Unknown Eligibility,
specifically using the "failure to complete screener" code.
Read receipt confirmation, refusal
(appended e-mail)
2.1122
This is unlikely to be used with ABS surveys since this would
require confirmation that the email address was associated with
the sampled address. Such cases should typically be classified as
Unknown Eligibility, typically, "failure to complete screener" code.
Other implicit respondent refusal
2.113
Blank questionnaire returned
(mail)
2.1131*
Selected respondent (known to be
eligible) set appointment but did
not keep it (appended phone or In-
person)
2.1132*
Selected respondent (known to be
eligible) opted out of SMS
communication (SMS with
appended phone)
2.1133*
Break-off
2.12
The selected respondent began the interview, web survey, or
questionnaire but opted to terminate it or returned it with too
many missing items before completing enough of it to be
considered a partial complete (see Introduction for guidance on
classification of partial interviews).
Non-contact
2.20
Selected respondent unavailable
2.21
Household is confirmed as eligible but selected respondent never
available or unable to complete during the field period.
Phone answering device
(appended phone)
2.22
No contact has been made with a human, but a phone answering
device (e.g., voicemail or answering machine) is reached that
includes a message confirming it is the number for the named
sample member. This code is only used if no further screening is
necessary.
This code cannot be used for ABS final status unless phone number
is confirmed to be associated with sampled address
No message left (appended phone)
2.221
Message left (appended phone)
2.222
The interviewer left a message, alerting the household that it was
sampled for a survey, that an interviewer will call back, or with
instructions on how a respondent could call back.
Other non-contact
2.23
Quota filled (in released replicate)
2.231*
No one reached at housing unit
(In-person)
2.24
Can only be used if address is confirmed to be a residence and no
further screening is required to confirm eligibility
Inability to gain access to sampled
housing unit (In-person)
2.241*
Can only be used if address is confirmed to be a residence and no
further screening is required to confirm eligibility
Questionnaire
completed/returned too late
(outside of field period)
2.27
Other
2.30
Selected respondent died before
completing survey
2.31
-This is not common for ABS of unnamed respondents.
-This should not include USPS code of "deceased" for ABS of
unnamed respondents
-Must be able to determine that selected respondent was eligible
on the survey status date and died subsequently
Physically or mentally
unable/incompetent
2.32
The selected respondent's physical and/or mental status makes
them unable to do an interview. This includes both permanent
conditions (e.g., senility) and temporary conditions (e.g.,
pneumonia) that prevailed whenever attempts were made to
conduct an interview. With a temporary condition, the respondent
could be interviewed if re-contacted later in the field period.
Language or Technical Barrier
2.33
These can only be used if address is confirmed to be a residence
and no further screening is required to confirm eligibility
No one in the household speaks a
language in which the interview is
offered
2.331
The selected respondent does not
speak a language in which the
interview is offered
2.332
No available interviewer with
appropriate language skills at the
time of contact/Wrong language
questionnaire sent
2.333
The language spoken in the household or by the respondent is
offered, but an interviewer with appropriate language skills cannot
be assigned to the household/respondent at the time of contact.
Wrong language questionnaire sent - unable to send appropriate
questionnaire within the field period.
(Matched phone) Inadequate
audio quality
(Mailed or push to web) Literacy
problems
2.34
This can only be used if address is confirmed to be a residence and
no further screening is required to confirm eligibility
Location/Activity not allowing
interview
2.35
This can only be used if address is confirmed to be a residence and
no further screening is required to confirm eligibility
Example: matched cell phone reached while person is driving (no
screening required and address eligibility confirmed); gated
community (in-person); natural disaster disrupted mail (mail)
Someone other than respondent
completes questionnaire or
interview
2.36
Eligibility status of actual respondent must be known
Someone other than respondent
completes questionnaire or
interview - Full questionnaire
completed
2.361
Someone other than respondent
completes questionnaire or
interview - Partial questionnaire
completed
2.362
Wrong number (appended phone)
2.37
Eligibility of address/respondent must be confirmed via another
source. Unlikely to be common for ABS
Miscellaneous (eligibility
confirmed)
2.90
Examples: vows of silence, lost records, faked cases invalidated
later on
Table 2.2. Valid Unknown Eligibility, Non-Interview Dispositions for Samples of Unnamed Addresses
Description
Value
Notes & Examples
Unknown Eligibility, Non-Interview
3.0
Unknown if housing unit
3.10
No info known about sampled address/housing unit.
Not attempted or worked
3.11
Examples:
- No invitation sent
- Questionnaire never mailed
- No contact attempt made
- Address not visited
Note: all cases in unassigned replicates (i.e., replicates in which no contact has been attempted for any case in the replicate) should be considered ineligible (Table 2.3), but once interviewers attempt to
contact any address in a given replicate, all cases in the replicate have
to be individually accounted for.
Unreachable, unknown if
phone/email connects to sampled
address/residence, no other
information about housing unit
available (appended phone, email,
or SMS)
3.12
The codes under this heading apply if there is no indication of
whether the phone number or email address is associated with the
sampled address
Always busy (appended phone)
3.121**
No answer (appended phone)
3.122**
Answering device (appended
phone)
3.123**
Telecommunication technological
barriers, e.g., call-blocking (no
indication if phone connects to
sampled address/residence)
(appended phone)
3.124**
Call-screening, call-blocking, or other telecommunication technologies
that create barriers to getting through to a number.
Technical phone problems
(appended phone)
3.125**
Examples: phone circuit overloads, bad phone lines, phone company
equipment switching problems, phone out of range (AAPOR Cell
Phone Task Force, 2008 & 2010b; Callegaro et al., 2007).
Ambiguous operator’s message
(appended phone)
3.1251**
An ambiguous operator’s message does not make clear whether the
number is associated with a household. This problem is more
common with cell phone numbers since there are both a wide variety
of company-specific codes used and these codes are often unclear
(AAPOR Cell Phone Task Force, 2010b).
Inadequate audio quality
(appended phone)
3.1252*
Location/Activity not allowing
interview (appended phone)
3.1253*
Example: cell phone reached while person is driving
Fax/Data line (appended phone)
3.1254*
Non-working/ disconnected
number (appended phone)
3.1255*
Reached a person, unable to
confirm matched address
(appended phone, SMS, or email)
3.126*
Address confirmation refusal
(appended phone, SMS, or email)
3.1261*
Address confirmation unreached
(appended phone, SMS, or email)
3.1262**
Phone number or email address not
associated with the sampled
physical address (appended phone,
SMS, or email)
3.1263*
A respondent was reached at the phone number but does not live at
the sampled address (meaning that the number was wrongly matched
to the address).
SMS Text undeliverable (appended
cell phone SMS)
3.13*
The codes under this heading apply if there is no indication of
whether the phone number is associated with the sampled address.
Carrier blocked message (appended
cell phone SMS)
3.131*
Message failed to send (appended
cell phone SMS)
3.132*
Device does not support text
messages (appended cell phone
SMS)
3.133*
Device unreachable (appended cell
phone SMS)
3.134*
Device powered off (appended cell
phone SMS)
3.135*
Unknown SMS error (appended cell
phone SMS)
3.136*
(Matched email): Email invitation
returned undelivered (appended
email)
3.14*
This code applies if there is no indication of whether the email
address is associated with the sampled address.
Interviewer unable to reach
housing unit and cannot verify
address (In-person)
3.17
Includes situations where it is unsafe for an interviewer to attempt to
reach a housing unit.
Interviewer unable to locate
housing unit/address (In-person)
3.18
If the unit does not exist, this would be an Ineligible (4) code.
Nothing ever returned/no
information about address
3.19
Web link never opened (appended
email or cell phone SMS)
3.191*
This code applies if there is information that a web link was never
opened.
No reply received (appended cell
phone SMS)
3.192*
No reply to an SMS.
Nothing returned or completed
(mailed survey)
3.199*
Household exists; unknown if
eligible respondent
3.20
There is sufficient information to determine whether the address is
associated with a housing unit, but insufficient information to
determine whether the housing unit/resident is eligible.
No screener completed
3.21
For non-general population survey in which a screening interview is
required to determine eligibility.
Even if the failure to complete the screener were the result of a
“refusal,” it would be classified here unless the existence of an eligible
respondent were known or could be inferred.
USPS Category: Refused by
Addressee [REF] (mailed survey)
3.211
Screener required
USPS category: Returned to Sender
due to Various USPS Violations by
Addressee (mailed survey)
3.212
Screener required
USPS Category: Cannot be
Delivered [IA] (mailed survey)
3.213
-Address must be confirmed occupied/screener required
SEE APPENDIX FOR LIST OF POSSIBLE USPS CODES
NOTE: This is unlikely to be common with ABS since it is unlikely the
unit would be known occupied but received returns from USPS.
USPS Category: Returned to Sender
(mailed survey)
3.214
Address confirmed eligible,
screener required but not
completed (appended phone)
3.215**
Note: These codes are likely to be rare since they require that the
phone number has been confirmed to be associated with the sampled
address, which would usually be a part of screening.
Address confirmed eligible, screener required, always busy (appended phone)
3.2151**
No answer (appended phone)
3.2152**
Household confirmed, screener required
Phone answering device (appended
phone)
3.2153**
Household confirmed, screener required
Telecommunication technological
barriers, e.g., call-blocking
(appended phone)
3.2154**
Household confirmed, screener required
Technical phone problems
(appended phone)
3.2155**
Household confirmed, screener required
Ambiguous operator’s message
(appended phone)
3.2156**
Household confirmed, screener required
Non-working/ disconnected
number. Includes Fax/Data line
(appended phone)
3.216*
Interviewer unable to screen housing unit (In-person)
3.217**
Housing unit confirmed as occupied but interviewer is unable to
complete a screener with the household
Includes situations where it is unsafe for an interviewer to attempt to
reach a housing unit.
Email invitation returned
undelivered (appended email or cell
phone)
3.219**
Likely to be rare since it requires that the email address has been
confirmed to be associated with the sampled address, which would
usually be a part of screening.
Other unknown eligibility
3.90
This should only be used for highly unusual cases in which the
eligibility of the household/respondent is undetermined and for which
the outcome does not clearly fit into one of the above designations.
Example: High levels of item nonresponse in the screening interview
prevent eligibility determination.
Table 2.3. Valid Not Eligible Dispositions for Samples of Unnamed Addresses
Description
Value
Notes & Examples
Sample Unit Not Eligible
4.0
Selected Respondent Screened Out of
Sample/ Ineligible
4.10
Housing unit determined to be eligible but selected respondent is not
eligible.
This is not likely to be common for ABS because typically selection
would only occur among screened eligible respondents. If the household
has no eligible respondents, this should be coded as 4.70.
Housing unit ineligible
4.30
Sampled address does not exist.
Address not workable
4.31
No such address
4.313
USPS Category: No Such Number
[NSN] (mailed survey)
4.3131
Note that the USPS may make their own misclassification in mail return
codes.
USPS Category: No Such Post Office in
State (mailed survey)
4.3132
USPS Category: No Such Street [NSS]
(mailed survey)
4.3133
USPS Category: Postal Box Closed
(mailed survey)
4.3134
Not a housing unit
4.50
Sampled address is not a within-scope housing unit.
Business, government office, other
organization
4.51
Institution
4.52
Group quarters
4.53
Code does not apply if group quarters are within scope.
Vacant address
USPS Category: Vacant [VAC] (mailed
survey)
4.60
Sampled address is vacant.
Regular, vacant residences
4.61
Seasonal/Vacation/Temporary
residence
4.62
Code may not apply if seasonal/vacation/temporary residences are
within scope.
Other vacant
4.63
No eligible respondent in household
4.70
Sampled address is a within-scope housing unit but does not include any
persons in the target population.
Quota filled (in unreleased sample
replicate)
4.80
Duplicate listing
4.81
Other
4.90
*New disposition code
**Updated disposition code
2.1 Mail/Web-push surveys of Randomly Selected Addresses
Eligible, No Interview (Non-response)
Eligible cases for which no completion is obtained consist of three types of nonresponse: a) refusals and
break-offs (2.1), b) non-contacts (2.2), and c) others (2.3 and 2.9).
Refusals and break-offs include cases in which some contact has been made with an eligible sampled address and someone at the address has declined to complete the questionnaire: either someone has communicated that the questionnaire will not be completed (2.11), or a questionnaire is returned with too few items completed to be treated as a partial response (2.12).
Refusal codes distinguish between household-level (or proxy) refusals (2.111) and known respondent
refusals (2.112). Household-level or proxy refusals (2.111) occur when the researcher knows that the
household contains eligible persons, but the refusal comes from someone other than a specifically-
selected respondent; for surveys of minors, refusal by a parent or guardian represents a special case of
this situation (2.1111). Known respondent refusals (2.112) occur when a specific person at the address
has been selected as the designated respondent and refuses to participate.
In mail surveys of unnamed persons, entirely-blank questionnaires are sometimes mailed back in the
return envelope without any explanation as to why the questionnaire was returned blank. This should
be treated as an “implicit refusal” (2.1131) unless eligibility cannot be inferred (in which case it should
be treated as “No screener completed”, or 3.21) or there is another good reason to apply a different
code. An analogous scenario in a web-push survey would be when a respondent logs into the online
instrument but fails to complete any items. In some instances, when a noncontingent cash incentive was
mailed to the respondent, the incentive was mailed back along with the blank questionnaire.
Researchers may want to create a set of unique disposition codes to differentiate these types of nonresponse from the outcome in which no incentive was returned. Subcodes should be mutually
exclusive and can be reported in a logical grouping along with other subcodes as appropriate when
describing the survey response.
Known non-contacts (2.2) in mail or web-push surveys of unnamed persons include cases in which
researchers receive notification that the eligible respondent was unavailable to complete the
questionnaire during the field period (2.21). There also may be instances in which the questionnaire
was completed and mailed back too late after the field period has ended to be eligible for inclusion
(2.27), thus making the case a “non-interview” instead of a refusal.
A related situation occurs in surveys that employ quotas when returned questionnaires are not treated
as part of the final dataset because the quota for their subgroup has already been filled (2.231). Code
2.231 should be used when a unit meets the sample’s eligibility criteria and would have been included in the final dataset had it responded before the quota was met. Applying a quota
this way is akin to ending the field period early for subgroups whose quota has been filled. This differs
from a situation in which a sample replicate is released with the intention of only accepting responses
from particular subgroups to meet quotas for those subgroups. In such situations, respondents from
that replicate who are outside of the target subgroups(s) for the replicate would be assigned code 4.80
because they do not meet the eligibility criteria for the replicate for which they were sampled.
The guiding principle when applying quotas is that eligibility criteria must be established when a unit is
sampled and should not change based on how long it takes a unit to respond. Accordingly, eligible units
excluded from the final dataset solely because of a late response (whether “late” means after the end of
the field period or after a quota was filled) are correctly coded as eligible nonrespondents, not ineligible
cases. For example, suppose a survey set separate quotas for Black and Hispanic respondents. If the
survey used only one sample release and stopped accepting responses from Hispanic respondents after
their quota was met, any Hispanic responses after this point would be assigned code 2.231 because they
were eligible at the time of sampling. In contrast, if the survey met the Hispanic quota but not the Black
quota in the first sample release and released a second replicate for which only Black respondents were
eligible, Hispanic respondents to the second replicate would be assigned code 4.80. In all cases, what
the quotas are and how they are to be filled must be clearly defined, and whether survey responses
received after quotas have been met are accepted and included in the final data set should be clarified
in survey documentation.
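The distinction between codes 2.231 and 4.80 can be expressed as a simple decision rule, sketched below under the assumptions in the preceding paragraphs: eligibility is fixed at the time a unit's replicate is released, and only units sampled in a replicate for which they were never eligible receive 4.80. The attribute names are hypothetical.

```python
# Hedged sketch of the replicate-based quota rule; attribute names are hypothetical.
def quota_disposition(eligible_for_own_replicate: bool, quota_already_filled: bool) -> str:
    """Classify a returned questionnaire that is excluded because of a quota."""
    if not eligible_for_own_replicate:
        # Sampled in a replicate released only for other subgroups: ineligible by design.
        return "4.80"
    if quota_already_filled:
        # Eligible at the time of sampling, but the quota filled before the response arrived.
        return "2.231"
    return "accept the response and assign a complete or partial code"

print(quota_disposition(eligible_for_own_replicate=True,  quota_already_filled=True))   # -> 2.231
print(quota_disposition(eligible_for_own_replicate=False, quota_already_filled=True))   # -> 4.80
```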
Of note, category 2.2 is reserved for those cases where some indication is received that the selected
respondent is eligible. The more common scenario of simply receiving no response to the invitation, and
no indication of whether the invitation was received, is classified under “unknown eligibility” below.
Other cases (2.3) represent instances in which the respondent within the household is selected and
eligible and does not refuse to complete the questionnaire, but no completion is obtainable because of:
a) deaths (2.31); b) respondent physically or mentally unable to do the questionnaire (2.32); c) language
(2.33) or literacy (2.34) problems; d) location/activity not allowing interview (2.35); or e) someone other
than the designated respondent completes all or some of the questionnaire (2.36). In mail surveys of
unnamed persons, particularly ones in which mail is the only contact mode, this subset of
dispositions (Other, 2.3) would typically occur only if the researchers received unsolicited information
about the respondent that allowed for classification as an eligible nonrespondent.
In surveys of unnamed addresses, death constitutes an eligible nonresponse if a respondent at the
sampled address had previously been confirmed to be eligible but dies before the full questionnaire is
completed, which is likely to be rare. Whether a deceased sample member is an eligible nonresponse or
an ineligible respondent depends on fieldwork timing. Surveys must define a date on which eligibility
status is determined. This would usually be either the first day of the field period or the first day a
particular case was mailed requesting participation in the survey. Thus, for example, if a person were
alive and selected as the respondent on this status date but died before a questionnaire was completed,
the case would be classified as a nonresponse due to death (2.31). However, in cases where a
respondent is not randomly selected but rather a most knowledgeable person is selected for
participation, the researchers may choose to re-approach the sampled unit to determine if a newly-
eligible respondent is now capable of completing the questionnaire. For example, in a survey that any responsible household member is asked to complete on behalf of a household, if the responsible household member who was alive at the time the household was first contacted dies during the field period, a different household member could become the eligible respondent for the sampled
household. If this is done, the final outcome of the case would be determined by what happens during
the effort to gain cooperation from a newly-eligible respondent. Similar time rules would apply to other
statuses.
Of note, 2.31 would not be used for the USPS “deceased” return code in a survey of unnamed persons.
Since this code pertains to a person rather than an address, it would typically only be encountered with
an unnamed frame if a name were appended to the address and included on the mailing. As discussed
above, even when an appended name is used on the mailings, the sampled unit remains the address,
not that person. Therefore, the appropriate code in this situation would be unknown eligibility due to
undeliverability (3.213) unless the researcher had other information indicating that there were no other
living persons at the address, in which case the “no eligible respondent” code (4.70) would be
appropriate.
Selected eligible respondents who are physically or mentally unable to complete the questionnaire
(2.32) would include both permanent conditions (e.g., senility, blindness, paralysis) and temporary
conditions (e.g., pneumonia, drunkenness) that prevailed throughout the field period. With a temporary
condition, it is possible that the respondent could/would complete the questionnaire if recontacted
later in the field period or if the field period were later extended. But again, physical or mental barriers
may cause the original eligible respondent to no longer be eligible. In these instances, researchers could
choose to re-approach the sampled unit and try to gain cooperation from the newly-eligible person. If
this is done, the outcome of the case would be determined by what happens during the subsequent
effort to gain cooperation from a newly-eligible respondent.
Language problems (2.33) include cases in which no one at the address speaks or reads a language in
which the interview is offered (2.331) or the specific designated respondent does not speak or read this
language (2.332). It also would include instances (2.333) in which interviews are available in a language the eligible respondent can speak, but that language was not offered to the eligible respondent (e.g., the questionnaire is printed in that language, but that version was never sent to the
respondent). In contrast, literacy problems (2.34) would apply to cases in which the selected eligible
respondent could speak/read one of the languages in which interviews were offered but could not read
it well enough to comprehend the meaning of the questions.
While location/activity not allowing an interview (2.35) mostly applies to interviewer-administered
surveys, an example with a mailed survey would be a natural disaster that disrupts the mail in a
particular area during the survey’s field period.
When the sample design requires the designation of a single, specific respondent per sampled address,
and the researcher learns that someone other than the designated respondent (or a qualified proxy, if
proxy responses are permitted) completed the questionnaire, the unit should be classified as an eligible
nonresponse (2.36). Distinctions between full (2.361) and partial (2.362) completions can be made.
Again, in this scenario, the researcher could choose to re-approach the sampled unit to gain cooperation
from the correct person. In this case, the outcome would be determined by what happens during that
subsequent effort.
The miscellaneous designation (2.90) would include cases involving some combination of other reasons
(2.30) or special circumstances (e.g., lost records or faked cases invalidated later on).
As noted below, some USPS undeliverable codes (classified as subcodes of 3.2) suggest that an address
exists but provide no information about the characteristics of the person(s) at an address. These codes
may be encountered when names are appended to the address file and included in the address on
mailings; in such cases, mail may be returned as undeliverable if it cannot be delivered to the specific
person or household to which it is addressed. Researchers may choose to resend the mailing with a generic
salutation (e.g., “Postal Customer”). But if mail is returned and no more attempts to reach that address
are made, the proper classification of these codes depends on whether eligibility can be inferred.
If only a specific type of respondent is eligible for the survey, then (given that no screening at the address was completed) these codes should be classified as unknown eligibility
because the person whose name was appended to the address is not necessarily the selected/eligible
respondent. However, if the existence of the address is, by itself, sufficient to confirm the address’s
eligibility, these codes should be classified as eligible nonresponse. In all cases, in mail surveys of
unnamed persons, no attempt should be made to forward the envelope to a new address for the person
whose name was included in the address.
Unknown Eligibility, Non-Interview
Cases of unknown eligibility (3.0 and following) include situations in which nothing is known about
whether the mailed questionnaire or invitation ever reached, or could have reached, the sampled
address to which it was mailed (3.1); situations in which the address is confirmed to exist, but it is
unknown if any eligible person is present at the address (3.2, relevant to studies that require screening);
and other situations (3.9).
For mail or web-push surveys, the unknown-eligibility subset in which nothing is learned about whether
the mailing could or did reach the sampled respondent is broken down further into cases in which: a)
the questionnaire was never mailed (3.11) and b) absolutely no information ever reaches researchers
about the outcome of the mailing (3.199). This latter disposition often occurs with high frequency in
mail surveys.
Failure to complete a required screener (3.21) is a case of unknown eligibility. When screening is
required, several USPS undeliverable codes would be classified as subcodes of 3.21. These could include
situations in which the mailing is refused or unclaimed by the resident of the address (3.211); returned
due to USPS violations by the addressee (3.212); or any other situation in which the mailing cannot be
delivered, but the address may be occupied, which is likely to be rare (3.213). Such codes will typically
be classified as unknown eligibility unless the address’s existence is sufficient to establish eligibility; in
such situations, these would be more appropriately classified as eligible nonresponse due to non-
contact (see discussion above).
The miscellaneous unknown eligibility code (3.9) should be used only for unusual situations that do not fit into the above categories and in which it cannot be determined whether an address includes eligible persons. An example would be if high levels of item nonresponse in the screening items precluded an
eligibility determination.
Not Eligible
For mail surveys of unnamed persons, code 4.1 should be used when the sampled address is determined to be eligible and an individual respondent is selected, but that individual respondent is later determined to be ineligible and no other eligible persons are at the address. This situation is likely to be rare with
ABS since respondents would typically be selected only among residents confirmed to be eligible (e.g.,
via a roster).
For the more common scenario in which it is confirmed up-front that an address contains no eligible
persons, code 4.7 is more appropriate.
USPS undeliverable codes indicating that the address does not exist can be assigned to the appropriate
subcode of 4.3 (housing unit ineligible). These can include “no such number” (4.3131), “no such post
office in state” (4.3132), “no such street” (4.3133), and “postal box closed” (4.3134).
For studies of households, the appropriate sub-code of 4.5 (not a housing unit) can be applied to
addresses that are confirmed to be non-residential and out of scope for the survey, such as businesses
(4.51), institutions (4.52), or group quarters (4.53).
Addresses confirmed to be vacant (e.g., via the USPS “vacant” undeliverable code) should be classified
under 4.6. If sufficient information is obtained, these can be further disaggregated between regular
(4.61), seasonal/vacation (4.62), and other vacant addresses (4.63). For ABS samples, cases should be assigned to a “not eligible” code where some or all mail has been returned with one of the USPS undeliverable codes and no more definitive information has been received (e.g., a complete or active refusal), regardless of the USPS “vacancy” indicator on the sample frame. According to USPS guidelines, an address must be unoccupied for 90 days to be classified as vacant (Harter et al., 2016). For this reason, researchers should not rely on frame information to determine case dispositions. A sampled address may have become vacant within the past 90 days and not yet be classified as such on the frame.
Alternatively, addresses classified on the frame as vacant may be occupied by eligible respondents
during data collection.
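One hedged way to operationalize this guidance is a lookup from observed USPS return categories to the ineligible codes above, deliberately ignoring the frame's vacancy flag. The category strings below are shorthand for the USPS labels, and the fallback behavior is only one reasonable choice; ambiguous returns should be refined using Table 2.2.

```python
# Illustrative lookup from selected USPS return categories to the ineligible codes above.
# The category strings are shorthand for the USPS labels; the frame's vacancy flag is
# deliberately ignored, and anything not clearly ineligible is presumed possibly eligible.
USPS_RETURN_TO_DISPOSITION = {
    "NO SUCH NUMBER":      "4.3131",
    "NO SUCH POST OFFICE": "4.3132",
    "NO SUCH STREET":      "4.3133",
    "POSTAL BOX CLOSED":   "4.3134",
    "VACANT":              "4.60",
}

def mail_return_disposition(usps_category, frame_vacant_flag=False):
    # frame_vacant_flag is intentionally unused: dispositions come from field outcomes, not the frame.
    if usps_category is None:
        return "3.199"  # nothing ever returned, no information about the address
    # Ambiguous undeliverable returns default to unknown eligibility (3.213) here; if the address's
    # existence alone establishes eligibility, an eligible non-contact code is more appropriate.
    return USPS_RETURN_TO_DISPOSITION.get(usps_category.upper(), "3.213")

print(mail_return_disposition("Vacant"))                          # -> 4.60
print(mail_return_disposition("Some other undeliverable label"))  # -> 3.213; refine per Table 2.2
```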
As noted previously, in surveys that use quotas, code 4.8 can be used for subgroups that are pre-
designated as ineligible for a given sample replicate owing to their quotas having already been filled. In
contrast, if a respondent would have been eligible at the time of sample release, but their response is
not accepted due to a quota being filled in the interim, the eligible nonresponse code (2.231) is more
appropriate.
A final specific type of “ineligibility” occurs in surveys of unnamed persons when the sample frame
includes duplicates, such as those using a large “mailing list” as the sampling frame. When the same unit
inadvertently appears more than once in the sampling frame, and both records are sampled, this may be
recognized only after the respondent returns mailings (e.g., when a respondent mails back a completed
questionnaire and a blank one with a note that s/he received two questionnaires). In such cases, all but
one of the returns should be coded as not eligible due to duplicate listings (4.81). Of course, researchers
should strive to eliminate duplicates from the sample frame before a sample is selected and a survey is
fielded.
Finally, additional reasons for non-eligibility can be coded under Other (4.9). In all cases, assigning a final disposition code involving ineligibility requires definite evidence of that status. When in doubt, a case
should be presumed to be eligible or possibly eligible rather than ineligible unless there is unambiguous
evidence of ineligibility.
2.2 In-person surveys of Randomly Selected Addresses (ABS)
For the language used in this section, an in-person interview is assumed to be one in which housing units
are sampled from an address-based sampling frame of some geopolitical area. The interviews are
conducted in person by a live interviewer.
Many of the classifications discussed in Section 2.1 for mailed surveys also apply to in-person interviews
conducted from an address-based sample of unnamed persons, especially those related to eligibility and
screening. However, there are also unique codes that do not apply to mailed contacts but do apply to
live in-person interviews, especially around cases of eligibility. This includes codes derived from USPS
undeliverable codes if an advance letter is sent prior to attempting in-person contact.
Eligible, No Interview (Non-response)
Several specific types of eligible nonresponse may arise with in-person data collection. Implicit refusal
occurs when the selected (and known eligible) respondent sets but does not keep an appointment with
the interviewer (2.1132).
Two types of non-contact specific to in-person collection may come from an inability to reach anyone at the housing unit (2.24). If this is specifically due to an inability to gain access to the sampled housing unit, it can be coded 2.241. The denied-access cases include guarded or restricted-access apartment buildings or homes behind locked gates. For a case to fall into this category, researchers must determine that the sample unit is an occupied unit with an eligible respondent and no contact with members of the housing unit is achievable.13 The same is true for the no-one-at-residence dispositions, in which no contact is made with a responsible household member but the presence of an eligible household member is ascertained.14
Finally, location/activity not allowing interviews (2.35) may occur more commonly for in-person than for
mailed surveys.
Unknown Eligibility, Non-Interview
As usual for surveys of unnamed persons, eligible nonresponse codes should only be used if an address
is known to contain eligible persons. An appropriate unknown eligibility code should be used if this
cannot be ascertained (e.g., because screening is required and not completed).
If an in-person interviewer cannot verify whether a housing unit exists, code 3.17 should be used. If the housing unit is known to exist based on other information, but the interviewer cannot reach it to confirm that it is occupied and eligible, code 3.18 should be used. When a housing unit is confirmed to exist and be occupied but screening is required and the interviewer is unable to screen the household to determine eligibility, a sub-code of 3.21 (3.217) can be used.
Not Eligible
On the other hand, if the interviewer can ascertain that the address does not exist at all (4.313), does
not correspond to a residence (4.50), is vacant (4.60), or does not contain an eligible person (4.70), the
appropriate ineligible sub-code should be used.
2.3 Email surveys of Randomly Selected Addresses
While perhaps less common than email surveys of named individuals, there is growing interest in matching email addresses to physical addresses for data collection purposes. In such instances, we do not
have a named person to contact but only a physical address with a potentially-associated email.
Many of the considerations above, focusing on web-push surveys, apply to randomly-selected addresses
with matched emails, but there are different considerations for determining eligibility. In particular, most refusals and non-contacts arising from email invitations are likely to be unknown eligibility, rather than eligible nonresponse, unless it was first confirmed that the email address is associated with the sampled physical address.

13 Refusal by a security guard or tenants’ council to grant access does not constitute a “refusal” since these are not representatives of the targeted housing unit. However, if a request for an interview were conveyed to a responsible household member by such an intermediary and a message of a refusal returned to the interviewer, then this should be classified as a refusal.

14 Further distinctions could be made between cases involving temporary absences (e.g., family away on vacation) and other reasons for non-contact.
Because appending contact information entails some error, it cannot be assumed that the appended
email address was correctly associated with the sampled address; therefore, the screening for
respondents recruited via email should include confirmation that they live at the sampled address.
If a respondent is recruited via email but it cannot be confirmed that the respondent lives at the sampled address, the unknown eligibility codes 3.1261 (if the respondent refused the
address confirmation items) or 3.1262 (if the respondent broke off before the address confirmation
items) should be used. If the respondent reports that they do not live at the sampled address (or it is
otherwise determined that the email address was incorrectly matched to the physical address), code
3.1263 should be used.
Concerning eligible nonresponse codes, email-specific subcodes of 2.112 (known respondent refusal) are
available but likely to be rarely used. Code 2.1121 applies to circumstances in which the respondent logs
into the web survey (implying that they received the email and clicked the link to the survey) but did not
complete any items. Code 2.1122 applies to circumstances where the researcher receives some other
indication that the email was opened (such as a read receipt), but the survey was never accessed. To be
used in an ABS study with appended email addresses, these codes require some confirmation that the
email address was associated with the sampled address (and that the address contains an eligible
respondent if screening is required), which is why they are likely to be rare.
If there is no independent confirmation that the email address is associated with the sampled address,
the unknown eligibility codes 3.14 (email invitation returned undelivered) or 3.191 (web link never
opened) are most appropriate. On the other hand, if the address has previously been confirmed to exist,
but further screening is required to determine whether it contains eligible persons, an undelivered email
should be assigned code 3.219.
2.4 Phone Surveys of Randomly-Selected Addresses
This section covers surveys based on sampling frames of unnamed households where sample members
are contacted only by phone. Surveys using random digit dial methodology (RDD) are covered separately
in Section 3. Phone surveys included here are those in which the sampling frame is address-based, the sampled persons are unnamed, and phone numbers have been appended, most likely from a vendor. The following section covers interviews conducted using text messages (SMS surveys).
As is the case with matched email addresses, eligible nonresponse codes should be used with phone
contacts in an unnamed-persons design only if it has been confirmed that the sampled address is
eligible. One phone-specific refusal scenario occurs when an eligible respondent at the address is
selected and sets an appointment to complete the interview later but does not keep the appointment
(2.1132). Phone-specific non-contact scenarios include when a phone answering device is reached
(2.22), which can be distinguished by whether a message was left (2.222) or not (2.221). Other potential
phone-specific scenarios include inadequate audio quality (2.34), location/activity not allowing an
interview (2.35), and wrong numbers (2.37). Again, these should be used only if the eligibility of the
address has already been verified and therefore are likely to be rare in ABS designs with matched phone
numbers.
Cases of unknown eligibility are likely to be more common. Subcodes of 3.12 should be used if the
phone number has not been confirmed to be associated with a sampled address containing an eligible
person. The subcodes delineate various reasons for non-contact in a phone survey, including busy
signals (3.121); no answer (3.122); answering devices (3.123); and call screening or other blocking
technologies (3.124). Various technical problems can be coded under 3.125. These include instances
where an ambiguous operator’s message does not make it clear whether the number is associated with
a household, which is particularly common with cell phone numbers (3.1251); inadequate audio quality
(3.1252); location/activity not allowing interview (3.1253); fax/data lines (3.1254); and non-working or
disconnected numbers (3.1255).
Codes 3.1261 and 3.1262 can be used when a person is reached at the number but it is not confirmed whether the number is associated with the address, due to a refusal of the address confirmation items
(3.1261) or a breakoff prior to the address confirmation (3.1262). If the respondent on the phone
reaches the address confirmation item and indicates that they do not live at the sampled address (or it is
otherwise determined that the phone number was incorrectly matched to the sampled address), code
3.1263 should be used.
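The address-confirmation logic for appended phone numbers described in this and the preceding paragraphs can be summarized in a short classification sketch. The outcome labels are hypothetical field-system statuses, not standard terminology, and the returned values are the codes discussed above.

```python
# Sketch of the address-confirmation logic for appended phone numbers; the outcome
# labels are hypothetical field-system statuses, not standard terminology.
def appended_phone_disposition(person_reached: bool, confirmation_outcome=None) -> str:
    if not person_reached:
        return "3.12x"   # one of the non-contact subcodes of 3.12 (busy, no answer, etc.)
    if confirmation_outcome == "refused_confirmation_items":
        return "3.1261"
    if confirmation_outcome == "broke_off_before_confirmation":
        return "3.1262"
    if confirmation_outcome == "not_at_sampled_address":
        return "3.1263"  # number wrongly matched to the sampled address
    if confirmation_outcome == "confirmed_at_sampled_address":
        # Address association established; screening can proceed, and eligible non-interview
        # codes (e.g., 2.11, 2.12) or interview codes become available.
        return "proceed to screening/interview"
    return "3.126"       # person reached, matched address otherwise unconfirmed

print(appended_phone_disposition(True, "refused_confirmation_items"))  # -> 3.1261
```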
Code 3.215 can be used with phone contacts when it is confirmed that the number corresponds to the sampled address but further screening necessary to determine the presence of eligible persons has not been completed. This is likely to be rare for ABS designs since confirmation of the address would
typically be a part of screening. Again, subcodes can be used to delineate the reason for the screener
non-completion, including no answer (3.2152), answering device (3.2153), call blocking technology
(3.2154), technical phone problems (3.2155), ambiguous operator’s messages (3.2156), and non-
working/disconnected numbers (3.216).
As with other modes, the appropriate ineligibility code (one of the codes listed under 4.0) should be
used only if specific information is obtained indicating that the physical address sampled for the study
(and/or the household living at that address) is ineligible. For example, in an ABS study targeting
residential addresses, a business phone number would be assigned code 4.51 only if it was confirmed
that the phone number was correctly associated with the sampled address and that address was a
business address. If the number was a business number incorrectly matched to the sampled address,
code 3.1263 is more appropriate.
This means that when dialing phone numbers matched to a sample of addresses, many codes
considered ineligible in a random digit dial (RDD) survey are, in most cases, more properly considered
unknown eligibility. As noted above, these include fax/data lines and non-working or disconnected
numbers. In the absence of additional information about the sampled address, these outcomes are
usually unknown eligibility in an ABS design because they provide information only about the quality of
the appended phone number, not the eligibility of the sampled address.
2.5 SMS (Short Message Service) or “Text Message” Surveys
An SMS survey is one in which a mode of contact is a text message sent to a mobile phone number. This
section will use the terms “text message” and “SMS” interchangeably. In an SMS survey, the text
message may serve as the survey invitation, and respondents may be asked to answer questions via
back-and-forth messaging or a link to a web survey. In the back-and-forth scenario, respondents are sent
a question via SMS, reply to the question via SMS, and are then sent the next question via SMS. This
continues until all questions have been asked.
SMS may be used as part of a design incorporating multiple contact modes. In samples of randomly
selected addresses, SMS would most likely be used as a secondary contact mode for addresses to which cell phone numbers have been appended. For such addresses, SMS might be used to send prenotification messages and/or as an additional mode of contact carrying a survey invitation to the web survey. In cases
where the SMS is used as a secondary form of contact, readers should reference the primary mode of
contact section.
In some cases, research is subject to the United States TCPA requirements, and it may be necessary to first obtain consent from respondents before sending them a text message if technology that complies with the TCPA is not used. Because of the current regulatory environment, SMS surveys of unnamed persons are not
common. However, SMS surveys to unnamed persons (and with no consent) are conducted using new
technologies believed to meet requirements for the manual sending of text messages. They are also
conducted outside the United States (where regulations differ from country to country). If regulations in
the US were to change or technologies that allowed for compliant manual dialing were widely adopted,
this methodology could expand.
If consent is required, the request for consent would usually be part of an initial screening phase in which contact is made by some other mode, such as mail. Respondents who consent to be texted could
then be contacted via SMS to complete additional questions. This approach is analogous to a two-phase
design (see discussion above); therefore, disposition codes should be assigned, and response rates
reported separately for the two phases. Dispositions for the screening phase would use the rules
described above for surveys of unnamed persons via whatever mode(s) were used for that phase;
dispositions for the SMS contacts in the second phase (and any other modes used in that phase) would
be assigned using the rules for surveys of named persons, described in Section 1.
The SMS-related codes shown in Tables 2.1, 2.2, and 2.3 apply when SMS invitations or notifications are
sent to a number appended to an address-based sample without a separate consent phase. Due to
regulatory requirements, this approach is likely to be rare in the U.S., but these codes are included in the
tables for completeness. Researchers should comply with the applicable regulatory requirements,
including the TCPA. Including disposition codes for unnamed persons (meaning that consent has not
been given) in the Standards does not reflect an AAPOR endorsement or opinion of the legality of
sending non-consented text messages.
Section 3: Phone Samples, Random-Digit Dial (RDD)
This section focuses on randomly-selected phone numbers independent of names or addresses. For the
purposes of the language used in this section, a random-digit-dial (RDD) phone survey is one in which a
random number that is the length of a phone number (e.g., 10 digits in the U.S.) is generated and then
dialed to see if the number is a working phone number associated with a household.
A common example of an RDD survey of unnamed persons would be a survey that draws two samples: one of landline phones and one of cell phones. This type of design is often referred to as a Dual-frame
RDD design (DFRDD). In many countries, a phone number is made up of four components:
+1 (country code)  222 (area code)  333 (exchange code)  4444 (line number)
In the U.S., RDD sample vendors know which area codes are affiliated with which geographies and
whether they are used to assign landline numbers, cell numbers, or both. Similarly, they also know
whether an exchange is associated with landline or cell numbers.15 This allows them to subset the list of all potential 10-digit numbers to those starting with valid area and exchange codes and then select the
last four digits randomly. In this case, the number 111-111-1111 would never be sampled since 111 is
not a valid area code, but 312-965-1234 could be sampled since 312 is a valid area code (Chicago) and 965 is a valid exchange within the 312 area code. The sample vendor would auto-generate the last four digits
(1234).
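For illustration only, the following minimal sketch (in Python) shows how this kind of last-four-digit generation might be implemented; the area code and exchange pairs shown are hypothetical placeholders rather than an actual vendor frame.

import random

# Hypothetical (area code, exchange) pairs treated as valid and in scope.
# In practice these would come from a sample vendor's frame, not a hard-coded list.
valid_prefixes = [("312", "965"), ("312", "555")]

def generate_rdd_numbers(prefixes, n, seed=None):
    """Draw n ten-digit numbers by choosing a valid prefix and appending four random digits."""
    rng = random.Random(seed)
    numbers = []
    for _ in range(n):
        area, exchange = rng.choice(prefixes)
        suffix = f"{rng.randrange(10000):04d}"  # 0000 through 9999
        numbers.append(f"{area}-{exchange}-{suffix}")
    return numbers

print(generate_rdd_numbers(valid_prefixes, 5, seed=1))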
While the above example is a common RDD sampling approach, some vendors further restrict the
sample frame before drawing a sample or may further refine the sample once it is drawn to improve
efficiency. Researchers need to describe in detail how the RDD sample is drawn. This should include
mention of whether the sample was:
1. Restricted to blocks or banks of numbers with a certain minimum number of listed phone
numbers;
2. Limited to numbers flagged as “active” or “previously active,” or employed any other activity
codes;
3. Purged of business numbers by cross-reference to databases such as the Yellow Pages;
4. Screened of non-productive numbers before the sample was released to interviewers; or
5. Modified or cleaned in any other way.
In situations in which multiple RDD frames were used (e.g., landline and cell RDD frames), researchers
must describe how each frame is constructed, how each sample is drawn, and how the frames are
blended to create a single set of results (e.g., the proportion of each frame).
The section below covers RDD phone surveys conducted via landline phones, cell (mobile) phones, or a
combination. Section 3.1 discusses RDD surveys conducted over the phone using live interviewers or
15 In the U.S., it is possible to port a number after it is assigned. This means that a number may have originally been associated
with a landline (and therefore have a landline-associated area and exchange code) but the user migrated the number to be
used for a cell phone.
interactive voice response (IVR) software. In contrast, Section 3.2 delves into disposition codes specific
to using SMS or text messaging of RDD sampled numbers.
This section does not cover surveys that are conducted via phone but use other types of sampling frames (e.g.,
registered voter files, address-based samples with phone number appends). It also does not cover non-
residential surveys.
Similar to ABS designs, an RDD phone number’s eligibility can be decomposed into two considerations:
1. Whether the phone number itself is eligible, that is, whether it is working and belongs to an individual who lives in a household (as opposed to a business, for example); and
2. Whether the person(s) reached via the phone number is eligible, that is, whether (conditional on the number being eligible) the phone number is used by at least one person in the survey’s target population.
In most cases, phone numbers for which no automated telephony signal (e.g., ‘This number is not in service.’) has been received and no human contact has been made are considered to be of unknown eligibility, since it cannot be confirmed whether the phone number is working or associated with an individual who resides within a housing unit. Even for general-population studies in which the target population consists of all adults, RDD surveys require contact with an individual or an automated telephony signal to determine whether the number is eligible, since many minors have a cell phone.
Within-Unit Selection
Researchers must consider whether within-household selection is needed for their survey. Because
RDD samples are not samples of individuals, landline RDD numbers can often reach households that
contain multiple eligible individuals. For individual-level surveys in which it is desirable to collect
information about a single person within the household, within-household selection might be
appropriate for landline samples. This may be done via a Kish selection procedure (Kish 1965), one of the
birthday methods, or another appropriate procedure. For surveys of households (i.e., where any adult
household member could reasonably answer questions about the household), within-household
selection may not be required.
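As an illustration of one common within-household selection procedure, the sketch below (Python) selects the eligible adult with the most recent birthday; the household roster structure is assumed for illustration and is not part of the Standard Definitions.

from datetime import date

def days_since_last_birthday(birthdate, today):
    """Days elapsed since the person's most recent birthday (leap-day birthdays not handled)."""
    this_year = birthdate.replace(year=today.year)
    last = this_year if this_year <= today else birthdate.replace(year=today.year - 1)
    return (today - last).days

def select_last_birthday(adults, today):
    """Select the eligible adult whose birthday occurred most recently."""
    return min(adults, key=lambda a: days_since_last_birthday(a["birthdate"], today))

# Illustrative household roster of eligible adults
household = [
    {"name": "Adult A", "birthdate": date(1980, 3, 15)},
    {"name": "Adult B", "birthdate": date(1975, 11, 2)},
]
print(select_last_birthday(household, today=date(2023, 6, 1))["name"])  # Adult A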
Because cell phones are primarily individual devices in the U.S., researchers have found within-
household selection among cell RDD samples unnecessary (Carley-Baxter, Peytchev, and Black 2010).
For surveys in which multiple household members may be selected, researchers will need to classify each phone number into predefined categories and classify each sampled individual separately. In such instances, one must distinguish between household-level and member-level response, with the household considered as responding if “any” member participates.
The researcher should check landline and cell phone frame coverage in the target geographic area and
design the sampling approach accordingly. Design decisions include, but are not limited to, whether to
use a dual frame (i.e., both landline and cell samples), what proportion of the sample should be
achieved from each frame, and sampling adjustments to address the risk of over or under-coverage
within the frame(s). The risk of coverage error is significant for cell phone samples, given the possibility
that the individual tied to the sampled number does not reside in the target geography, creating over-
coverage. Conversely, an individual who does live in the sampled geography may have a cell phone that
would not be included in the frame because the area code, billing zip code, or some other feature falls
outside the target geography. This type of exclusion creates under-coverage.
Dual-frame (DFRDD) Samples
Dual-frame RDD (DFRDD) samples have become a widely used form of RDD sampling. This entails combining interviews achieved from a sample of landline phone numbers with those from a sample of cell phone numbers (without screening either frame for phone service usage) to provide nearly complete coverage of all U.S. households.16 Researchers should compute two response rates for DFRDD surveys, one for the landline sample and one for the cell phone sample. Reporting these two rates is optional, but doing so provides the ability to compare outcome rates across dual-frame surveys. Regardless of whether the researcher chooses to report the individual frame rates, they must report an overall rate. This can be calculated as the weighted average of the two rates based on the proportion of the sample in each frame compared to the total sample. Those formulas and an example of how to apply them are delineated below.
Combining dual-frame samples to estimate population characteristics presents many post-data
collection challenges (Carley-Baxter, Peytchev, and Black, 2010). Calculating single-sample and overall
outcome rates from such endeavors also can be daunting. AAPOR recommends using rates computed to
account for differential outcomes, such as refusal rates, from the screening process and the actual
survey of the intended respondent. This step should be done before calculating overall outcome rates
for the combined sample. This can be done using modified outcome rate formulas for different
eligibility levels during screening and survey administration.
Until additional research is done examining different methods of calculating outcome rates, AAPOR
recommends using the method in the section dealing with outcome rates for RDD samples for
computing outcome rates for dual-frame samples. Before applying that formula, one should calculate
rates that take into account nonresponse during the screening process using the method below. AAPOR
also encourages survey practitioners to carry out and share these comparisons in the spirit of
scholarship and transparency.
Example: The example below17 can be used to calculate AAPOR RR3 for dual-frame samples when one (or both) of the samples have interviews completed using a screener. Other outcome rates (i.e., cooperation, refusal, and contact rates) can use the same formula example.
The following formulae are equivalent to AAPOR RR3 for the landline and cell phone samples:

RR3_LL = I_LL / [(I_LL + P_LL) + (R_LL + NC_LL + O_LL) + e1*UO_LL + e1*e2*UH_LL]

RR3_CP = I_CP / [(I_CP + P_CP) + (R_CP + NC_CP + O_CP) + e1*UO_CP + e1*e2*UH_CP]

16 This is based on the most recent data from the National Health Interview Survey. However, AAPOR advises that any parameter estimates of phone service usage in the U.S. not based on the Decennial Census or the American Community Survey be used with caution. https://www.cdc.gov/nchs/data/nhis/earlyrelease/wireless202111.pdf
17 This example is derived from Ezzati-Rice, Frankel, Hoaglin, Loft, Coronado, and Wright’s (2000) CASRO version of the rate utilized in the U.S. National Immunization Survey.
Where e1 = Estimated Percentage of Screener Eligibility (i.e., the proportion of known households without a completed screener estimated to have an eligible respondent residing there) and e2 = Estimated Percentage of Household Eligibility (i.e., the proportion of phone numbers that are estimated to be households). This is why cases that are not known to be households (Unknown Households, UH) are multiplied by both factors, while cases of unknown eligibility (Unknown Other, UO) are multiplied only by e1.
In short, e2 is for all known units (i.e., all known households / [all known households + all known non-households]), and e1 is for all known households whose eligibility status at the household level is known (all known households eligible to do the full survey / [all known households eligible to do the full survey plus all known households not eligible to do the survey]). A basic question for DFRDD surveys is estimating these eligibility rates for cases of unknown eligibility, or “e.” We note elsewhere in this document that e-rates may consist of separate estimates for sub-components of a survey. This would typically be the case for DFRDD surveys. Cell phone samples usually are used to reach a specific person (the one who uses the phone), whereas landline samples usually are used to reach households from which a “designated” respondent is then selected. In a typical adult sample of those aged 18 and older, the cell sample will have to screen whether the cell phone answerer is 18 or older, while no age screening usually is needed for landline/household samples since almost all contain someone age 18 or older. Other operational differences between cell and landline samples also contribute to the likely necessity of calculating separate e-rates. In calculating e-rates, “one must be guided by the best available scientific information on what share eligible cases make up among the unknown cases, and one must not select a proportion to boost the response rate.” See the AAPOR document on calculating e-rates for more information.
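For illustration, a minimal Python sketch of the screener-adjusted RR3 described above, using purely hypothetical counts; the variable names follow the terms in the formulas above and are not part of the Standard Definitions.

def rr3_screener(I, P, R, NC, O, UO, UH, e1, e2):
    """Screener-adjusted AAPOR RR3: completes over all cases estimated to be eligible.
    UO (known household, eligibility unknown) is weighted by e1;
    UH (unknown whether a household) is weighted by e1 * e2."""
    denominator = (I + P) + (R + NC + O) + e1 * UO + e1 * e2 * UH
    return I / denominator

# Hypothetical counts for a single frame (not real data)
print(round(rr3_screener(I=400, P=50, R=300, NC=250, O=50, UO=200, UH=1000, e1=0.6, e2=0.5), 3))  # 0.272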
The following formula should be used to calculate response rates for dual-frame surveys:
Combined response rate = [(RR_LL*K_LL) + (RR_CP*(100-K_LL))]/100
Where RR_LL is the landline response rate, K_LL is the percentage of the total number of completed interviews that came from the landline frame, and RR_CP is the cell phone response rate.
For example, if 60% of the completed interviews were dialed on landlines with a response rate of 22%,
and 40% of completed interviews were dialed on cell phones with a response rate of 18%, then the
weighted average will be [(22*60) + (18*40)]/100 = [1320+720]/100 = 20.4%. This would be a
combined AAPOR RR3 following the AAPOR convention.
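The same arithmetic can be expressed as a short Python sketch; the numbers simply repeat the hypothetical example above.

def combined_rate(rr_ll, rr_cp, k_ll):
    """Weighted combination of frame-level rates, with rates and k_ll expressed as percentages.
    k_ll is the percentage of completed interviews that came from the landline frame."""
    return (rr_ll * k_ll + rr_cp * (100 - k_ll)) / 100

print(combined_rate(22, 18, 60))  # 20.4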
Table of disposition codes
Tables 3.1, 3.2, and 3.3 provide eligible nonresponse, unknown eligibility, and ineligible codes
(respectively) that are applicable when sampling randomly selected phone numbers. As in earlier
sections, a single asterisk identifies a new disposition code; a disposition changed from the prior version
of the AAPOR Standard Definitions is indicated by two asterisks. Please refer to the introduction of this
report for a discussion of general principles related to identifying (fully or partially) completed surveys,
which apply regardless of frame.
Since the phone is usually the primary mode of contact when sampling random phone numbers, the
definition section begins with dispositions that apply to surveys conducted via phone (via an interviewer
or IVR), most of which are also applicable when using SMS. Additional subsections then provide more
information specific to interviews conducted by SMS.
Table 3.1. Valid Eligible, No Interview (non-response) Dispositions for RDD Samples
Description
Value
Notes & Examples
Eligible, Non-response
2.0
To use any of these codes, the sampled phone number must have been
confirmed to be associated with an occupied residence (landline) or
with a person who lives in a household (cell). If further screening for
eligibility is required, confirmation that the phone number is associated
with at least one eligible person must be determined.
Refusal and break-off
2.10
Some contact has been made with the individuals associated with the
phone number, and they refuse or break-off.
Refusal
2.11
Household-level (or proxy) Refusal
2.111
A member of the household of the selected sample member has
declined to do the interview for the entire household.
Another individual from named entity explicitly refuses to allow
participation
No screening or confirmed eligibility required
Parent or guardian Refusal
2.1111*
The parent or guardian of selected minor respondent refuses to allow
participation
Known respondent refusal
2.112
Selected respondent or entity directly refuses to participate
Other implicit respondent refusal
2.113
Selected respondent (known to be
eligible) set appointment but did
not keep it
2.1132*
Selected respondent (known to be
eligible) opted out of SMS
communication (SMS Only)
2.1133*
Break-off
2.12
The selected respondent began the interview but terminated it before
completing enough of it to be considered a partial complete (see
Introduction for guidance on classification of partial interviews).
Non-contact
2.20
Selected respondent unavailable
2.21
Household is confirmed as eligible but selected respondent never
available or unable to complete during the field period.
Phone answering device
2.22
No contact has been made with a human, but a phone answering
device (e.g., voicemail or answering machine) is reached that includes a
message confirming it is the number for the selected sample member.
This code is only used if all sample members are eligible (i.e., no
additional screening is necessary).
Example: “You have reached John Smith. Please leave a message”.
No Message left
2.221
The interviewer left a message, alerting the household that it was
sampled for a survey, that an interviewer will call back, or with
instructions on how a respondent could call back.
Message left
2.222
Other non-contact
2.23
Quota filled (in released replicate)
2.231*
Other
2.30
Selected respondent died before
completing survey
2.31
Must be able to determine that selected respondent was eligible on
the survey status date and died subsequently
Physically or mentally
unable/incompetent
2.32
The selected respondent's physical and/or mental status makes them
unable to do an interview. This includes both permanent conditions
(e.g., senility) and temporary conditions (e.g., pneumonia) that
prevailed whenever attempts were made to conduct an interview.
With a temporary condition, the respondent could be interviewed if re-
contacted later in the field period.
Language or Technical Barrier
2.33
No one in the household speaks a
language in which the interview is
offered
2.331
No one in the household speaks a language in which the interview is
offered (no screening required)
The selected respondent does not
speak a language in which the
interview is offered
2.332
The selected respondent does not speak a language in which the
interview is offered (no screening or respondent eligibility confirmed).
No available interviewer with
appropriate language skills at the
time of contact/Wrong language
questionnaire sent
2.333
The language spoken in the household or by the respondent is offered,
but an interviewer with appropriate language skills cannot be assigned
to the household/respondent at the time of contact (no screening or
respondent eligibility confirmed).
Inadequate audio quality
2.34
Location/Activity not allowing
interview
2.35
Example: cell phone reached while person is driving (no screening
required or eligibility confirmed)
Someone other than respondent
completes questionnaire or
interview
2.36
Eligibility status of actual respondent must be known
Someone other than respondent
completes questionnaire or
interview - Full questionnaire
completed
2.361
Someone other than respondent
completes questionnaire or
interview - Partial questionnaire
completed
2.362
Miscellaneous (eligibility
confirmed)
2.90
Examples: vows of silence, lost records, faked cases invalidated later on
Table 3.2. Valid Unknown Eligibility, Non-Interview Dispositions for RDD Samples
Description
Value
Notes & Examples
Unknown Eligibility, Non-Interview
3.0
Unknown if housing unit
3.10
There is insufficient information to determine whether the phone
number is associated with a housing unit.
Not attempted or worked
3.11
- The phone number is in an assigned replicate but was never dialed.
Note, all cases in unassigned replicates (i.e., replicates in which no
contact has been attempted for any case in the replicate) should be
considered ineligible (Section 4), but once interviewers attempt to
contact any number in a given replicate, all cases in the replicate have
to be individually accounted for.
Unreachable, unknown if
phone/email connects to sampled
address/residence, no other
information about housing unit
available
3.12
Unreachable, unknown if working residential number
Always busy
3.121**
No answer
3.122**
Answering device
3.123**
- The phone number connected to an answering device (e.g., voicemail
or answering machine), but the automated message did not
conclusively indicate whether the number is for a residential
household.
Example: You have reached Jane Doe. I am not available to answer the
phone right now. Please leave a message.
Telecommunication technological
barriers, e.g., call-blocking (no
indication if phone connects to
residence)
3.124**
Call-screening, call-blocking, or other telecommunication technologies
that create barriers to getting through to a number
Technical phone problems
3.125**
Examples: phone circuit overloads, bad phone lines, phone company
equipment switching problems, phone out of range (AAPOR Cell Phone
Task Force, 2008 & 2010b; Callegaro et al., 2007).
Ambiguous operator’s message
3.1251**
An ambiguous operator’s message does not make clear whether the
number is associated with a household. This problem is more common
with cell phone numbers since there are both a wide variety of
company-specific codes used and these codes are often unclear
(AAPOR Cell Phone Task Force, 2010b).
Inadequate audio quality
3.1252*
Location/Activity not allowing
interview
3.1253*
Example: cell phone reached while person is driving
SMS Text undeliverable
3.13*
This is unknown eligibility if the number is known to be in service but
unable to be texted (e.g., attempted to text a landline). If the number
is out of service, it would receive a 4.X code.
Carrier blocked SMS message
(known working number)
3.131*
SMS Message failed to send (known
working number)
3.132*
Device does not support SMS
messages (known working number)
3.133*
Device unreachable by SMS (known
working number)
3.134*
Device powered off (known
working number)
3.135*
Unknown SMS error (known
working number)
3.136*
Nothing ever returned/no
information about address (SMS)
3.19
Web link never opened (SMS)
3.191*
Applicable in situations where SMS is used to send a web link to which
the respondent should click and complete the survey
No reply received (SMS)
3.192*
Applicable in situations where SMS is used to send survey questions
and receive responses via SMS
Household exists; unknown if
eligible respondent
3.20
There is sufficient information to determine that the phone number is associated with a housing unit/individual, but insufficient information to determine whether the housing unit or individual is eligible.
No screener completed
3.21
For non-general population survey in which a screening interview is
required to determine eligibility.
Even if the failure to complete the screener were the result of a “refusal,” it would be classified here unless the existence of an eligible respondent were known or could be inferred.
Screener refused
3.211
Phone number working and
connected to household, screener
required but not completed
3.215**
Phone number confirmed eligible, screener required but not
completed.
Note: These codes are unlikely since housing unit eligibility must be
confirmed
No answer
3.2152**
Phone answering device
3.2153**
Phone answering device (household confirmed, screener required)
The phone number connected to an answering device (e.g., voicemail
or answering machine), but the automated message did not
conclusively indicate whether the number is for the specifically named
individual or household.
Telecommunication technological
barriers, e.g., call-blocking
3.2154**
Telecommunication technological barriers, e.g., call-blocking
(household confirmed, screener required)
Call-screening, call-blocking, or other telecommunication technologies
that create barriers to getting through to a number
Technical phone problems
3.2155**
Examples: phone circuit overloads, bad phone lines, phone company
equipment switching problems, phone out of range (AAPOR Cell Phone
Task Force, 2008 & 2010b; Callegaro et al., 2007).
Ambiguous operator’s message
3.2156**
An ambiguous operator’s message does not make clear whether the
number is associated with a household. This problem is more common
with cell phone numbers since there are both a wide variety of
company-specific codes used and these codes are often unclear
(AAPOR Cell Phone Task Force, 2010b).
Other unknown eligibility
3.90
This should only be used for highly unusual cases in which the eligibility
of the number is undetermined and which does not clearly fit into one
of the above designations.
Example: High levels of item nonresponse in the screening interview
prevents eligibility determination.
Table 3.3. Valid Not Eligible Dispositions for RDD Samples
Description
Value
Notes & Examples
Sample Unit Not Eligible
4.0
Selected Respondent Screened Out of
Sample/ Ineligible
4.10
Households outside the sampling area’s geographical boundary. This
often happens when using RDD to sample relatively small areas (e.g.,
counties, towns) or when sampling a cell number when the owner has
relocated their residency to a new geographic area.
Fax/data line
4.20
Non-working/disconnected number
4.30
Non-working number
4.31
(SMS) SMS bounceback due to non-
working/not in service number
4.311*
Disconnected number
4.32
Temporarily out of service
4.33
Special technological circumstances
4.40
Number changed
4.41
Call forwarding
4.43
Forwarded: residence to residence
4.431
Forwarded: Nonresidence to
residence
4.432
Pagers
4.44
Cell phone
4.45
This code is limited to use among landline-only RDD samples in which
the interviewer encountered a cell phone number. It is not used for
dual-frame (landline/cell) RDD.
Landline phone
4.46
This code is limited to use among cell-only RDD samples in which the
interviewer encountered a landline phone number. It is not used for
dual-frame (landline/cell) RDD.
Not a household residence
4.50
Sampled phone number is not a within-scope residence
Business, government office, other
organization
4.51
Only those numbers that are solely business numbers belong in this
category. A number linked to both a household and business should be
considered eligible and be coded elsewhere.
Institution
4.52
Group quarters
4.53
Phone reached not household
resident (cell phone or SMS)
4.54
This code only applies when sampling multiple individuals associated
with a cell phone number and one (or more) of the individuals lives in a
different household.
No eligible respondent in household
4.70
Phone respondent completes screener and is not eligible and/or no
eligible respondents in household.
Quota filled (in unreleased sample
replicate)
4.80
Duplicate listing
4.81
Other
4.90
*New disposition code
**Updated disposition code
3.1 RDD Phone Surveys
As previously mentioned, most RDD surveys are conducted via phone. In this scenario, an interviewer
may dial the sampled number and attempt to conduct the interview. Alternatively, the number may be auto-dialed and an interview attempted through IVR. In either case, the potential outcome codes are similar.
Eligible, No Interview (Non-response)
Eligible cases for which no completion is obtained consist of three types of nonresponse: a) refusals and
break-offs (2.1), b) non-contacts (2.2), and c) others (2.3 and 2.9). To be considered in this set of codes,
the phone number must be working, associated with a household, and associated with at least one
eligible person. If eligibility is unknown and cannot be assumed, please use the codes in Table 3.2.
Refusals and break-offs include cases in which some contact has been made with an eligible phone
number, and someone at that number has communicated that the survey will not be completed (2.11)
or the selected respondent stopped the interview with too few items completed to be treated as a
partial interview (2.12).
Refusal codes distinguish between household-level (or proxy) refusals (2.111) and those where the
respondent refused (2.112). Household-level or proxy refusals (2.111) occur when the researcher knows
that the household contains eligible persons, but the refusal comes from someone other than a
specifically-selected respondent; for surveys of minors, refusal by a parent or guardian represents a
special case of this situation (2.1111). Known respondent refusals (2.112) occur when a specific person
at the address has been selected as the designated respondent and refuses to participate.
In RDD phone surveys, a selected individual can set an appointment for a later date but fail to be
available at the set time. This should be treated as an “implicit refusal” (2.1132).
Known non-contacts (2.2) in RDD phone surveys include cases in which researchers receive notification
that the eligible respondent is unavailable to complete the questionnaire during the field period (2.21).
Alternatively, the call may have connected to an answering device that provides enough information to
deduce eligibility (e.g., ‘You have reached the Smith residence.’) (2.22) but for which a human has yet to
be reached. This non-contact type may be further subdivided based on whether a message was left
(2.222) or not (2.221).
A related situation occurs in surveys that employ quotas when completed questionnaires are not
treated as part of the final dataset because the quota for their subgroup has already been filled (2.231).
Code 2.231 should be used when a unit meets the sample’s eligibility criteria and would have been included in the final dataset had it responded earlier, before the quota was met. Applying a
quota this way is akin to ending the field period early for subgroups whose quota has been filled. This
differs from a situation in which a sample replicate is released only to accept responses from particular
subgroups to meet quotas for those subgroups. In such situations, respondents from that replicate who
are outside of the target subgroups(s) for the replicate would be assigned code 4.80 because they do
not meet the eligibility criteria for the replicate for which they were sampled. The guiding principle
when applying quotas is that eligibility criteria must be established when a unit is sampled and should
not change based on how long it takes a unit to respond. Otherwise-eligible units excluded from the
final dataset solely because of a late response (whether “late” means after the end of the field period or
after a quota was filled) are properly coded as eligible nonrespondents, not ineligible cases. For
example, suppose a survey set separate quotas for Black and Hispanic respondents. If the survey used
only one sample release and stopped accepting responses from Hispanic respondents after their quota
was met, any Hispanic responses after this point would be assigned code 2.231 because they were
eligible at the time of sampling. In contrast, if the survey met the Hispanic quota but not the Black quota
in the first sample release and released a second replicate for which only Black respondents were
eligible, Hispanic respondents to the second replicate would be assigned code 4.80. In all cases, what
the quotas are and how they are to be filled must be clearly defined, and whether survey responses
received after quotas have been met are accepted and included in the final data set should be clarified
in survey documentation.
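For illustration, the decision rule described above can be summarized in a small sketch (Python); the function and its inputs are illustrative and not part of the Standard Definitions.

def quota_disposition(subgroup_eligible_when_replicate_released):
    """Disposition for an otherwise-complete response excluded because of a quota.
    2.231: the subgroup was eligible when its replicate was released; the quota filled later.
    4.80:  the replicate was released only for other subgroups, so the case never met
           that replicate's eligibility criteria."""
    return "2.231" if subgroup_eligible_when_replicate_released else "4.80"

# Hispanic respondent in the single-release example above -> 2.231
# Hispanic respondent to a second replicate released only for Black respondents -> 4.80
print(quota_disposition(True), quota_disposition(False))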
Other cases (2.3) represent instances in which the respondent within the household is selected and
eligible and does not refuse to complete the questionnaire, but no completion is obtainable because of:
a) death (2.31); b) physical or mental limitations that prevent completion (2.32); c) language (2.33); d) poor
audio quality (2.34); e) location/activity not allowing interview (2.35); or f) someone other than the
designated respondent completes all or some of the questionnaire (2.36).
In RDD phone surveys, death constitutes an eligible nonresponse if a respondent at the sampled number
had previously been confirmed eligible but dies before the full questionnaire is completed, which is
likely rare. Whether a deceased sample member is an eligible nonresponse or an ineligible respondent
depends on fieldwork timing. Surveys must define a date on which eligibility status is determined. This
would usually be either the first day of the field period or the first day a particular number was dialed.
Thus, for example, if a person were alive and selected as the respondent on this status date but died
before a questionnaire was completed, the case would be classified as a nonresponse due to death
(2.31). However, in some cases, the researchers may choose to re-approach the sampled unit to
determine if a newly-eligible respondent can complete the questionnaire. For example, in a survey that any responsible household member is asked to complete on behalf of the household, if one responsible household member who was alive at the time the household was first contacted dies during the field period, a different household member could become the eligible respondent for the sampled
household. If this is done, the outcome of the case would be determined by what happens during the
effort to gain cooperation from a newly-eligible respondent. Similar time rules would apply to other
statuses.
Selected eligible respondents who are physically or mentally unable to complete the questionnaire
(2.32) would include both permanent conditions (e.g., senility, deafness) and temporary conditions (e.g.,
pneumonia or drunkenness) that prevailed throughout the field period. With a temporary condition, it is
possible that the respondent could/would complete the questionnaire if recontacted later in the field
period or if the field period were later extended. But again, physical or mental barriers may cause the
original eligible respondent to no longer be eligible. In these instances, researchers could choose to re-
approach the sampled unit and try to gain cooperation from the newly-eligible person or to gain an
interview from a proxy respondent who would answer on behalf of the incapacitated respondent. If this
is done, the outcome of the case would be determined by what happens during the subsequent effort.
Language problems (2.33) include cases in which no one reachable at the sampled number speaks a
language in which the interview is offered (2.331) or the specific designated respondent does not speak
this language (2.332). It also would include instances (2.333) in which the interview is offered in a language the eligible respondent can speak, but that language was not made available to the respondent (e.g., an interviewer who spoke the language was unavailable at the time of contact). In contrast, poor audio quality (2.34) would apply to cases with a bad connection or consistently dropped calls.
Location/activity not allowing an interview (2.35) mostly applies to cell phone sample where the
respondent is driving. However, other situations exist, such as a natural disaster that disrupts phone
towers in a particular area during the survey’s field period.
When the sample design requires the designation of a specific respondent per sampled number, and the
researcher learns that someone other than the designated respondent (or a qualified proxy, if proxy
responses are permitted) completed the questionnaire, the unit should be classified as an eligible
nonresponse (2.36). Distinctions between full (2.361) and partial (2.362) completions can be made.
Again, in this scenario, the researcher could choose to re-approach the sampled unit to gain cooperation
from the correct person. In this case, what happens during that subsequent effort would determine the
final outcome.
The miscellaneous designation (2.90) would include cases involving some combination of other reasons
(2.30) or special circumstances (e.g., vows of silence).
Unknown Eligibility, Non-Interview
Cases of unknown eligibility (3.0 and following) include situations in which nothing is known about
whether a phone number is working or associated with a household (3.1); situations in which the
number is known to be working and associated with a household but it is unknown if any eligible person
is associated with the number (3.2); and other situations (3.9).
The unknown household subset of codes (3.10) for RDD phone surveys is used when nothing is known
about whether a phone number is working and associated with a household. A number may be an
unknown household because it was never dialed (3.11). Note that only undialed numbers in released
replicates should be coded as 3.11. If the entire replicate was not dialed, all numbers in that replicate
should be coded as ineligible for other reasons (4.90).
A researcher may also not know if a number is associated with a household if it is unreachable (3.12).
This could be due to a number a) being busy (3.121); b) ringing without being answered (3.122); c) going
to an answering device for which household status cannot be determined (3.123); or d) having a call
blocker associated with it (3.124). Technical issues (3.125), such as ambiguous operator messages
(3.1251), inadequate audio quality (3.1252), and location/activity limitations (3.1253), may also prevent
researchers from being able to tell whether a number is associated with a household.
Eligibility may still be unknown even for numbers for which household status has been determined. This is the case for any survey that requires screening and for which the screener has not been completed (3.21). Unknown eligibility is more common in cell samples than landline samples because more screening questions are typically required for cell samples to determine whether a cell number is within the target geography and whether an adult uses the cell number. Incomplete screeners can be further subdivided: screening for eligibility may be prevented by an individual who refuses to complete the screener (3.211) or by various reasons for non-contact such as a) ringing without being answered (3.2152); b) going to an answering device for which eligibility cannot be determined (3.2153); c) a call blocker (3.2154); d) technical issues (3.2155); or e) ambiguous operator messages (3.2156).
Note that several of the subcodes associated with unknown eligibility (3.20) appear similar to those
associated with unknown households (3.10) and those associated with eligible non-interviews (2.0). The
difference is a matter of how much information is available. For example, suppose a phone call reaches a phone answering device. If no information as to household status is available (e.g., ‘The person you are trying to reach is not available. Please leave a message.’), the number would be coded as an unknown household (3.123). However, if the message says, ‘You have reached the Smith residence’, one has
enough information to know that the number is associated with a household. How to code the case
would be determined by whether screening was required or had been completed on a previous call. If
no additional information were necessary to determine eligibility, the number would be coded as an
eligible non-interview (2.22). If additional screening was required, it would be coded as an unknown
eligible (3.2153).
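For illustration, the answering-device logic above can be written as a small decision sketch (Python); the inputs are illustrative and not part of the Standard Definitions.

def answering_device_code(household_confirmed, screening_required, screener_complete):
    """Assign a disposition when a call reaches an answering device.
    3.123:  message gives no indication the number belongs to a household.
    2.22:   household confirmed and no further screening is needed (eligible non-contact).
    3.2153: household confirmed but a required screener has not been completed."""
    if not household_confirmed:
        return "3.123"
    if not screening_required or screener_complete:
        return "2.22"
    return "3.2153"

print(answering_device_code(False, True, False))  # 3.123
print(answering_device_code(True, False, False))  # 2.22
print(answering_device_code(True, True, False))   # 3.2153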
The miscellaneous unknown eligibility code (3.90) should be used only for unusual situations in which it cannot be determined whether a phone number is associated with eligible persons and the case does not fit into any of the above categories. An example would be if an individual completed the screening survey but refused to answer several questions required to determine eligibility.
Not Eligible
Code 4.10 should be used for RDD phone surveys when the sampled number is determined to be
outside the sampled geography. This often occurs when sampling small geographies for which area code
and survey geographies do not perfectly overlap. It is also common with cell numbers since individuals may retain their number but move outside of the survey geography.
Because RDD samples are random strings of numbers, several numbers will be determined to be
nonworking (4.31), disconnected (4.32), or temporarily out of service (4.33). Researchers should
consider when eligibility is determined. Some researchers may determine eligibility based on the date
the number was first released. In this case, a number out of service (temporarily or otherwise) would be
considered a final code. In other circumstances, the researcher may include any phone numbers
associated with a household at any point in the field period. In this case, numbers that are temporarily
out of service may be redialed at a later point in the field period. Their final disposition would depend on
the later contact attempts.
Other special technological circumstances (4.40) may also result in an ineligible number. These include numbers that have been changed (4.41), forwarded from a residence to another residence (4.431), or
forwarded from a nonresidence to a residence (4.432). Changed numbers and those that have been
forwarded are considered out of scope. Attempts should not be made to dial the new number or
interview individuals at the forwarding number. Doing so will change the sampling probabilities and
make it difficult to create sampling weights. Other instances of special technological circumstances
include numbers that are linked to a pager (4.44) or determined to belong to a device that is
inconsistent with the sampling frame (a cell phone in a landline frame (4.45) or a landline in a cell frame
(4.46)).
Similarly, a number may be associated with a fax or data line (4.20). Numbers should only be considered
ineligible if the line is only used for this purpose. In these situations, the number is ineligible regardless
of whether it is found to be in a household.
A number may be working but may connect to a nonresidence (4.50), such as a business, office, or other
organization (4.51), institution (4.52), or group quarters (4.53). Numbers should only be considered
ineligible and coded in this section if the number is solely a nonresidence and assuming that these types
of nonresidences (e.g., group quarters) are out of scope for the survey.
In some situations, multiple individuals who live in different households may share a cell phone. In the
rare instance that a researcher attempts to interview multiple individuals at a given cell phone number
and a given person does not reside in the same household, the researcher will need to determine which
household should be considered eligible. The phone number should be considered eligible, and any
sampled individuals associated with the phone number and chosen household should also be eligible.
However, any individuals associated with the phone number who do not live in the chosen household should be considered ineligible (4.54).
Another common situation that makes a phone number ineligible is when no eligible individual is
associated with that number (4.70).
As noted previously, in surveys that use quotas, code 4.80 can be used for subgroups that are pre-
designated as ineligible for a given sample replicate owing to their quotas having already been filled. In
contrast, if a respondent would have been eligible at the time of sample release, but their response is
not accepted due to a quota being filled in the interim, the eligible nonresponse code (2.231) is more
appropriate.
Finally, additional reasons for non-eligibility can be coded under Other (4.90). In all cases regarding final
disposition codes concerned with ineligibility, definite evidence of the status is needed. When in doubt,
a case should be presumed to be eligible or possibly eligible rather than ineligible unless there is
unambiguous evidence of ineligibility.
3.2 SMS/Text Messaging
While most RDD surveys are contacted by dialing the sampled number, it is possible to attempt contact
by sending an SMS or text message (these terms will be used interchangeably in this section). SMS may be used to send pre-notification messages for a phone survey, or the RDD frame may be used to generate a list of randomly selected cell phone numbers to which an SMS survey invitation is then sent. When sending the survey
invitation, respondents may be asked to answer questions via back-and-forth messaging or a link to a
web survey. In the back-and-forth scenario, respondents are sent a question via SMS, reply to the
question via SMS, and are then sent the next question via SMS. This continues until all questions have
been asked.
Survey research in the United States is subject to TCPA requirements. It may be necessary to obtain consent from respondents before sending them a text message if technology that complies with the TCPA is not used.
Consent would usually be requested as part of an initial screening phase in which contact is made by some other mode, such as phone. Respondents who consent to be texted could then be contacted via SMS
to complete additional questions. This approach is a two-phase design (a survey with screening and main interview phases); therefore, disposition codes should be assigned and response rates reported
separately for the two phases. Dispositions for the screening phase would use the rules described above
for surveys of unnamed persons via whatever mode(s) were used for that phase; dispositions for the
SMS contacts in the second phase (and any other modes used in that phase) would be assigned using
the rules for surveys of named persons, described in Section 1.
The codes described in the remainder of this section apply when SMS invitations or notifications are
sent to a sample from an RDD frame without a separate consent phase. Researchers should comply with
the applicable regulatory requirements, including the TCPA. The inclusion of disposition codes in this
guide does not reflect an AAPOR endorsement or opinion of the legality of sending non-consented text
messages.
Because outcome codes are determined based on the frame, not the mode, researchers should still use
the codes in this section for SMS surveys that use an RDD frame. This statement holds even if the SMS
included a link to have the individual complete the survey online. SMS technology platforms can provide
the disposition of each message sent, although the available disposition information can vary greatly by
provider. There are unique considerations with SMS surveys, as respondents may stop responding to
questions before all questions have been sent. Unlike a push-to-web survey, where the respondent has
access to all questions at the same time, sending questions via SMS may present unique scenarios due
to possible time delays between each distribution. For example, when some questions are sent, the
mobile phone number may be in working order, and both the sender and receiver receive the messages. However, depending on the length of the field period and the time it takes the respondent to reply to the message, the respondent's number may be disconnected before later messages are sent.
invitation with a push to a web survey, the final disposition is typically the most advanced outcome
achieved. For back-and-forth messaging, each message sent will have a disposition, and temporary and
final disposition codes can be assigned based on the series of messages (see the section on temporary
and final disposition codes for more information).
Usually, codes used for RDD phone surveys apply to RDD SMS surveys. However, some situations are
unique to RDD surveys conducted via SMS. For example, a respondent may terminate a survey they
started via SMS back-and-forth in the same manner that they may hang up on an interviewer. In both
situations, these may be considered breakoffs (2.12). The following text is limited to these special
circumstances and codes.
Eligible, No Interview (Non-response)
Eligible cases for which no interview is obtained consist of three types of non-response: a) refusals and
break-offs (2.10), b) non-contacts (2.20), and c) others (2.30). However, note that to be considered in one of these categories, cases must first have been determined to be eligible.
In the case of SMS, a refusal may come as a request to opt out of future messages (2.1133). Most text
messages include an option for the respondent to text “STOP” (or some equivalent phrase), which opts
them out of future messages.
Unknown Eligibility, Non-Interview
For SMS RDD surveys, text messages may be sent, but the researcher never receives a response (3.19). These cases may be further subdivided depending on the design. If the SMS was used to send a link to a web survey and the link was never opened, code 3.191 applies; if questions were sent for back-and-forth SMS response and no reply was ever received, code 3.192 applies.
Messages may also go undelivered. The reason that the message is undeliverable will determine
whether the number should be categorized as not eligible (4.311) or unknown eligibility (3.13). Only
messages which imply that the number is working but cannot receive text messages should be
categorized as unknown eligibility. Reasons for an undeliverable message include cases for which a
message was blocked by the carrier (3.131), failed to send (3.132), reached a device that does not
support SMS (3.133), sent to an unreachable device (3.134), sent to a device that is powered off (3.135),
or undeliverable for unknown reasons (3.136). The SMS provider may use categories slightly different
from the specific examples provided here but should be able to provide details to classify why the
message could not be delivered.
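For illustration, a minimal Python mapping of provider-reported delivery statuses to the codes above; the status strings are hypothetical, since each SMS provider reports delivery outcomes differently.

# Hypothetical provider status strings mapped to the disposition codes discussed above.
SMS_DELIVERY_CODES = {
    "carrier_blocked": "3.131",        # blocked by carrier, number known to be working
    "send_failed": "3.132",            # message failed to send
    "device_no_sms": "3.133",          # device does not support SMS
    "device_unreachable": "3.134",     # device unreachable by SMS
    "device_powered_off": "3.135",     # device powered off
    "unknown_error": "3.136",          # undeliverable for unknown reasons
    "number_not_in_service": "4.311",  # bounceback: non-working / not in service
}

def sms_disposition(provider_status):
    """Return the disposition code for an undeliverable SMS, defaulting to unknown SMS error."""
    return SMS_DELIVERY_CODES.get(provider_status, "3.136")

print(sms_disposition("carrier_blocked"))  # 3.131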
Not Eligible
As previously mentioned, SMS messages may bounce back because the number does not work and is
not in service. If this is the case, the number should be coded as ineligible (4.311).
Section 4: Online Panel Surveys
In recent years, the survey research industry has seen a rise in the availability and utilization of samples
or panels of respondents maintained by companies for research purposes. It is important to distinguish
between online panels with participants recruited through probability sampling and opt-in or access
panels (see AAPOR, 2010a) or unrestricted self-selected surveys (for a review, see Couper, 2000), which
do not involve probability sampling. As with other frames, online samples vary greatly in the populations
they cover and the nature and quality of the sample frames.
In this section, we mainly focus on online panels recruited using probability sampling methods, as many
of the standard definitions are not applicable or even calculable for non-probability or opt-in samples.
However, we provide guidance about reporting data collected using non-probability online samples. For
a comprehensive discussion of the computation of response rates in probability-based panels and other
data quality metrics available to researchers using both probability- and nonprobability-based panels,
see the forthcoming AAPOR Task Force Report on Assessing Data Quality in Online Panels (McPhee et al.
2022).
Probability-Based Internet Panels
Probability-based Internet panels use probability sampling methods to select and recruit participants to
a panel. In some cases, the panel may be restricted to Internet users only (i.e., the population is defined
as Internet users); in other cases, Internet access is provided to panel members as needed, or panelists
are contacted via alternative modes (e.g., phone), to ensure broader coverage of the population. Panel
members are sent invitations to specific surveys at agreed-upon intervals. Individual surveys may be
sent to all panel members or a subset of eligible members. These panels, therefore, have two main
stages at which nonresponse may occur: the initial recruitment into the panel and the invitation to a
particular survey. In practice, there are a number of additional steps involved in recruiting and
maintaining online panels (see AAPOR, 2010a; Callegaro and DiSogra, 2008; and Couper et al., 2007).
Full details of the various metrics used for such panels are described by Callegaro and DiSogra (2008).
This document provides a brief overview of some key metrics.
The first stage in a pre-recruited probability-based panel is the initial recruitment interview. Historically
this was done by phone, but other modes of recruitment (specifically mail) have become more widely
used. It is important to understand the sampling frame(s) used for panel recruitment to calculate
response rates accurately. The response rate to this initial interview should be calculated normally for
the particular frame used, as described elsewhere in this document. A series of screening questions
may be asked to determine eligibility for the panel based on predetermined criteria such as age,
language, and Internet access or geographic area. Eligible persons are asked to consent to join the
panel.
An initial recruitment rate (RECR) can be computed as follows:
Recruitment rate (RECR) = IC / [IC + (R + NC + O) + e(UH + UO)]
Where IC is the number of initial consents to join the panel and the remaining terms are defined elsewhere in this document for the particular frame or frames used for recruitment. The initial recruitment rate should be computed
separately for each different sampling frame used for recruitment and each different recruitment effort.
Following the agreement to join the panel, potential panelists may be provided with equipment and
instructions to complete the surveys.
Many panels consider a panelist “enrolled” only after completing one or more initial profile surveys. Here, complete and partial interviews refer to the status of the profile survey(s), and the denominator for the profile rate includes anyone who agreed to be empaneled based on the recruitment effort. Thus,
a profile rate (PROR) can be computed as follows:
Profile rate (PROR) = (I + P) / [(I + P) + (R + NC + O)]
Using AAPOR RR5 (counting completes only) or RR6 (counting completes and partials), where all the terms in the expression are as defined elsewhere in this document.
Finally, a completion rate (COMR) can be computed for response to a particular survey invitation sent to
eligible panel members, again using AAPOR RR5 or RR6:
Completion rate (COMR) = (I + P) / [(I + P) + (R + NC + O)]
While the formula for the completion rate is the same as that for the profile rate (PROR) described
above, the denominator for the COMR is based on eligible panel members who have completed the
profile survey(s) and are currently active panelists at the time of sampling for the study.
The table of disposition codes described below may be used for this stage of the calculation, but it is
important to recognize that AAPOR standards require reporting a cumulative response rate (CUMRR)
when such sample frames are employed. Based on these three components, a cumulative response rate
can be computed as follows:
Cumulative response rate (CUMRR) = RECR × PROR × COMR
In practice, there may be several more steps involved. First, recruitment to such panels is often done on an ongoing basis, and the panel's composition changes over time. The initial recruitment rate may thus be a composite measure based on a number of different rates. Further, screening questions are often needed to determine eligibility for a particular survey (if the criteria cannot be determined from the profile questions), which necessitates a further step in the computation. Finally, accounting for panel attrition is essential when a longitudinal design is used to study responses across surveys or over time. These issues are discussed by Callegaro and DiSogra (2008) and McPhee et al. (2022).
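To make the arithmetic concrete, the following minimal sketch (in Python, with purely hypothetical counts and an assumed value of e; none of the numbers or variable names come from this report) strings the three component rates together into the cumulative response rate.

# Cumulative response rate for a probability-based online panel:
# CUMRR = RECR x PROR x COMR. All counts below are hypothetical.

def rate(numerator: float, denominator: float) -> float:
    # Guarded division used for every outcome rate in this sketch.
    return numerator / denominator if denominator else float("nan")

# Stage 1: recruitment (counts refer to the recruitment interview).
IC = 1200                       # sampled cases consenting to join the panel
I_r, P_r = 1500, 100            # complete and partial recruitment interviews
R_r, NC_r, O_r = 900, 700, 100  # refusals, non-contacts, other non-interviews
UH_r, UO_r = 400, 50            # unknown-eligibility cases
e = 0.80                        # assumed share of unknowns that are eligible
RECR = rate(IC, (I_r + P_r) + (R_r + NC_r + O_r) + e * (UH_r + UO_r))

# Stage 2: profile survey; denominator is everyone who consented to join.
PROR = rate(1000, IC)           # RR5-style: profile completes / consents

# Stage 3: a specific survey invitation to eligible, active panelists.
COMR = rate(600, 850)           # completes / invited active panelists

CUMRR = RECR * PROR * COMR
print(f"RECR={RECR:.3f} PROR={PROR:.3f} COMR={COMR:.3f} CUMRR={CUMRR:.3f}")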
Table of disposition codes
Tables 4.1, 4.2, and 4.3 provide eligible nonresponse, unknown eligibility, and ineligible codes
(respectively) that are applicable when using a sample selected from an online probability panel. Please
refer to the Introduction of this report for a discussion of general principles related to the identification
of (fully or partially) completed surveys, which apply regardless of frame. As in earlier sections, a single
asterisk identifies a new disposition code; a disposition that has been changed from the prior version of
the AAPOR Standard Definitions is indicated by two asterisks.
Typically, online probability panels use email and/or SMS messaging to contact selected panelists for a
given survey. However, panels may use other mechanisms, including phone or mail, to reach panelists
who prefer not to respond to survey invitations online. In general, probability panel frames function
similarly to list frames of named individuals described in section 1: List Samples. Therefore, users should
refer to Section 1 for a detailed explanation of the dispositions below.
Table 4.1. Valid Eligible, No Interview (non-response) Dispositions for Samples from Online Probability Panels

Description | Value | Notes & Examples
Eligible, Non-response | 2.0 | To be considered in this category, a case must first have been determined to be eligible. This may already be determined by panel profile variables.
Refusal and break-off | 2.10 |
Refusal | 2.11 |
Proxy refusal | 2.111 | This is not common for probability panels, but may occur if the panelist is the gateway to another household respondent (no screening or confirmed eligibility required).
Parent or Guardian refusal | 2.1111* | The parent or guardian of the panel respondent refuses to allow participation.
Respondent refusal | 2.112 |
Logged on to survey, did not complete any item | 2.1121 |
Read receipt confirmation, refusal | 2.1122 |
Other implicit respondent refusal | 2.113 |
Panel respondent set appointment but did not keep it (phone or in-person) | 2.1132* |
Opted out of communications (SMS) | 2.1133* |
Break off | 2.12 | The selected respondent began the interview, web survey, or questionnaire but opted to terminate it or returned it with too many missing items before completing enough of it to be considered a partial complete (see the Introduction of Standard Definitions v10 for guidance on classification of partial interviews).
Non-contact | 2.2 |
Respondent never available | 2.21** | Respondent unavailable during field period.
Phone answering device (phone) | 2.22 | No contact has been made with a human, but a phone answering device (e.g., voicemail or answering machine) is reached that includes a message confirming it is the number for the panel sample member. This code is only used if all sample members are eligible (i.e., no additional screening is necessary). Example: “You have reached John Smith. Please leave a message.”
Answering machine - no message left (phone) | 2.221 | No message left.
Answering machine - message left (phone) | 2.222 | The interviewer left a message alerting the respondent that he/she was sampled for a survey, that an interviewer will call back, or with instructions on how the respondent could call back.
Other non-contact | 2.23* |
Quota filled (in released replicate) | 2.231* |
Completed questionnaire, but not during field period | 2.27 |
Other non-interview | 2.3 |
Deceased respondent | 2.31 | Panel respondent is deceased. Must be able to determine that the respondent was eligible on the survey status date and died subsequently.
Physically or mentally unable/incompetent | 2.32 | The respondent's physical and/or mental status makes them unable to do an interview. This includes both permanent conditions (e.g., senility) and temporary conditions (e.g., pneumonia) that prevailed whenever attempts were made to conduct an interview. With a temporary condition, the respondent could be interviewed if re-contacted later in the field period.
Language barrier | 2.33 | This would be very uncommon for panel respondents.
Inadequate audio quality or literacy issues (phone interview) | 2.34 | Inadequate audio quality (no screener required or eligibility confirmed).
Location/Activity not allowing interview (phone interview) | 2.35 | Example: cell phone reached while the person is driving (no screening required or eligibility confirmed).
Someone other than respondent completes questionnaire or interview | 2.36 | Someone other than the respondent completes the questionnaire or interview and is later determined ineligible (eligibility status of the actual respondent must be known).
Someone other than respondent completes questionnaire or interview - Full questionnaire completed | 2.361 | Someone other than the respondent completes the questionnaire or interview and is later determined ineligible (eligibility status of the actual respondent must be known).
Someone other than respondent completes questionnaire or interview - Partial questionnaire completed | 2.362 | Someone other than the respondent completes the questionnaire or interview; partial questionnaire completed.
Wrong Number (phone interview) | 2.37 | Eligibility of the panelist confirmed, but the number dialed is incorrect for the person.
Miscellaneous non-interview | 2.9 | Examples: vows of silence, lost records, faked cases invalidated later on.
Table 4.2. Valid Unknown Eligibility, Non-Interview Dispositions for Samples from Online Probability Panels

Description | Value | Notes & Examples
Unknown Eligibility, Non-Interview | 3.0 |
Unknown if eligible respondent | 3.2 | No screener completed; unknown if the sampled person is an eligible respondent. Includes refusals where screening is required and undeliverable or unanswered invitations where screening is required.
Unreachable/screener not completed | 3.21 |
USPS: Refused by addressee (mailed survey) | 3.211** | USPS category: Refused by Addressee [REF] (screener required).
USPS: Returned to sender (mailed survey) | 3.212** | USPS category: Returned to Sender due to Various USPS Violations by Addressee (screener required).
USPS: Cannot be delivered (mailed survey) | 3.213** | USPS category: Cannot be Delivered [IA] (screener required).
USPS: Returned to sender with forwarding information (mailed survey) | 3.214** | NOTE: This can only be a final disposition for preidentified sample if a screener is required and the invitation is not forwarded.
Unreachable by phone (phone) | 3.215** | Screener required for eligibility determination.
Always busy (phone) | 3.2151** | Always busy (screener required).
Ring no answer (phone) | 3.2152** | No answer (screener required).
Phone answering device (phone) | 3.2153** | Phone answering device (unknown if named respondent and screener required). The phone number connected to an answering device (e.g., voicemail or answering machine), but the automated message did not conclusively indicate whether the number is for the specific panelist.
Telecommunication/Technological barriers (phone) | 3.2154** | Telecommunication or technological barriers, e.g., call-blocking (unknown if panel respondent and screener required). Call-screening, call-blocking, or other telecommunication technologies that create barriers to getting through to a number.
Technical phone problems (phone) | 3.2155** | Technical phone problems (unknown if panel respondent and screener required). Examples: phone circuit overloads, bad phone lines, phone company equipment switching problems, phone out of range (AAPOR Cell Phone Task Force, 2008 & 2010b; Callegaro et al., 2007).
Ambiguous operator’s message (phone) | 3.2156** | Ambiguous operator’s message (unknown if panel respondent and screener required). An ambiguous operator’s message does not make clear whether the number is associated with a household. This problem is more common with cell phone numbers since there are both a wide variety of company-specific codes used and these codes are often unclear (AAPOR Cell Phone Task Force, 2010b).
Non-working/disconnected number (phone) | 3.216* | Includes fax/data line (unknown if panel respondent and screener required).
Interviewer unable to reach housing unit/address (in-person) | 3.217** | Includes situations where it is unsafe for an interviewer to attempt to reach a housing unit (screener required).
Interviewer unable to locate housing unit/address (in-person) | 3.218** | Interviewer unable to locate housing unit/address (screener required).
Invitation returned undelivered (e-mail or SMS) | 3.219* | Email or SMS invitation returned undelivered (screener required).
Message blocked by carrier (SMS) | 3.2191* |
Message failed to send (SMS) | 3.2192* |
Device unreachable (SMS) | 3.2193* |
Device not supported (SMS) | 3.2194* |
Device powered off (SMS) | 3.2195* |
Unknown error (SMS) | 3.2196* |
Nothing ever returned | 3.22** |
Not attempted or worked | 3.23 | Not attempted or worked: no invitation sent, questionnaire never mailed, no contact attempt made, or address not visited. Note: all cases in unassigned replicates (i.e., replicates in which no contact has been attempted for any case in the replicate) should be considered ineligible (Code 4), but once interviewers attempt to contact any number in a given replicate, all cases in the replicate have to be individually accounted for.
Other unknown eligibility | 3.9 | This should only be used for highly unusual cases in which the eligibility of the respondent/household/phone number is undetermined and which do not clearly fit into one of the above designations. Example: high levels of item nonresponse in the screening interview prevent eligibility determination.
Returned from an unsampled email address (e-mail) | 3.91 |
Table 4.3. Valid Not Eligible Dispositions for Samples from Online Probability Panels

Description | Value | Notes & Examples
Not eligible | 4.0 |
Selected Respondent Screened Out of Sample | 4.1 | The panelist is reached but is determined to be ineligible based on screening criteria.
Deceased | 4.11* | Panelist is deceased prior to survey start (status day).
Quota filled | 4.8 | Ineligible in current replicate because quota filled in unreleased sample replicate.
Duplicate listing | 4.81 |
Other ineligible | 4.9 |

*New disposition code
**Updated disposition code
Online Non-Probability Samples
For non-probability samples, response rate calculations make little sense, given that selection
probabilities are unknowable for these samples, leading to larger inferential concerns. Further, for
many of these surveys, the denominator is unknown, making the calculation of response rates
impossible (cf. Callegaro and DiSogra, 2008).
Like probability-based panels, non-probability online samples are recruited through multiple steps and
often multiple methodologies. A key difference is that the first step, recruitment into the panel, is not
based on a known sampling frame with known probabilities of selection. The population thus cannot be
clearly defined. Various recruitment methods are used to build such samples (see AAPOR Task Force,
2010). Some are recruited to be part of a constantly updated pool of potential respondents that an opt-
in panel vendor can select for specific studies. A variety of self-selected online surveys are also
employed. These include river sampling[18] and using social media (e.g., Facebook) to recruit survey participants.
Although the number of people who join a panel is usually known, the number of people who were
exposed to the invitation, and the number of invitations to which they were exposed, are not known.
The population of interest is not well defined. For some of these nonprobability samples, the number of
panel members invited to a particular survey and the number who respond to the invitation and
complete the survey can be known. This latter rate should not be referred to as a “response rate”
because, unlike for probability-based samples, a high response rate does not necessarily mean the risk
of bias is reduced. Following the AAPOR Task Force (2010) and ISO 26362 Access Panels in Market,
Opinion, and Social Research (2009), some practitioners refer to this rate as a “participation rate,” which
is a term specific to non-probability samples and defined as the number of respondents who have
provided a usable response divided by the total number of initial personal invitations requesting
participation.[19] We caution that this rate may be driven by factors unrelated to the quality of the final data for a study using such samples and should be interpreted cautiously.
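As a purely illustrative sketch (hypothetical counts, not drawn from any actual panel), the participation rate defined above is simply the ratio of usable responses to initial personal invitations:

usable_responses = 1450        # respondents providing a usable response
initial_invitations = 12000    # initial personal invitations requesting participation
participation_rate = usable_responses / initial_invitations
print(f"Participation rate: {participation_rate:.1%}")

As noted above, this figure describes panel or vendor efficiency rather than the risk of nonresponse bias.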
Although a participation rate can be calculated for the completion of a particular survey by previously-
recruited panel members or those recruited through some river-sampling mechanism, using such a rate
as an indicator of possible nonresponse error makes little sense; however, the participation rate may
serve as a valuable indicator of panel or sample vendor efficiency. This rate is influenced by the
particular panel management strategies employed. For example, if “inactive” panel members (however
defined) are removed from the panel, the participation rate is likely to be higher. The participation rate
indicates how much effort is required to recruit panel members to a particular survey and how many
need to be invited to get a targeted number of completed surveys. Given varying practices in panel
management, the participation rate may have little utility as a comparative measure across panels. We
thus caution strongly against the computation and presentation of any metrics discussed in this document for such sources. Such “samples” should be clearly identified as non-probability or self-selected samples.

[18] River sampling recruits [from the internet] using banner ads, pop-up ads and similar instant “capture” promotions. Individuals who volunteer to participate are screened for their reported demographic characteristics and then “randomly assigned” to the appropriate survey. Hence the metaphor of being captured from the flowing river of online persons (DiSogra, 2008).

[19] Of note, Callegaro and DiSogra (2008) refer to this as a “completion rate.”
Section 5. Conclusion
As Tom Smith stated in the ninth edition, good survey research practice rests on a foundation of solid
methodology. Our goal with Standard Definitions has been to make it easier for researchers to follow
guidelines in reporting survey outcomes. Following the same outcome codes and appropriate rate
calculations makes our work more comparable, repeatable, and sustainable. Standard Definitions also
help researchers comply with the AAPOR Transparency Initiative and related external reporting.
To further quote Tom Smith from the ninth edition,
AAPOR urges all survey researchers to adopt these final disposition codes and related outcome
rates and to make them available as part of the documentation accompanying any report of
survey results. The AAPOR Code of Minimal Disclosure requires researchers to provide “the
response rates computed according to AAPOR Standard Definitions. At a minimum, a summary
of disposition of sample cases should be provided so that response rates could be computed.”
AAPOR believes researchers who use the survey designs covered in this booklet should include in
reports about their surveys the outcome rates outlined above when such rates can be calculated.
Those kinds of surveys include those using random or full-probability samples such as RDD phone
surveys. For surveys with sample designs that do not use such samples (e.g., block quota
samples), appropriate outcome rates using the number of attempted cases, the number of
completed cases and the number of refusals should be reported. The AAPOR Council has
stressed the importance for survey researchers to disclose all their methods, including outcome
rates. Council ruled that all disclosure elements, not just selected ones, are important and
should be reported. Researchers will meet the code’s requirements if they report final disposition
codes as they are outlined in this book. The Council also cautioned that there is no single number
or measure that reflects total survey quality, and all elements should be used to evaluate survey
research (AAPOR 2016).
We have restructured this version to provide guidance in line with current survey practice while maintaining consistency with our core dispositions and calculations. We hope that researchers find this guide helpful in their work, knowing it will be revised regularly to integrate new methodological norms.
Section 6. References
American Association for Public Opinion Research, Transitions from Phone Surveys to Self-Administered
and Mixed-Mode Surveys. Deerfield, IL: American Association for Public Opinion Research, 2019.
The American Association for Public Opinion Research. 2016. Standard Definitions: Final Dispositions of
Case Codes and Outcome Rates for Surveys. 9th edition. AAPOR.
American Association for Public Opinion Research, AAPOR Report on Online Panels. Deerfield, IL:
American Association for Public Opinion Research, 2010a.
American Association for Public Opinion Research, AAPOR Cell Phone Task Force, New Guidelines and Considerations for Survey Researchers When Planning and Conducting RDD and Other Phone Surveys in the U.S. with Respondents Reached via Cell Phone Numbers. 2010b.
American Association for Public Opinion Research, Best Practices for Survey and Public Opinion Research
and Survey Practices that AAPOR Condemns. May, 1997.
American Association for Public Opinion Research, Guidelines and Considerations for Survey
Researchers When Planning and Conducting RDD and Other Phone Surveys in the U.S. with Respondents
Reached via Cell Phone Numbers. 2008.
Brick, J. Michael et al., “Cell Phone Survey Feasibility in the U.S.: Sampling and Calling Cell Numbers
Versus Landline Numbers,” Public Opinion Quarterly, 71 (2007), 23-39.
Brick, J. Michael et al., “Nonresponse Bias in a Dual Frame Sample of Cell and Landline Numbers,” Public
Opinion Quarterly, 70 (2006), 780-793.
Brick, J. Michael; Edwards, W. Sherman; and Lee, Sunghee, “Sampling Phone Numbers and Adults,
Interview Length, and Weighting in the California Health Interview Survey Cell Phone Pilot Study,” Public
Opinion Quarterly, 71 (2007), 793-813.
Brick, J. M., & Williams, D. (2013). Explaining Rising Nonresponse Rates in Cross-Sectional Surveys. The
ANNALS of the American Academy of Political and Social Science, 645(1), 36-59.
https://doi.org/10.1177/0002716212456834
Callegaro, M., and DiSogra, C. (2008), “Computing Response Metrics for Online Panels.” Public Opinion
Quarterly, 72 (5): 1008-1032.
Callegaro, Mario et al., “Fitting Disposition Codes to Mobile Phone Surveys: Experiences from Studies in
Finland, Slovenia, and the USA,” Journal of the Royal Statistical Society, Series A, 170 (2007), 647-670.
Carley-Baxter, Lisa R.; Peytchev, Andy; and Black, Michele C., “Comparison of Cell Phone and Landline
Surveys: A Design Perspective,” 22 (2010), 3-15.
Chearo, David and Van Haitsman, Martha, “Standardized Attempt Codes for Unified Multi-Mode Case
Histories,” Survey Practice, (October, 2010), at http://surveypractice.org/2010/10/27/standardized-
codes-for-multi-mode/
Couper, M.P. (2000), "Web Surveys: A Review of Issues and Approaches." Public Opinion Quarterly, 64
(4), 464-494.
Couper, M.P., Kapteyn, A., Schonlau, M., and Winter, J. (2007), “Noncoverage and Nonresponse in an
Internet Survey.” Social Science Research, 36 (1): 131-148.
Currivan, Douglas B. and Roe, David J., “Using a Dual-Frame Sample Design to Increase the Efficiency of
Reaching Population Subgroups in a Phone Survey,” Paper presented to the American Association for
Public Opinion Research, Phoenix, May, 2004.
Curtin, Richard, Stanley Presser, and Eleanor Singer. 2005. “Changes in Phone Survey Nonresponse Over the Past Quarter Century.” Public Opinion Quarterly 69 (1), 87-98.
Davis, James A.; Smith, Tom W.; and Marsden, Peter V., General Social Surveys, 1972-2006: Cumulative
Codebook. Chicago: NORC, 2007.
de Leeuw, E. D. (2018). Mixed-Mode: Past, Present, and Future. Survey Research Methods, 12(2), 75-89.
https://doi.org/10.18148/srm/2018.v12i2.7402
de Leeuw, Edith and de Heer, Wim (2002). “Trends in Household Survey Nonresponse: A Longitudinal and International Comparison.” In R. M. Groves, D. A. Dillman, J. L. Eltinge, and R. J. A. Little (eds.), Survey Nonresponse. New York: Wiley, Chapter 3, pp. 41-54.
Dillman, D. A. (2017). The promise and challenge of pushing respondents to the Web in mixed-mode
surveys. Survey Methodology, 43(1), 3-31.
DiSogra, C. (2008), River Sampling: A Good Catch for Researchers?
http://www.knowledgenetworks.com/accuracy/fall-winter2008/disogra.html
DiSogra, Charles, and Mario Callegaro. 2016. "Metrics and Design Tool for Building and Evaluating
Probability-Based Online Panels." Social Science Computer Review 34 (1): 26-40.
Dutwin, David and Paul Lavrakas. 2016. “Trends in Phone Outcomes, 2008-2015.” Survey Practice, 9 (3).
https://doi.org/10.29115/SP-2016-0017
Elliott, Michael R.; Little, Roderick J.A.; and Lewitsky, Steve, "Subsampling Callbacks to Improve Survey
Efficiency," Journal of the American Statistical Association, 95 (2000), 730-738.
Ezzati-Rice, T. M., Frankel, M. R., Hoaglin, D. C., Loft, J. D., Coronado, V. C., & Wright, R. A. (2001). An
alternative measure of response rate in random-digit-dialing surveys that screen for eligible
subpopulations. Journal of Economic and Social Measurement, 26(2), 99-109.
Frankel, Lester R., “The Report of the CASRO Task Force on Response Rates,” in Improving Data Quality
in a Sample Survey, edited by Frederick Wiseman. Cambridge, MA: Marketing Science Institute, 1983.
Frey, James H., Survey research by phone (2nd ed.). Newbury Park, CA: Sage, 1989.
Groves, Robert M., Survey Errors and Survey Costs. New York: John Wiley & Sons, 1989.
Groves, Robert M. and Lyberg, Lars E., “An Overview of Nonresponse Issues in Phone Surveys,” in Phone
Survey Methodology, edited by Robert M. Groves, et al. New York: John Wiley & Sons, 1988.
Hansen, M.H. and Hurwitz, W.N., "The Problem of Non-response in Sample Surveys," Journal of the
American Statistical Association, 41 (1946), 517-529.
Hidiroglou, Michael A.; Drew, J. Douglas; and Gray, Gerald B., “A Framework for Measuring and
Reducing Nonresponse in Surveys,” Survey Methodology, 19 (June, 1993), 81-94.
International Organization for Standardization (2009), ISO 26362:2009, Access Panels in Market, Opinion, and Social Research: Vocabulary and Service Requirements. Geneva: ISO.
Kviz, Frederick J., “Toward a Standard Definition of Response Rate,” Public Opinion Quarterly, 41
(Summer, 1977), 265-267.
Lavrakas, Paul J., Phone survey methods: sampling, selection, and supervision (2nd ed.). Newbury Park,
CA: Sage, 1993.
Lavrakas, Paul J., ed., “Special Issue: Cell Phone Numbers and Phone Surveying in the U.S.,” Public
Opinion Quarterly, 71 (2007), 703-854.
Lessler, Judith and Kalsbeek, William D., Nonsampling Error in Surveys. New York: John Wiley & Sons,
1992.
Link, Michael W. et al., “Reaching the U.S. Cell Phone Generation: Comparison of Cell Phone Survey
Results with an Ongoing Landline Phone Survey,” Public Opinion Quarterly, 71 (2007), 814-839.
Madow, William G.; Nisselson, Harold; and Olkin, Ingram, eds., Incomplete Data in Sample Surveys, Vol.
I, Report and Case Studies. New York: Academic Press, 1983.
Massey, James T., “Estimating the Response Rate in a Phone Survey with Screening,” 1995 Proceedings
of the Section on Survey Research Methods. Vol. 2. Alexandria, VA: American Statistical Association,
1995.
McCarty, Christopher, "Differences in Response Rates Using Most Recent Versus Final Dispositions in
Phone Surveys," Public Opinion Quarterly, 67 (2003), 396-406.
McPhee, C., Barlas, F., Brigham, N., Darling, J., Dutwin, D., Jackson, C., Jackson, M., Kirzinger, A., Little,
R., Lorenz, E., Marlar, J., Mercer, A., Scanlon, P.J., Weiss, S., Wronski, L. (2022). “Data Quality Metrics for
Online Samples: Considerations for Study Design and Analysis.” American Association For Public Opinion
Research Task Force Report.
Montgomery, Robert, Michael Dennis, and Nada Ganesh. 2016. “Response Rate Calculation
Methodology for Recruitment of a Two-Phase Probability-Based Panel: The Case of AmeriSpeak.”
Rawlings, Steve W., “Household and Family Characteristics: March, 1993,” Current Population Reports,
P20-477. Washington, DC: Bureau of the Census, 1994.
Respondent Cooperation and Industry Image Survey. Port Jefferson, NY: Council for Marketing and
Opinion Research, 1996.
Rookey, B.D., Hanway, S., and Dillman, D.A. (2008), “Does a Probability-Based Household Panel Benefit
from Assignment to Postal Response as an Alternative to Internet-Only?” Public Opinion Quarterly, 72
(5): 962-984.
Scherpenzeel, A.C. and Das, M. (2010), “True Longitudinal and Probability-Based Internet Panels:
Evidence from the Netherlands.” In M. Das, P. Ester, and L. Kaczmirek (eds.), Social Research and the
Internet. New York: Taylor and Francis, chapter 4.
Shapiro, Gary; Battaglia, Michael P.; Camburn, Donald P.; Massey, James T.; and Tompkins, Linda I.,
“Calling Local Phone Company Business Offices to Determine the Residential Status of a Wide Class of
Unresolved Phone Numbers in a Random-Digit-Dialing Sample,” 1995 Proceedings of the Section on
Survey Research Methods. Vol. 2. Alexandria, VA: American Statistical Association, 1995.
Smith, Tom W. (2003), “An Experimental Comparison of Knowledge Networks and the GSS.”
International Journal of Public Opinion Research, 15 (2): 167-179.
Smith, Tom W., "A Revised Review of Methods to Estimate the Status of Cases with Unknown Eligibility,"
Report of the Standard Definitions Committee for the American Association for Public Opinion Research,
September, 2009.
Tortora, R. (2009), “Attrition in Consumer Panels.” In P. Lynn (ed.), Methodology of Longitudinal
Surveys. New York: Wiley, pp. 235-249.
Tucker, Clyde; Brick, J. Michael; and Meekins, Brian, “Household Phone Service and Usage Patterns in
the United States in 2004: Implications for Phone Samples,” Public Opinion Quarterly, 71 (2007), 3-22.
United States Postal Service. (2000). “USPS Endorsements for Mail Undelivered as Addressed; (Exhibit 4-
1),” in Domestic Mail Manual Issue 55, p. F-3. Washington DC: GPO.
U.S. Bureau of the Census, The Current Population Survey: Design and Methodology. Technical Paper
No. 40. Washington, DC: GPO, 1978.
U.S. Bureau of the Census, 1990 Census of Population and Housing Guide. 1990 CPHR-1A&B.
Washington, DC: GPO, 1993.
Wiseman, Frederick and McDonald, Philip, The Nonresponse Problem in Consumer Phone Surveys.
Report No. 78-116. Cambridge, MA: Marketing Science Institute, 1978.
Wiseman, Frederick and McDonald, Philip, Towards the Development of Industry Standards of Response
and Nonresponse Rates. Report 80-101. Cambridge, MA: Marketing Science Institute, 1980.
Section 7. Calculating Outcome Rates from Final
Disposition Distributions
In calculating and reporting outcome rates according to the rules and formulas below, researchers must
precisely define the rates used. For example, a statement that “the response rate is X” is unacceptable.
One must report exactly which rate was used, such as “Response Rate 2 was X.” In addition, a table
showing the final disposition codes for all cases should be prepared for the report and made available
upon request.
As defined by CASRO (Frankel, 1983) and other sources (Groves, 1989; Hidiroglou, et al., 1993; Kviz,
1977; Lessler and Kalsbeek, 1992; Massey, 1995), the response rate is the number of complete
interviews with reporting units divided by the number of eligible reporting units in the sample. Using
the final disposition codes described above, several response rates are described below.
RR = Response rate
COOP = Cooperation rate
REF = Refusal rate
CON = Contact rate
I = Complete interview (1.1)
P = Partial interview (1.2)
R = Refusal and break-off (2.10)
NC = Non-contact (2.20)
O = Other (2.30, 2.90)
UH = Unknown if household/occupied HU (3.10)
UR = Unknown if sampled unit is eligible/housing unit contains an eligible respondent (3.20)
UO = Unknown, other (3.90)
e = Estimated proportion of cases of unknown eligibility that are eligible
Response Rates

RR1 = I / [(I + P) + (R + NC + O) + (UH + UO)]

Response Rate 1 (RR1), or the minimum response rate, is the number of complete interviews divided by the number of interviews (complete plus partial) plus the number of non-interviews (refusal and break-off plus non-contacts plus others) plus all cases of unknown eligibility (unknown if housing unit, plus unknown, other). RR1 is often calculated as a “lower bound” response rate but is not as typically reported as RR3, described below.

RR2 = (I + P) / [(I + P) + (R + NC + O) + (UH + UO)]

Response Rate 2 (RR2) counts partial interviews as respondents.

RR3 = I / [(I + P) + (R + NC + O) + e(UH + UO)]

Response Rate 3 (RR3) estimates what proportion of cases of unknown eligibility is eligible and can be considered the most commonly reported AAPOR response rate. In estimating e, one must be guided by the best available scientific information on what share eligible cases make up among the unknown cases. One must not select a proportion in order to boost the response rate.[20] The basis for the estimate must be explicitly stated and detailed. It may consist of separate estimates (e.g., Estimate 1 and Estimate 2) for the sub-components of unknowns (3.10 and 3.20) and/or a range of estimators based on differing procedures. In each case, the basis of all estimates must be indicated.[21]

RR4 = (I + P) / [(I + P) + (R + NC + O) + e(UH + UO)]

Response Rate 4 (RR4) allocates cases of unknown eligibility as in RR3 but also includes partial interviews as respondents as in RR2.

RR5 = I / [(I + P) + (R + NC + O)]

RR6 = (I + P) / [(I + P) + (R + NC + O)]

Response Rate 5 (RR5) is either a special case of RR3 in that it assumes that e = 0 (i.e., there are no eligible cases among the cases of unknown eligibility) or the rare case in which there are no cases of unknown eligibility. Response Rate 6 (RR6) makes that same assumption and includes partial interviews as respondents. RR5 and RR6 are only appropriate when it is valid to assume that none of the unknown cases are eligible or when there are no unknown cases. RR6 represents the maximum response rate.
[20] For example, different values of e would be appropriate in a survey requiring screening for eligibility (e.g., sampling adults 18-29 years old). Two different e's might be used for confirmed households that refused to complete the screener (for which we need an estimate of the likelihood of one or more household members being 18-29) and units that were never contacted (for which we need an estimate of the proportion that are households and an estimate of those with someone aged 18-29).

[21] For a summary of the main methods for estimating e in surveys ((1) minimum and maximum allocation, (2) proportional allocation, (3) allocation based on disposition codes, (4) survival methods, (5) calculations of the number of phone households, (6) contacting phone business offices, (7) linking to other records, and (8) continued calling), see Smith, 2009 and the forthcoming second edition.
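To illustrate how the six response rates follow from the disposition counts, here is a minimal sketch in Python (hypothetical counts and an assumed value of e; the variable names simply mirror the symbols defined at the start of this section).

# Hypothetical final disposition counts; symbols mirror those defined above.
I, P = 900, 100            # complete (1.1) and partial (1.2) interviews
R, NC, O = 400, 300, 50    # refusal and break-off, non-contact, other non-interview
UH, UO = 250, 50           # unknown-eligibility cases
e = 0.6                    # assumed proportion of unknowns that are eligible

known = (I + P) + (R + NC + O)

RR1 = I / (known + (UH + UO))            # minimum response rate
RR2 = (I + P) / (known + (UH + UO))      # partials counted as respondents
RR3 = I / (known + e * (UH + UO))        # unknowns discounted by e
RR4 = (I + P) / (known + e * (UH + UO))
RR5 = I / known                          # assumes e = 0 or no unknown cases
RR6 = (I + P) / known                    # maximum response rate

for name, value in [("RR1", RR1), ("RR2", RR2), ("RR3", RR3),
                    ("RR4", RR4), ("RR5", RR5), ("RR6", RR6)]:
    print(f"{name} = {value:.3f}")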
Cooperation Rates

A cooperation rate is the proportion of all cases interviewed of all eligible units ever contacted. There are both household-level and respondent-level cooperation rates. The rates here are household-level rates. They are based on contact with households, including respondents, rather than contacts with respondents only. Respondent-level cooperation rates could also be calculated using only contacts with and refusals from known respondents.

COOP1 = I / [(I + P) + R + O]

Cooperation Rate 1 (COOP1), or the minimum cooperation rate, is the number of complete interviews divided by the number of interviews (complete plus partial) plus the number of non-interviews that involve the identification of and contact with an eligible respondent (refusal and break-off plus other).

COOP2 = (I + P) / [(I + P) + R + O]

Cooperation Rate 2 (COOP2) counts partial interviews as respondents.

COOP3 = I / [(I + P) + R]

COOP4 = (I + P) / [(I + P) + R]

Cooperation Rate 3 (COOP3) defines those unable to do an interview as incapable of cooperating and excludes them from the base. Cooperation Rate 4 (COOP4) does the same as Cooperation Rate 3 but includes partials as interviews.
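A corresponding sketch for the household-level cooperation rates, again with hypothetical counts:

I, P, R, O = 900, 100, 400, 50       # hypothetical disposition counts

COOP1 = I / ((I + P) + R + O)        # minimum cooperation rate
COOP2 = (I + P) / ((I + P) + R + O)  # partials counted as respondents
COOP3 = I / ((I + P) + R)            # "other" (unable to cooperate) dropped from the base
COOP4 = (I + P) / ((I + P) + R)
print(COOP1, COOP2, COOP3, COOP4)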
Refusal Rates

When considering all potentially eligible cases, a refusal rate is the proportion of cases in which a housing unit or respondent refuses to do an interview or breaks off an interview.

REF1 = R / [(I + P) + (R + NC + O) + (UH + UO)]

Refusal Rate 1 (REF1) is the number of refusals divided by the interviews (complete and partial) plus the non-respondents (refusals, non-contacts, and others) plus the cases of unknown eligibility.

REF2 = R / [(I + P) + (R + NC + O) + e(UH + UO)]

Refusal Rate 2 (REF2) includes estimated eligible cases among the unknown cases, similar to Response Rate 3 (RR3) and Response Rate 4 (RR4) above.

REF3 = R / [(I + P) + (R + NC + O)]

Refusal Rate 3 (REF3) is analogous to Response Rate 5 (RR5) and Response Rate 6 (RR6) above. As in those cases, the actual situation must fully justify eliminating the unknowns from the equation. Non-contact and other rates can be calculated in a manner similar to refusal rates. Refusal, non-contact, and other rates will sum to equal the non-response rate.
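A matching sketch for the three refusal rates (hypothetical counts; e is an assumed value):

I, P, R, NC, O = 900, 100, 400, 300, 50
UH, UO, e = 250, 50, 0.6

REF1 = R / ((I + P) + (R + NC + O) + (UH + UO))      # all unknowns kept in the base
REF2 = R / ((I + P) + (R + NC + O) + e * (UH + UO))  # unknowns discounted by e
REF3 = R / ((I + P) + (R + NC + O))                  # unknowns excluded
print(REF1, REF2, REF3)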
Contact Rates

A contact rate measures the proportion of all cases in which some responsible member of the housing unit was reached by the survey. The rates here are household-level rates. They are based on contact with households, including respondents, rather than contacts with respondents only. Respondent-level contact rates could also be calculated using only contact with and refusals from known respondents.

CON1 = [(I + P) + R + O] / [(I + P) + (R + NC + O) + (UH + UO)]

Contact Rate 1 (CON1) assumes that all cases of indeterminate eligibility are eligible.

CON2 = [(I + P) + R + O] / [(I + P) + (R + NC + O) + e(UH + UO)]

Contact Rate 2 (CON2) includes in the base only the estimated eligible cases among the undetermined cases.

CON3 = [(I + P) + R + O] / [(I + P) + (R + NC + O)]

Contact Rate 3 (CON3) includes in the base only known eligible cases.
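A final sketch for the contact rates, using the same hypothetical counts; the numerator is the set of cases in which an eligible unit was ever contacted, (I + P) + R + O:

I, P, R, NC, O = 900, 100, 400, 300, 50
UH, UO, e = 250, 50, 0.6

contacted = (I + P) + R + O                                  # any contact with an eligible unit
CON1 = contacted / ((I + P) + (R + NC + O) + (UH + UO))      # all unknowns treated as eligible
CON2 = contacted / ((I + P) + (R + NC + O) + e * (UH + UO))  # unknowns discounted by e
CON3 = contacted / ((I + P) + (R + NC + O))                  # known eligibles only
print(CON1, CON2, CON3)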
Some Complex Designs
When surveys use complex designs, reporting responses and other outcome rates becomes more
complicated. Complex designs often require that the principles given in more than one of these sections
be combined to report rates. Here guidelines are presented for three general situations: 1) a design
selected in stages, 2) a design selected with unequal probabilities of selection, and 3) a two-phase
design that subsamples nonrespondents. The third design is relatively specific but is included because
subsampling nonrespondents and using more intensive methods to encourage them to respond is an
important special case.
Multistage Sample Designs
In multistage designs, the rates for the units that are sampled at the last stage should incorporate nonresponse at the earlier stages. For example, suppose a sample of households is selected in the first stage and a sample of persons is selected in the second stage, or schools are sampled at the first stage and students at the second stage. In those cases, response rate calculations should include first-stage nonresponse (household or school) and second-stage nonresponse (person or student).
Example: Consider a design that attempts to interview all persons aged 18-44 in each sample household. The rates for the first stage (i.e., household-level rates) are computed as noted above. The person-level rates are computed by estimating the number of 18-44 year-olds missed in nonrespondent households.
For example, if households are selected with equal probabilities, RR1-RR6 should be based on counts of persons 18-44 sampled in respondent and nonrespondent households. Typically the number of persons 18-44 in nonrespondent households is not fully known, so to compute

RR3 = I / [(I + P) + (R + NC + O) + e(UH + UO)]

some person counts must be estimated. Here, I, P, R, NC, and O are counts of persons 18-44 in the households where some persons responded, and these counts are usually known. On the other hand, the term e(UH + UO) is an estimate of the number of sampled persons 18-44 in sample households that were completely nonrespondent (e.g., there was a refusal before a listing of persons in the household was attained). (UH + UO) is the estimated total number of persons in those nonrespondent households, and e is the estimated proportion of persons in the nonrespondent households that are 18-44 and eligible for the sample.
A common practice is to estimate RR1-RR6 as a product of a screening rate and an interview rate. The
screening rate is the percentage of occupied housing units with 18-44 year-olds that provided a
household listing (i.e., determination of eligibility). The interview rate is the percentage of sampled
persons who provided an interview. Multiplying the rates implicitly assumes that the distribution of
persons 18-44 in the nonrespondent sample households is the same as in the respondent sample
households. It is recommended that some investigation of this assumption be conducted if this
computation is utilized.
However, the definitions of RR1 and RR2 necessitate a more conservative approach. All unknown cases at all stages should be maintained in the base, which naturally lowers the response rate compared to the multiplicative approach just described.
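The contrast between the two approaches can be sketched as follows (Python, with hypothetical person counts, an assumed number of persons per nonrespondent household, and an assumed e; none of these values come from this report):

# Person-level rates in a hypothetical two-stage design (households, then all
# persons aged 18-44 in responding households).
I, P, R, NC, O = 700, 50, 200, 100, 30   # persons 18-44 in households that gave a listing
nonresp_households = 300                 # households that never provided a listing
persons_per_hh = 2.1                     # assumed average persons per such household
e = 0.45                                 # assumed share of those persons who are aged 18-44

est_unknown_persons = nonresp_households * persons_per_hh
rr3_person = I / ((I + P) + (R + NC + O) + e * est_unknown_persons)

# Alternative: screening rate x interview rate. This implicitly assumes that
# nonrespondent households contain 18-44 year-olds at the same rate as
# respondent households; RR1 and RR2 instead keep all unknown cases in the base.
screening_rate = 0.78                    # hypothetical share of households providing a listing
interview_rate = I / ((I + P) + (R + NC + O))
rr_product = screening_rate * interview_rate
print(rr3_person, rr_product)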
Single-Stage Samples with Unequal Probabilities of Selection
In single-stage designs where the units are sampled with unequal probabilities, the rates should be
weighted by base weights that are the inverse of the selection probabilities or a number that is
proportional to the inverse. In other words, the counts of cases used in computing rates should be
replaced by the sums of the base weights of the completed cases. For example, the numerator in RR1,
the count of the number of completed interviews, should be replaced by the sum of the weights of
completed cases. When reporting this response rate, it should be noted that the response rate was
weighted. Unweighted response rates are useful as productivity measures between and across sampling
strata.
Example: Suppose a sample of persons is selected with unequal probabilities, where the selection weight for person i is w_i (the reciprocal of the probability of selection for that person in the survey). The numerator for RR1 should be the sum of the w_i for all the persons that completed the interview. The denominator contains the corresponding weighted counts. This response rate estimates the percentage of persons in the frame that would respond if invited.
For example, RR1 becomes

RR1_w = I_w / [(I_w + P_w) + (R_w + NC_w + O_w) + (UH_w + UO_w)]
where the subscript w reflects the use of weighting; that is, the I in the simple RR1 is the total number of interviews (i.e., I = ΣI_i, where I_i = 0 if the i-th sample case is not an interview and I_i = 1 if the i-th sample case is an interview). In RR1_w, I_w is the weighted sum of the I_i, or I_w = Σ w_i I_i. Similarly, P_w = Σ w_i P_i, and so on for R_w, NC_w, O_w, UH_w, UR_w, and UO_w.
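A minimal sketch of the weighted calculation (hypothetical base weights and dispositions; the indicator arrays are illustrative only):

import numpy as np

# Base weight w_i = 1 / selection probability; one disposition group per case.
w  = np.array([10.0, 10.0, 25.0, 25.0, 50.0, 50.0])
I_ = np.array([1, 0, 1, 0, 0, 0])   # complete interview
P_ = np.array([0, 1, 0, 0, 0, 0])   # partial interview
R_ = np.array([0, 0, 0, 1, 0, 0])   # refusal / break-off
NC = np.array([0, 0, 0, 0, 1, 0])   # non-contact
O_ = np.array([0, 0, 0, 0, 0, 0])   # other non-interview
U_ = np.array([0, 0, 0, 0, 0, 1])   # unknown eligibility (UH + UO)

def wsum(indicator):
    # Weighted count for one disposition group.
    return float(np.sum(w * indicator))

RR1_w = wsum(I_) / (wsum(I_) + wsum(P_) + wsum(R_) + wsum(NC) + wsum(O_) + wsum(U_))
print(f"Weighted RR1 = {RR1_w:.3f}")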
Two-Phase Sample Designs
In two-phase designs that subsample nonrespondents, the rates should be computed using weights that
account for the probability of the subsampling. Two-phase designs draw a probability sample of
nonrespondents after completing a first-phase effort. They may apply a different recruitment protocol
for those sampled into the second phase. Survey estimates are based on weighted counts of
respondents from the first and second phases combined. The general idea of such designs is that at
some point in the survey, the units that have not responded are subsampled, and the remaining efforts
are only used to get these units to respond.[22] In this case, the unweighted count is replaced by a
weighted count where the weight is the base weight for the units that are not subsampled (e.g., those
that complete the interview before subsampling is implemented) and is the product of the base weight
and the inverse of the subsampling rate for the units that are subsampled. Note that the weights for the
units eligible for subsampling but not subsampled are set equal to zero, which generally makes the
unweighted and weighted rates very different.
Example: Suppose a sample of households is selected, and the base weight for household i is w_i. The nonresponding households are subsampled so that each nonrespondent has a 50% chance of being subsampled. The weight for computing response rates is w_i for households that were not eligible for subsampling, 2w_i for the households that were subsampled, and 0 for the households that were eligible for subsampling but not included. The expressions for the response rates are essentially the same as those for single-stage samples with unequal selection probabilities. For example, RR1 becomes

RR1_w = I_w / [(I_w + P_w) + (R_w + NC_w + O_w) + (UH_w + UO_w)]
where the subscript w reflects the fact that the total I is a weighted total. The I in the simple RR1 is the total number of interviews (i.e., I = ΣI_i, where I_i = 0 if the i-th sample case is not an interview and I_i = 1 if the i-th sample case is an interview). In RR1_w, I_w is the weighted sum of the I_i, or I_w = Σ w_i I_i. Similarly, P_w = Σ w_i P_i, and so on for R_w, NC_w, O_w, UH_w, UR_w, and UO_w.
[22] For more discussion of these types of designs, see Hansen and Hurwitz, 1946 and Elliott, Little, and Lewitsky, 2000.
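A final sketch of the two-phase weighting (hypothetical data; the 50% subsampling rate mirrors the example above, and the weight rule is the one just described: base weight for cases not eligible for subsampling, base weight divided by the subsampling rate for subsampled cases, and zero for cases eligible for subsampling but not selected):

import numpy as np

base_w   = np.array([12.0, 12.0, 30.0, 30.0, 30.0, 30.0])
resp_ph1 = np.array([1, 1, 0, 0, 0, 0])   # responded before subsampling
subsamp  = np.array([0, 0, 1, 1, 0, 0])   # phase-1 nonrespondent selected for phase 2
resp_ph2 = np.array([0, 0, 1, 0, 0, 0])   # responded under the phase-2 protocol
subsampling_rate = 0.5

# Weight: base weight if not eligible for subsampling; base weight divided by
# the subsampling rate if subsampled; zero if eligible but not selected.
w = np.where(resp_ph1 == 1, base_w,
             np.where(subsamp == 1, base_w / subsampling_rate, 0.0))

interview = np.maximum(resp_ph1, resp_ph2)   # complete-interview indicator
# All cases in this toy example are known eligible, so the denominator is the
# weighted count of every case (cases not subsampled contribute zero weight).
RR1_w = float(np.sum(w * interview) / np.sum(w))
print(f"Weighted two-phase RR1 = {RR1_w:.3f}")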