The main reasons for not adopting these types of methods seem to be partly practical
limitations of time and resources, but also that users of cognitive interviewing to pretest
questionnaires are satisfied with the widely used, informal methods. Willis (2005), after
providing a thorough review of these different approaches, seems to find little
justification to pursue more formal analysis methods.
6. Issues of Verbal Report Analysis in Other Fields
Boren and Ramey (2000) have observed uses of verbal reports in usability testing that are
very similar in some ways to what had occurred in survey pretesting. Their paper
acknowledges that “usability professionals may contribute positively to a system’s
development,” but then charges that “usability practices are not systematic or rigorous
enough to merit the distinction of being called a method (a defined practice with clear
rules for correct performance) [their bold type].” The particular usability practice they
target is “thinking aloud.” They describe in detail many ways in which usability
applications of the think aloud procedure differ from Ericsson and Simon’s work, even
though that is the single most cited source of justification for the use of thinking aloud.
As for actual applications, Boren and Ramey observe in their study of analysis methods
that “Only rarely did practitioners say they analyzed verbalizations closely, and then
only for particularly problematic segments of action; in other words, there was no
protocol analysis.” Boren and Ramey then propose an alternative theoretical basis more
in tune with the way verbal reports are actually used in usability testing. It may be useful
to explore similar solutions for survey cognitive interview practices.
Beyond implications for the validity of verbal reports collected by methods at variance
with the Ericsson and Simon model, dispensing with protocol analysis has contributed to
the use of an array of diverse practices. “Lacking the guidance of unifying principles,
such changes [that they describe practitioners employing] vary greatly in degree and
kind, not only from Ericsson and Simon’s model, but from each other.” This aspect of the
usability experience, as described by Boren and Ramey, also bears a pronounced
resemblance to survey cognitive interviewing: a menu of variations and deviations from
Ericsson and Simon that, while having strong face validity, have become untethered from
any underlying theory, and produced a set of practices differing from one researcher or
project to another. Does this matter? If less formal procedures produce useful results at
lower costs, there may be no need for concern. However, if there is evidence that those
results are sometimes unreliable, it may be that some deviations from a theory-based
methodology, in this instance a methodology for data analysis, have a hidden cost.
Chi (1997), motivated by the lack of a guide for the “analysis of verbal data more
generally,” proposed her own guide for the analysis of verbal data when the general goal
is to understand the representation of knowledge used in cognitive performance. While she
notes that one way to develop such a guide would be, as we are planning, “...to survey the
literature, identify all those studies that have used some kind of qualitative analysis of
verbal data, then describe, analyze, and synthesize all the various methods,” she takes a
different path. Her “verbal analysis” method is intended to quantify the qualitative coding
of the contents of verbal reports. Chi’s guide is based on methods from her verbal
analysis research at that time. Her approach differs from VPA in that it is less
concerned with the processes of problem solving and more with the representation of the
knowledge that the problem solver has. This different perspective complements the VPA
method in a way compatible with some cognitive interview objectives. (In later work
(Chi 2006), she also employs laboratory methods that are similar to some
Section on Survey Research Methods – JSM 2010