Executive Summary
For many years, high school grade-point average (HSGPA) and standardized test scores (i.e.,
SAT/ACT) have served as the two most important and most relied-upon college admissions criteria.
However, there is constant debate about whether HSGPA and SAT/ACT are fair measures of
college preparation and valid predictors of college success, and whether SAT/ACT should
remain as a requirement for college admissions.
Supporters of standardized tests argue that the SAT/ACT evaluates college preparedness on an equal
footing despite inconsistent grading systems across the nation’s high schools. While
HSGPA compares a student to the rest of his or her school, SAT/ACT scores compare the student
to the rest of the country. Opponents of standardized tests argue that the SAT/ACT is biased toward
white, upper-middle-class students from high-performing schools. Focusing on HSGPA in
college admissions may help foster racial and socioeconomic diversity in the college student
population.
Previous research generally treats standardized test results as measures of college preparation.
The literature surrounding the predictive validity of standardized test scores on student success is
mixed. Many studies found a statistically significant association between SAT/ACT scores and
college outcomes including first-year college GPA, first-year retention, four-year graduation, and
graduation GPA. On the other hand, research also shows that demographic predictors weakened
the ability of SAT/ACT scores to be effective predictors of college success.
In light of previous research, this report presents the results of separate studies on the
relationships among HSGPA, standardized tests, and demographics, and on the explanatory power of
test scores for college success based on UC’s application and enrollment data. Findings from this
study are generally consistent with those from prior research.
Analysis of the relationships among standardized tests, high school GPA, UC’s Analytical
Writing Placement Exam (AWPE), and demographics shows:
Demographics are stronger predictors of SAT/ACT scores than of HSGPA. The
share of variance in SAT/ACT scores accounted for by demographic factors (parental
income, parental education, and ethnicity) increased from a low of 26 percent in the late
1990s to 43 percent in 2016. The share of variance in HSGPA accounted for by the same
demographic factors increased from 5 percent in the late 1990s to 11 percent in 2016.
About one-third of the increase for SAT/ACT scores can be explained by disparities
between CA high schools, with the remainder explained by changes in the composition of
applicants to UC. Campus-specific estimates do not support increasing racial bias on the
tests as an explanation for the increase.
Standardized test scores are positively correlated with high school GPA, but at a
moderate level. When controlling for socioeconomic factors (family income and parental
education), the correlations between SAT/ACT scores and high school GPA fall between
.30 and .51 across high schools and between .31 and .56 within high schools. The lowest
correlations, observed in 2018, are between SAT/ACT writing scores and high school GPA.
From 2005 to 2018, the correlations of SAT Reading/Math and of the ACT Composite with
high school GPA increased. Overall, the SAT and ACT writing tests are less correlated
with HSGPA than SAT Reading/Math or ACT Composite.
New SAT EBRW (Evidence-Based Reading and Writing) and ACT ELA (English
Language Arts) scores have a strong association with passing UC’s Analytical
Writing Placement Exam (AWPE), but SAT Essay scores have only a moderate
association with passing AWPE. The analysis of the relationship between the new SAT
and new ACT and UC’s Analytical Writing Placement Exam (AWPE) shows that high
scores on the SAT EBRW and ACT ELA are associated with passing the AWPE.
However, on the SAT Essay, even at the top score of 24, only 69 percent passed the
AWPE. Based on the results of data analysis, UCOPE has accepted the new ACT ELA
and SAT EBRW scores but not the new SAT Essay scores as methods of meeting UC’s
Entry Level Writing Requirement (ELWR).
Analyses of the relationship between standardized tests and college success show that
standardized tests add value to the prediction of college outcomes beyond HSGPA alone.
SAT/ACT scores and HSGPA are both moderate predictors of student college
GPAs, and weak to moderate predictors of student retention and graduation.
Between 2001 and 2015, SAT Reading/Math scores account for 13 to 21 percent of the
variance in freshman GPA, and 15 to 17 percent of the variance in graduation GPA. ACT
Composite scores generally account for 14 to 22 percent of the variance in freshman
GPA, and 17 to 19 percent of the variance in graduation GPA. In comparison, HSGPA
accounts for 13 to 21 percent of the variance in freshman GPA, and 15 to 18 percent in
graduation GPA.
Without controlling for student demographics, SAT/ACT scores are a stronger predictor
of freshman GPA than HSGPA, but have almost the same explanatory power for
graduation GPA, first-year retention, and graduation. After controlling for student
demographics, HSGPA and test scores have the same explanatory power for freshman
GPA for 2015, the latest year included in this study, but HSGPA is a stronger predictor of
first-year retention, graduation GPA, and four-year graduation.
Supplementing HSGPA with SAT/ACT scores increased the explanatory power of
pre-admission measures on college success metrics. Models that combined both
SAT/ACT and HSGPA account for an additional 5 to 11 percent of the total variance in
first-year GPA when compared to models that use HSGPA alone. Similarly,
combined HSGPA and SAT/ACT models account for an additional 3 to 11 percent of the
variance in UC graduation GPA when compared to models that use HSGPA alone.
Adding SAT/ACT writing to SAT/ACT scores does not increase the explanatory
power of pre-admission measures on college success. SAT and ACT writing scores
account for an additional 1 to 2 percent of the variance associated with most student
success metrics (i.e., freshman GPA and freshman retention), with the exception of
graduation GPA, for which the SAT and ACT writing scores account for as much variance as,
if not more than, SAT Reading/Math and ACT Composite.
HSGPA and SAT scores are associated with course performance. They are
independently important explanatory factors for first-year students’ course performance
relative to their peers in similar courses, with the latter growing in relative importance as
SAT predictive validity improves and HSGPA variation declines.
In summary, this report presents what literature and UC data show about the validity of
standardized test scores in terms of academic preparation and college success. Should UC continue
to use standardized test scores as part of its admissions evaluation criteria? To
answer this question, a series of simulations will have to be done to further examine what
admissions outcomes would look like based on new criteria to evaluate students’ academic
achievements. Institutional Research and Academic Planning (IRAP) at UCOP will continue to
work on analyses and simulations.
I. Introduction
In 1968, the University of California (UC) introduced a new policy that required all freshman
applicants to submit scores from the SAT I or the ACT and three College Board Achievement
Tests (later called SAT II: Subject Tests and now called SAT Subject Tests). The original
purposes of this requirement were to identify applicants’ strengths and weaknesses, to improve
UC’s ability to identify students that are likely to succeed at the university, and to adjust for
differences in high school grading practices.[1] Since then, standardized test scores have played an
important role in evaluating applicants’ academic preparation for college and predicting
students’ success at UC. One important use of test scores at UC is to create a sliding scale
eligibility index along with high school Grade Point Average (HSGPA). Beginning with the
Class of 1979, such an index has been used to select the top 12.5 percent of the California public
high school graduating class in order to meet the guidelines of the California Master Plan for
Higher Education.[2] In 2000, UC adjusted the eligibility index to include SAT II scores in the
index and to weigh them more heavily relative to the SAT I/ACT scores effective for the fall
2003 entering freshman class because the SAT I was a relatively weaker predictor of academic
performance in the freshman year based on UC’s 1996 study.[3] UC, however, has never stopped
examining the validity of standardized tests in evaluating applicants’ academic preparation and
predicting college success. Based on a series of studies and evaluations, UC has revised the
requirement for the SAT and the ACT several times to meet its admissions goals over the years.
One of the most important revisions since 2000 was eliminating the requirement for the SAT
Subject Tests (formerly called SAT II: Subject Tests)
effective for the 2012 entering freshman cohort.
The College Board and ACT have also changed their test
structures and content several times during the time period
from 2001 to 2018 covered in this study, which may have
had an impact on how universities decided to use these test
scores in admissions. In 2005, the College Board announced
a major change to the SAT effective in 2006 to reflect the
importance of clear and succinct writing, which is not only
considered a skill to be used in college courses, but also one
necessary for success in a wide range of careers. To meet
this goal, the College Board changed the SAT I to the SAT
Reasoning, which included three tests, Critical Reading,
Math, and Writing (formerly a subject test called SAT II
Writing), and changed the SAT II: Subject Tests to SAT
Subject Tests. These changes were mainly attributed to a
[1] Dorothy A. Perry, Michael T. Brown, & Barbara A. Sawrey. (2004). Rethinking the Use of Undergraduate
Admissions Tests: The Case of the University of California. In Rebecca Zwick (Ed.), Rethinking the SAT: The future
of standardized testing in university admissions. (pp. 103-124). New York and London: RoutledgeFalmer.
[2] University of California Office of the President, Institutional Research and Academic Planning. California Master
Plan for Higher Education. Retrieved on August 31, 2018 from https://www.ucop.edu/institutional-research-
academic-planning//content-analysis/academic-planning/california-master-plan.html.
[3] University of California Eligibility and Admissions Study Group. Final Report to the President. (2004, April).
https://senate.universityofcalifornia.edu/_files/committees/boars/documents/studygroup_final0404.pdf.
Versions of the SAT
UC Academic Years 2001-2005: SAT I (Verbal and Math); SAT II: Writing
UC Academic Years 2006-2016: SAT Reasoning (Critical Reading, Math, Writing)
UC Academic Years 2017-2018: Evidence-Based Reading and Writing, Math, Essay
series of debates over aptitude versus achievement tests in college admissions sparked by UC
President Richard C. Atkinson’s advocacy to eliminate the SAT I in UC admissions.[4] After about
a decade, the College Board made changes in the content, format, and scoring of the SAT
Reasoning again in 2016. The new SAT includes three parts: 1) Evidence-Based Reading and
Writing (EBRW), 2) Math, and 3) Essay with three subscores: Reading, Analysis and Writing.
The new test was first administered in the spring of 2016, and currently costs $64.50 with the
essay and $47.50 without it, though some school districts subsidize these fees.[5] A validity study
by the College Board based on a sample of participants from 15 four-year institutions shows that
the redesigned SAT is as predictive of college success as the prior SAT, that redesigned SAT
scores improve the ability to predict college performance beyond high school GPA alone, and
that there is a strong, positive relationship between redesigned SAT scores and grades in
matching college course domains.[6] However, the 2016 changes were not without
controversy. For example, the new exam’s wordy math questions may unfairly penalize students
because of their language burden.[7] Commenting on this, Jamal Abedi, a UC Davis professor who
specializes in educational assessments, stated, “The problem is going to mostly affect
English-language learners.”
ACT added an option of a 30-minute direct writing test to
their examination beginning in February 2005. An early study focusing on the value of using the
ACT Writing test in placing students in composition and related courses found that the writing test
score added value to the accuracy of course placement decisions over and above the ACT English
test.[8]
Performance on the writing test in
combination with performance on the English test was
incorporated into the Combined English/Writing (or
English with Writing) score. Ten years later, in
September 2015, ACT introduced changes to the design
of this writing test, with modifications to the writing task,
scoring rubric, and score reports. ACT states that many
elements of the writing task remain similar to those of the
previous task, both emphasizing argumentative writing skills that are essential for college and
career success, but the new writing task is more consistent with the writing skills emphasized in
[4] Zwick, R. (2004). College Admissions Testing in California: How Did the California SAT Debate Arise. In
Rebecca Zwick (Ed.), Rethinking the SAT: The future of standardized testing in university admissions. (pp. 101-
102). New York and London: RoutledgeFalmer.
[5] College Board. (2018). SAT Suite of Assessments: Test Fees. Retrieved on October 26, 2018 from
https://collegereadiness.collegeboard.org/sat/register/fees.
[6] Shaw, E., Marini, J., Beard, J., Shmueli, D., Young, L., and Ng, H. (2016). The Redesigned SAT Pilot Predictive
Validity Study: A First Look. Retrieved on August 31, 2018 from
https://collegereadiness.collegeboard.org/pdf/redesigned-sat-pilot-predictive-validity-study-first-look.pdf.
[7] Dudley, R. (2016). Despite warnings, College Board redesigned SAT in way that may hurt neediest students.
Reuters. Retrieved on August 31, 2018 from https://www.reuters.com/investigates/special-report/college-sat-
redesign/.
[8] ACT. (2009). ACT Writing Test Technical Report. Retrieved on August 31, 2018 from
https://www.act.org/content/dam/act/unsecured/documents/TechReport.pdf.
Versions of the ACT
UC Academic Years 2001-2005: ACT Composite
UC Academic Years 2006-2015: ACT Composite; ACT English with Writing
UC Academic Years 2016-2018: ACT Composite; ACT English Language Arts
the Common Core State Standards (CCSS) and other standards developed by ACT.[9] The new
writing test score is combined with scores on the English and Reading tests and reported as the
English Language Arts (ELA) score. The current test costs $67.00 with the writing section and
$50.50 without it, but some school districts subsidize these fees.[10]
Despite the efforts the College Board and ACT made over the years to improve the validity of
their standardized tests in predicting college success, the number of colleges using Test Optional
Policies (TOPs) in higher education admissions has dramatically increased in recent years.
According to a study by the National Association for College Admission Counseling (NACAC),
more than 1,000 institutions had officially deemphasized standardized tests in admissions by
spring 2018, when the report was released.[11] The report further emphasizes that this momentum
indicates that undergraduate admissions is moving away from heavy reliance on measures
increasingly deemed to provide a narrow assessment of human potential. The study also
indicates that almost all institutions included in the study increased representation of
underrepresented groups (URGs) among applicants and enrollees, and that there were no signs of
academic decline in terms of college GPAs and graduation rates.
In June 2018, the University of Chicago announced that it would no longer require domestic freshman
applicants to submit standardized test scores. There were several reasons for this
decision, which, according to an article published in the Chicago Tribune, “marks a dramatic
shift for the South Side university...”[12] The University’s leaders have long wanted to increase
diversity and hoped this decision would prevent students from assuming that anything less than
an outstanding test score automatically takes them out of the running. The Undergraduate
Admissions Dean at the University also said that there was a big industry of test preparation,
which served higher-income students very well.
Although few universities (e.g., the University of New England) have followed the University of
Chicago’s decision to stop requiring ACT and SAT scores for prospective undergraduates, many
universities have eliminated the requirement for the SAT and ACT Writing tests since spring 2018.
Harvard University announced it was dropping the requirement in March, followed by
Dartmouth in April, Yale and the University of San Diego in June, then Princeton, Stanford,
Brown, Duke, and the University of Michigan in July.[13] By November 2018, only 12 universities
[9] The ACT College and Career Readiness Standards, available online at
http://www.act.org/content/dam/act/unsecured/documents/ACT_RR2015-4.pdf and the 2011 NAEP Writing
Framework available online at
https://www.nagb.gov/content/nagb/assets/documents/publications/frameworks/writing/2011-writing-
framework.pdf.
[10] ACT. (2018). The ACT Test: Current ACT Fees and Services. Retrieved on October 26, 2018 from
http://www.act.org/content/act/en/products-and-services/the-act/registration/fees.html.
[11] Syverson, S., Franks, V., Hiss, W. (2018). Defining Access: How Test-Optional Works. Retrieved on August 31,
2018 from https://www.nacacnet.org/globalassets/documents/publications/research/defining-access-report-2018.pdf.
[12] Rhodes, D. (2018). University of Chicago to stop requiring ACT and SAT scores for prospective undergraduates.
Retrieved on August 31, 2018 from http://www.chicagotribune.com/news/local/breaking/ct-university-chicago-sat-
act-20180614-story.html.
[13] The Princeton Review. (2018). Who Requires SAT and ACT Essays (and why they shouldn’t). The Score (blog).
Retrieved on August 28, 2018 from https://princetonreview.blog/2018/03/18/it-is-time-to-eliminate-the-sat-and-act-
optional-essays/.
still required the SAT Essay score for admissions, among which nine were UC’s campuses,
according to a list published by CompassPrep.[14] The other three schools requiring the SAT Essay
were the United States Military Academy, Martin Luther College, and Soka University of
America.
What decision should UC make? An article published by Inside Higher Ed in July 2018 used an
eye-catching title to make the point: “For fate of SAT Writing Test, watch California.” Is this
true? Given that all other research universities dropped the SAT/ACT writing
requirement for admission and only three non-UC institutions continue to require the SAT or
ACT Writing, there is no doubt that if UC drops this requirement, the list of institutions requiring
the writing tests will be quite short. However, as Henry Sanchez, the former chair of the Board of
Admissions and Relations with Schools (BOARS), noted in an interview with Inside Higher Ed,
UC has a different situation from some of the colleges dropping the requirement recently. Nearly
200,000 high school seniors now apply to UC, with UCLA receiving more than 113,000
applications for fall 2018 freshman admission, the largest number among all colleges and
universities throughout the country. To make a wise and evidence-based decision about whether
to sustain or drop this requirement for admissions, UC would benefit from a better understanding
of what existing research tells us and what UC data show about SAT and ACT scores as measures
of college preparation and predictors of student success, nationwide and at UC specifically, and
about whether SAT and ACT scores can help UC admit students in keeping with its educational
philosophy.
In this report, Institutional Research and Academic Planning (IRAP) at the University of
California Office of the President (UCOP) reviews current research on the validity of the SAT
and ACT and the ability of the tests to predict college success, and uses application and
enrollment data from academic years 2001 to 2018 to address two sets of questions as follows:
1. How do SAT and ACT scores relate to other measures of applicants’ academic
preparation such as high school grades? Do socioeconomic factors unrelated to a
student’s academic potential affect SAT and ACT scores?
2. How well do SAT and ACT scores predict college success as measured by freshman
GPA, first-year retention, four-year graduation, and graduation GPA at UC? Do the SAT
and ACT writing tests add any explanatory power to predict college success at UC,
above and beyond what is predicted by HSGPA and SAT Reading and Math or ACT
composite? Does the ability of SAT and ACT scores to predict college success vary by
student characteristics?
This report summarizes the findings of current research on relationships among standardized
tests (the SAT and ACT), high school grades, and demographics, and on the explanatory power of
the tests for college student success, and also analyzes UC data to examine what they show about
applicants’ college preparation and student success in terms of the test scores. The report is
organized into five sections. Following this introduction (Section I), Section II describes the research
methodology. Section III summarizes findings of the existing research on the validity of the SAT and
ACT. Section IV examines relationships between SAT/ACT scores and high school academic
preparation, measured by high school weighted, capped GPA and UC’s Analytical Writing Placement
Exam (AWPE) scores, and discusses the relative validity of the SAT and ACT in predicting student
success, measured by freshman GPA, one-year retention, four-year graduation, and graduation GPA
at UC. The last section summarizes the major findings of this study.
[14] CompassPrep. (2018). ACT Writing and SAT Essay Requirements. Retrieved on May 13, 2019 from
https://www.compassprep.com/act-writing-and-sat-essay-requirements/.
II. Methodology
This study uses UC’s application and enrollment data to examine the relationship between the
SAT/ACT and other college preparation measures of applicants to UC and college performance
of students enrolled at UC. The analysis focuses on California resident applicants and enrollees.
The SAT and ACT scores are derived from official and self-reported scores. If official scores are
missing, self-reported scores are used.
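As a concrete illustration of this fallback rule, the short sketch below (in Python/pandas) coalesces official and self-reported scores; the column names are hypothetical assumptions, not UC’s actual field names.

```python
import pandas as pd

# Minimal sketch of the score-selection rule described above: prefer the official
# score and fall back to the self-reported score when the official one is missing.
# Column names are illustrative assumptions, not UC's actual data dictionary.
def coalesce_test_scores(applicants: pd.DataFrame) -> pd.DataFrame:
    out = applicants.copy()
    out["sat_total"] = out["sat_total_official"].fillna(out["sat_total_self_reported"])
    out["act_composite"] = out["act_composite_official"].fillna(out["act_composite_self_reported"])
    return out
```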
The current study also examines differential impacts of both SAT and ACT by campus, intended
discipline (both broad area and STEM/Non-STEM), family income, parental education level,
Pell status, and high school ranking based on the Academic Performance Index (API). These
analyses are conducted because previous research has indicated that the predictive validity of the
SAT/ACT varies by student demographics.
As noted earlier, UC accepts both SAT and
ACT scores. To understand how related each
of these two tests is to college preparation and
performance, this study analyzes the SAT and
ACT separately. In other words, two separate
sets of statistical models are built, one set
with the SAT predictors and the other with the
ACT predictors. Also, to examine what additional variance in student success at UC the SAT and
ACT Writing tests can explain beyond the SAT Math and Reading scores or the ACT Composite
score, separate models in both sections are estimated with (1) a single predictor equal to the sum of
SAT Math and Reading, or the ACT Composite; (2) the SAT or ACT Writing score alone; or (3) two
predictors, the sum of SAT Math and Reading together with SAT Writing, or the ACT Composite
together with ACT Writing. The College Board and ACT have
made significant changes to their test structure over time. Therefore, a longitudinal analysis was
conducted to better understand how the changes of test structures, content, and scoring may have
affected their validity.
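To make the nested-model setup concrete, the sketch below (illustrative only; the column names fygpa, sat_math_reading, and sat_writing are assumptions) fits the three SAT model variants described above and reports each model’s R², so the additional variance explained by the writing score can be read off directly. An analogous set of models would be fit with the ACT Composite and ACT Writing.

```python
import statsmodels.formula.api as smf

# Illustrative sketch of the three SAT model variants described in the text:
# (1) Math+Reading only, (2) Writing only, (3) both predictors together.
# Column names are assumptions for illustration, not the report's actual variables.
def sat_model_r2(df):
    specs = {
        "math_reading_only": "fygpa ~ sat_math_reading",
        "writing_only": "fygpa ~ sat_writing",
        "combined": "fygpa ~ sat_math_reading + sat_writing",
    }
    r2 = {name: smf.ols(formula, data=df).fit().rsquared for name, formula in specs.items()}
    # Additional variance explained by the writing score beyond Math+Reading:
    r2["writing_increment"] = r2["combined"] - r2["math_reading_only"]
    return r2
```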
To examine the relationships between standardized test scores and other measures of college
preparation, this study first provides a profile of the SAT and ACT takers among UC applicants
from 2001 to 2018 (see Appendix A for details), then examines the percent of variance
associated with SAT/ACT scores and high school GPA that can be attributed to factors students
have no control over (i.e., family income, parental education, and race/ethnicity). Finally, a series
of partial correlations is calculated to examine the relationship between SAT/ACT scores and
high school grades, controlling for selected demographic factors. Analyses throughout this report
use high school weighted and capped GPA[15] for high school grades.
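One common way to compute such a partial correlation is to residualize both the test score and HSGPA on the demographic controls and then correlate the residuals. The sketch below is a minimal illustration under assumed column names, not the report’s actual code.

```python
import numpy as np
import statsmodels.formula.api as smf

# Illustrative partial correlation between SAT score and weighted, capped HSGPA,
# controlling for family income and parental education (column names are assumptions).
def partial_correlation(df):
    cols = ["sat_total", "hsgpa_weighted_capped", "family_income", "parental_education"]
    d = df.dropna(subset=cols)
    controls = "np.log1p(family_income) + C(parental_education)"
    sat_resid = smf.ols(f"sat_total ~ {controls}", data=d).fit().resid
    gpa_resid = smf.ols(f"hsgpa_weighted_capped ~ {controls}", data=d).fit().resid
    return np.corrcoef(sat_resid, gpa_resid)[0, 1]
```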
The methodology used to analyze relationships between standardized test scores and college
performance at UC is similar to that used in a paper by Saul Geiser and Roger Studley published
in 2002, which examined predictive validity and differential impact of the SAT I and SAT II at
UC.[16] The findings in Geiser’s research will be described later in this report. However, since
their research included enrollment data from 1996 to 1999, this study does not replicate their
research; instead, it conducts similar analyses using enrollment data from 2001 to 2015 in order
to compare results with their findings. As many other studies (see Section III for a summary of
previous research findings) claim that freshman GPA (or first-year GPA) is by far the most
frequently used outcome variable measuring college success, this study employs the freshman
GPA as a criterion to measure college success, and also examines how well the SAT/ACT
predicts first-year retention, four-year graduation, and graduation GPA.
This report presents some results for all years from 2001 to 2018, and other results, especially
those concerning the relationships of the SAT and ACT to college preparation and performance, for
selected years: 2001, 2005, 2007, 2012, 2015, and 2018. These years were chosen
because 2001 was the first year UC adopted the Eligibility in the Local Context (ELC) program,
2005 was the last year students took the old SAT and ACT, 2007 was the second year students
submitted new SAT and ACT test scores, 2012 was the first year UC revised the admissions
eligibility structure so that the top nine percent of graduates from each high school qualified for
guaranteed admission, 2015 was the last year before the current version of the ACT, and 2018 was
the third year students submitted new ACT scores and the second year students submitted new
SAT scores.
III. What Previous Research Findings Show about the SAT and ACT
Although there is no single definition or measure of college preparation or college readiness,
researchers, national associations of education, and test developers usually suggest that
standardized test scores in conjunction with other measures available during high school can act
as proxies for performance in college courses and careers. Among the possible measures are
SAT/ACT test scores, high school degree completion, high school GPA, taking challenging high
school courses, and performance in high school courses.[17]
[15] The high school grade point average (HSGPA) used in this analysis is an honors-weighted, capped GPA, where
extra points (for up to eight semesters of honors courses, no more than four of them in the 10th grade) are added to
the GPA. UC’s admissions website provides more detailed information about how a weighted, capped GPA is calculated:
http://admission.universityofcalifornia.edu/freshman/requirements/gpa-requirement/index.html.
[16] Geiser, S., & Studley, R. (2002). UC and the SAT: Predictive validity and differential impact of the SAT I and
SAT II at the University of California. Educational Assessment, 8(1), 1-26.
[17] Maruyama, Geoffrey. (2012). Assessing College Readiness: Should We Be Satisfied With ACT or Other
Threshold Scores? Educational Researcher, 41(7), 252-261. Retrieved on August 28, 2018, from
http://journals.sagepub.com/doi/abs/10.3102/0013189X12455095.
Green and Winters developed a measure of public high school college readiness to reflect the
minimum standards of the least selective four-year college.[18] The standard includes earning a
regular high school diploma, completing a minimum set of course requirements, and being able
to read at a basic level (scoring at or above the basic level on the National Assessment of
Educational Progress [NAEP] reading assessment).
The National Center for Educational Statistics (NCES) measured college readiness based on a
student’s high school GPA, senior class rank, National Education Longitudinal Study (NELS)
1992 test scores, and SAT/ACT college entrance scores.[19]
The ACT is designed to measure academic skills and knowledge taught in high school and
required for first-year college courses.[20] Therefore, it is not surprising that previous studies (by
ACT and others) have found that factors such as high school coursework and high school GPA
affect ACT scores. ACT scores are influenced by achievement in core subject areas of high
school courses.[21] High school GPA accounts for 31 percent of the variance in ACT scores, more than
high school coursework, high school characteristics, non-cognitive factors, and demographic
characteristics. Non-cognitive factors such as parental involvement or perceptions of education
affect ACT scores via their impact on high school GPA.[22] Completing more AP courses is
associated with higher ACT scores.[23] Taking an AP math course and taking more AP courses are
associated with meeting benchmark scores on the ACT.[24] Taking and passing AP exams is
associated with higher ACT scores, but simply enrolling in AP courses is not.[25] The SAT is also
moderately correlated with HSGPA.[26]
[18] Green, J.P., & Winters, M.A. (2005). Public high school graduation and college-readiness rates: 1991-2002.
Manhattan Institute. Retrieved on August 10, 2018, from https://www.manhattan-institute.org/html/public-high-
school-graduation-and-college-readiness-rates-1991-2002-5911.html.
[19] Berkner, L., & Chavez, L. (1997). Access to postsecondary education for the 1992 high school graduates. (NCES
98-105). Washington, DC: U.S. Department of Education, National Center for Education Statistics.
[20] Ferguson, Richard. (2004). Achievement versus Aptitude in College Admissions. In Zwick, Rebecca. Rethinking
the SAT. New York and London: RoutledgeFalmer.
[21] Allen, J. (2015). Influence of Achievement in Core High School Courses on ACT Scores. Retrieved on August 28,
2018 from https://www.act.org/content/dam/act/unsecured/documents/2015-Tech-Brief-Influence-of-
Achievement.pdf.
[22] McNeish, D., Radunzel, J., Sanchez, E. (2015). A Multidimensional Perspective of College Readiness: Relating
Student and School Characteristics to Performance on the ACT®. Retrieved on August 28, 2018 from
http://www.act.org/content/dam/act/unsecured/documents/ACT_RR2015-6.pdf.
[23] Anderson, K. (2016). The Effectiveness of Advanced Placement Courses in Improving ACT® Scores for
High School Students. Retrieved on August 28, 2018 from
http://www.kylestevenanderson.com/uploads/7/0/5/8/70582975/kyle_anderson_edd_dissertation_pdf.pdf.
[24] Mo, L., Yang, F., Hu, X., Calaway, F., & Nickey, J. (2011). ACT test performance by Advanced Placement
students in Memphis City schools. The Journal of Educational Research, 104, 354-359. Retrieved on August 28,
2018 from https://www.tandfonline.com/doi/abs/10.1080/00220671.2010.486810.
[25] Warne, R., Larsen, R., Anderson, B., Odasso, A. (2015). The Impact of Participation in the Advanced Placement
Program on Students' College Admissions Test Scores, The Journal of Educational Research, 108, 400-416.
https://www.tandfonline.com/doi/full/10.1080/00220671.2014.917253.
[26] Shaw, E., Marini, J., Beard, J., Shmueli, D., Young, L., and Ng, H. (2016). The Redesigned SAT Pilot Predictive
Validity Study: A First Look. Retrieved on August 31, 2018 from
https://collegereadiness.collegeboard.org/pdf/redesigned-sat-pilot-predictive-validity-study-first-look.pdf.
Recently, researcher Saul Geiser published a paper examining the correlation
between demographic predictors (i.e., parental education, family income, and race/ethnicity) and
SAT/ACT scores in UC California resident freshman admissions from 1994-2011.[27] Geiser
found that demographic predictors accounted for more than a third of the variance associated
with SAT/ACT scores among UC applicants, and that race/ethnicity had become the strongest of
those three predictors. This report includes results of the analysis replicated based on slightly
different methods than Geiser used for his analysis.
A large volume of research has also examined relationships between standardized tests and
student success in college. However, findings are not always consistent across research. This
could be because different research might use different measures to evaluate college success,
utilize different methods to analyze data, examine different samples, and control different
demographic variables in modeling. Most of the research on standardized tests and academic
success focuses either on the SAT alone or combines SAT and ACT scores. Very few studies
look at the tests separately. Combining scores on the two tests is often done as SAT and ACT
scores are highly correlated. Based on UC’s application data, the Pearson correlation coefficient
between the sum of SAT Reading and Math scores and the ACT composite scores was 0.91 in
2001, 0.89 in 2006, and 0.93 in 2016.
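For reference, a correlation of this kind can be computed directly from the subset of applicants who submitted both tests in a given year; the snippet below is a sketch under assumed column names.

```python
import pandas as pd

# Illustrative only: Pearson correlation between the SAT Reading+Math sum and the
# ACT Composite among applicants who submitted both tests in a given year.
# Column names are assumptions, not UC's actual field names.
def sat_act_correlation(applicants: pd.DataFrame, year: int) -> float:
    d = applicants[applicants["application_year"] == year]
    d = d.dropna(subset=["sat_reading_math_sum", "act_composite"])
    return d["sat_reading_math_sum"].corr(d["act_composite"])  # Pearson by default
```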
Many studies that use first-year GPA (FYGPA) in college as a measure of college success have
found a strong correlation between test scores and FYGPA. In 1960, Fishman and Pasanella
reviewed 147 studies that included the SAT as a predictor of FYGPA, finding that the correlations
of SAT scores and high school record with FYGPA ranged from moderate to strong (0.34
to 0.82).[28] In 1989, Morgan of the College Board analyzed the predictive validity of the SAT on
first-year GPA and found that the correlation between SAT scores and FYGPA declined over the
years, but there was less change for private institutions, small institutions, and more selective
institutions.[29] More recently, Hezlett and colleagues performed a meta-analysis of approximately
3,000 validity studies, with more than one million students. They found that the SAT is a valid
predictor of FYGPA, with correlations ranging from moderate to strong (0.44 to 0.62).[30]
A report published in 2011 by the National Bureau of Economic Research found that the English
and math sections of the ACT were more strongly correlated with student success than were the
reading and science sections.[31] Specifically, the ACT math and English sections alone were found
to be more strongly related to first-year GPA than the science and reading sections alone. While
[27] Geiser, S. (2015). The Growing Correlation Between Race and the SAT Scores. UC Berkeley Center for Studies
in Higher Education. Retrieved from https://cshe.berkeley.edu/publications/growing-correlation-between-race-and-
sat-scores-new-findings-california-saul-geiser on August 3, 2018.
[28] Fishman, J.A., & Pasanella, A.K. (1960). College admission selection studies. Review of Educational Research,
30(4), 298-310.
[29] Morgan, R. (1989). Analysis of the Predictive Validity of the SAT and High School Grades from 1976 to 1985.
(College Board Research Report No. 1989-7). New York: The College Board.
[30] Hezlett, S.A., Kuncel, N., Vey, M.A., Ahart, A.M., Ones, D.S., Campbell, J.P., & Camara, W.J. (2001, April).
The effectiveness of the SAT in predicting success early and late in college: A comprehensive meta-analysis. Paper
presented at the annual meeting of the National Council on Measurement in Education, Seattle, WA.
[31] Bettinger, E.P., Evans, B.P., and Pope, D.G. (2011). Improving College Performance and Retention the Easy
Way: Unpacking the ACT Exam. National Bureau of Economic Research. Retrieved from
http://www.nber.org/papers/w17119.pdf on August 7, 2018.
not directly challenging these findings, ACT does contest their interpretation.
They argue that most students score similarly on all four tests, and that students whose scores
differ across sections may provide admissions officers with important additional information.[32]
As major fields have their own unique characteristics with different grading standards, there are
likely to be differences in the predictive validity of the SAT for cumulative college GPA by
academic program. Shaw and colleagues of the College Board showed that correlations between
SAT and cumulative GPA were of moderate strength for most majors (0.50-0.60). The strongest
correlations tended to be found in STEM (Science, Technology, Engineering, and Mathematics)
fields. The weakest correlations were found for undeclared students (r = 0.42).[33] Bridgeman,
Pollack, and Burton of the College Board further pointed out that high school GPA is a slightly
poorer predictor of cumulative college GPA for African American and Hispanic students. SAT
scores are moderately correlated with cumulative college GPA (.50) over four or more years. For
all minority group students, both male and female, SAT scores predict college success about as
well as they do for white students.[34]
Some research findings indicate that a strong relationship has existed between SAT scores and
college retention over the past 30 years. By analyzing a national dataset including individual-
level data on nearly 150,000 students from 106 colleges and universities, Mattern and Patterson
of the College Board found a strong correlation between SAT performance and retention to
the second year.[35] They found that 64 percent of students in the lowest SAT score band returned for
their second year, compared to 96 percent of students in the highest SAT score band.[36] The
relationship between SAT scores and retention to the third and fourth years was also examined,
and similar results were found.[37],[38]
Research has also shown that SAT scores predict graduation as well as HSGPA. Burton and
Ramist[39] of the College Board found that high school record had an uncorrected correlation of
.29 with college graduation, while each of the individual SAT sections had an uncorrected
correlation of .27, and the best combination of the two sections had an uncorrected correlation of
[32] Mathews, J. (2011). Report finds 2 of 4 tests in ACT Poor Predictors of College Success. The Washington Post.
Retrieved from https://www.washingtonpost.com/blogs/class-struggle/post/report-finds-2-of-4-tests-in-act-poor-
predictors-of-college-success/2011/07/19/gIQAOiPFOI_blog.html?utm_term=.f1505f48e84a on August 13, 2018.
[33] Shaw, E., Kobrin, J., Patterson, B., and Mattern, K. (2012). The Validity of the SAT for Predicting Cumulative
Grade Point Average by College Major (College Board Research Report No. 2012-6). New York: The College
Board.
[34] Bridgeman, B., McCamley-Jenkins, L., & Ervin, N. (2000). Predictions of freshman grade-point average from the
revised and recentered SAT I: Reasoning Test. (College Board Research Report No. 2000-1). New York: The
College Board.
[35] Mattern, K. and Patterson, B. (2009). Is Performance on the SAT Related to College Retention? (College Board
Research Report No. 2009-7). New York: The College Board.
[36] Mattern, K. and Patterson, B. (2009). Is Performance on the SAT Related to College Retention? (College Board
Research Report No. 2009-7). New York: The College Board.
[37] Mattern, K. D., & Patterson, B. F. (2011a). The relationship between SAT scores and retention to the third year:
2006 cohort (College Board Statistical Report No. 2011-2). New York: The College Board.
[38] Mattern, K. D., & Patterson, B. F. (2011b). The relationship between SAT scores and retention to the fourth year:
2006 cohort (College Board Statistical Report No. 2011-6). New York: The College Board.
[39] Burton, N., & Ramist, L. (2001). Predicting success in college: SAT studies of classes graduating since 1980
(College Board Research Report No. 2001-2). New York: The College Board.
.33. Mattern, Patterson, and Wyatt[40] of the College Board found that students with higher SAT
scores are more likely to graduate, and graduate in a timely manner (i.e., four years), even after
controlling for HSGPA, institutional characteristics, and institutional selectivity.
Geiser and Studley[41] examined the relationship between SAT scores and freshman GPA based
on the records of 77,893 students who entered UC between Fall 1996 and Fall 1999. The study
found that SAT II subject tests were better predictors of student success at UC when compared to
the SAT I aptitude tests. They suggested that using the achievement tests (SAT II subject tests)
for college admissions could be valuable to the prediction of college success with clarity in
admissions standards and close linkage to the high-school curriculum.
However, other research has questioned the predictive validity of SAT scores as it relates to
student success within the first year, and beyond. Using a dataset containing over 400,000
individual records from 176 colleges with various levels of selectivity, researchers sought to
better understand whether the SAT was predictive of first-year GPA.[42] Evidence suggests that the SAT,
while overall predictive of first-year GPA, was less predictive across subgroups. For example,
when comparing African American and White students, the SAT Critical Reading either over- or
under-predicted first-year GPA at 20 percent of colleges and universities.[43]
Furthermore, using UC data, Geiser[44] found that demographic predictors weakened the ability of
SAT/ACT scores to be effective predictors of five-year graduation rates, and in contrast, the
explanatory power of HSGPA was not found to be affected by demographic controls. Geiser and
Santelices found that HSGPA was consistently a stronger predictor of student success when
compared to SAT scores, and that its power increased as students progressed through their UC
career.[45] Similar to what was found by Geiser and Santelices, an article recently posted in Forbes
concluded that HSGPA matters more than SAT/ACT scores when it comes to student success.[46]
For example, even students with high SAT/ACT scores (above 1,100, with ACT concorded to
the SAT scale) have a low expected six-year graduation rate (51 percent), if their HSGPA is
[40] Mattern, K., Patterson, B. and Wyatt, J. (2013). How Useful Are Traditional Admission Measures in Predicting
Graduation Within Four Years? (College Board Research Report No. 2013-1). New York: The College Board.
[41] Geiser, S., & Studley, R. (2002). UC and the SAT: Predictive validity and differential impact of the SAT I and SAT
II at the University of California. Educational Assessment, 8(1), 1-26.
[42] Aguinis, H., Culpepper, S.A. & Pierce, C.A. (2016). Differential Prediction Generalization in College Admissions
Testing. Journal of Educational Psychology (7). 1045-1059.
[43] Jaschik, S. (2016). Faulty Predictions? Inside Higher Education. Retrieved from
https://www.insidehighered.com/news/2016/01/26/new-research-suggests-sat-under-or-overpredicts-first-year-
grades-hundreds-thousands on October 12, 2018.
[44] Geiser, S. (2016). A Proposal To Eliminate the SAT in Berkeley Admissions. UC Berkeley Center for Studies in
Higher Education. Retrieved from https://cshe.berkeley.edu/publications/proposal-eliminate-sat-berkeley-
admissions on August 31, 2018.
[45] Geiser, S. & Santelices, M.V. (2007). Validity of High School Grades in Predicting Student Success Beyond the
Freshman Year: High-School Record v. Standardized Tests as Indicators of Four-Year College Outcomes. UC
Berkeley Center for Studies in Higher Education. Retrieved from
https://cshe.berkeley.edu/sites/default/files/publications/rops.geiser._sat_6.13.07.pdf on August 31, 2018.
[46] Cooper, P. (2018). What Predicts College Completion? High School GPA Beats SAT Score. Forbes. Retrieved
from https://www.forbes.com/sites/prestoncooper2/2018/06/11/what-predicts-college-completion-high-school-gpa-
beats-sat-score/#3c6910c04b09 on August 17, 2018.
relatively low (3.00-3.32). In contrast, students with average SAT/ACT scores (900-990) and a
high HSGPA (3.67-4.00) have a much higher expected graduation rate (62 percent). However,
the study used a sample of students who attended a group of less selective four-year public
colleges and universities. This report uses the same method to analyze timely graduation rates
based on UC’s longitudinal data.
In summary, studies that examined relationships between standardized tests and other college
preparation measures such as HSGPA suggested positive relationships between them. The
literature surrounding the predictive power of standardized test scores on student success is
mixed. The College Board and supporters of standardized tests argue that they offer a check
against grade inflation, assess skills needed not just for higher education, but the workplace as
well, and that, in combination with HSGPA, they provide a more “complete” assessment of students’
academic potential. To support their argument, some research does suggest that both the SAT
and the ACT are related to student success within the first year, and that the first year is
important for student success, overall.
IV. What UC’s Data Show about the SAT and ACT
Test Takers and Average Test Scores of UC California Resident Applicants
Appendix A provides a profile of UC California resident applicants who submitted SAT and/or
ACT scores in their applications. From 2001 to 2013, more than 90 percent of UC CA applicants
submitted SAT scores. However, there has been a declining trend in freshman applicants
submitting SAT scores since the early 2010s. In 2018, one year after the College Board launched
its new SAT, only about 86 percent of UC CA freshman applicants submitted SAT scores,
including SAT Math, EBRW, and Essay scores. On the other hand, the share of CA applicants
submitting ACT Composite scores increased steadily from 28 percent in 2001 to 47 percent in
2011, then fluctuated between 44 percent and 53 percent from 2012 through 2018. It is also
important to note that the share submitting ACT ELA scores was substantially lower than the
share submitting ACT Composite scores in 2017 and 2018 (at 46 percent and 38 percent
respectively). It is hard to understand why about 15 percent of those who submitted ACT
Composite scores did not submit ACT ELA scores. Were there any reasons for or obstacles to
this? More analyses are needed to answer this question. However, to determine an applicant’s
eligibility for guaranteed admissions, UC has to use SAT scores for these applicants. If they did
not provide a full set of SAT scores either, they would be ineligible for guaranteed admissions
under the current ETR (Entitled to Review) policy, which requires a full set of either SAT or
ACT scores, including the writing test.
The average SAT I Verbal/Math scores ranged from 1163 to 1177, and average SAT II Writing
scores ranged from 573 to 581, from 2001 to 2005 (Appendix B). The average SAT Critical
Reading/Math scores dropped from 1161 in 2006 to 1117 in 2016 and the SAT Writing scores
dropped as well from 572 in 2006 to 550 in 2016. Students seem to have higher new SAT
Math/EBRW scores in 2017 and 2018 relative to recent SAT Critical Reading/Math averages
(1203 and 1167 respectively).
Despite the changes over time to the ACT, the average ACT composite scores have stayed
largely the same, at 24 or 25 in all years between 2001 and 2018 except 2017, when the average
was 26. The average ACT English with Writing score was 23 or 24 in all years it was reported
(2006 through 2016); very few students submitted English with Writing scores in 2017 and 2018.
The average for the new ELA score was 25 in 2017 and 2018, and the average writing subscore
was 8 (on a 2-12 scale).
The Relationship between Demographic Characteristics and SAT/ACT Scores
Figure 1 below shows an apparently disturbing trend in the proportion of variation in UC
applicants’ SAT scores that is accounted for by fixed student characteristics. The left panel of
the figure shows that annual linear regressions of UC applicants’ SAT scores on three such
characteristics (parental income, parental education, and ethnicity) show that the amount of
demographically explained variation in applicants’ SAT scores (as measured by R²) has
increased from 26 percent in the late 1990s to 43 percent in 2016, suggesting that nearly half of
the variation captured by the SAT score across UC applicants reflects student background
characteristics, not all of which are directly observed by UC’s admissions offices. Meanwhile,
the proportion of explained variation of applicants’ high school GPAs (HSGPAs) has remained
at a far lower level, rising from 5 percent in the 1990s to 11 percent in 2016. Figure 1’s right
panel breaks out each permanent characteristic separately, showing that all three demographics
each explain an additional 10 percentage points of SAT variation since the 1990s.[47]
[Figure 1: Demographics’ Explanatory Power for SAT and HSGPA. Note: R² from annual OLS regressions of
applicants’ SAT score or high school GPA on comprehensive parental education indicators, ethnicity indicators,
and family income (and an indicator for whether family income is reported), combined (left panel) and one at a
time (right panel). Sample restricted to California-resident freshmen.]
[47] A number of modeling assumptions are necessary in the production of this chart, and our choices differ from
those made in the previous version of this brief (September 2018) as well as from Geiser (2015). First, consider the
three demographic characteristics analyzed in this study:
1. Parental income: Both our earlier version and Geiser include only log CPI-adjusted parental income as their
measure of income. This technique implicitly drops two important groups of applicants from the sample: (a)
applicants who report zero parental income, since the log of 0 is non-finite (about 4% of the sample), and (b)
applicants who do not report parental income on their applications, usually because they do not intend to receive
financial aid, indicating high-income households (about 12% of the sample). Omitting these applicants may
mechanically decrease the correlation between income and SAT scores, since they represent the two extremes of
income where the covariance with test scores may be highest. In order to retain these applicants, this analysis
includes three measures of parental income in each regression model: log CPI-adjusted parental income (replaced
with 0 when missing or infinite), an indicator for missing income, and an indicator for zero income. This change
likely explains the higher proportion of SAT variation explained by the presented estimates.
2. Parental education: Our earlier version included only an indicator for whether one parent has a college degree,
while Geiser may have included an ordered integer measuring the more-educated parent’s highest level of
education. Both of these measures simplify a high-dimensional student feature (the educational level of the
student’s parents) into a highly parametric summary. This analysis includes indicator variables for each
combination of educational background held by the applicant’s parents, using the full available information set.
This change may partly account for the aggregate increase in the explanatory power of demographics for
applicants’ SAT scores.
3. Ethnicity: Our earlier version and Geiser include only an indicator for whether the applicant is from an
underrepresented group (Black, Chicano/Latinx, or Native American). This analysis includes indicators for every
observed ethnicity, 15 in all. This may also contribute to the general increase in demographics’ explanatory power
for SAT scores.
The added value of including these multi-dimensional measures of students’ background characteristics is that
they more fully specify each student’s background, leading to more explanatory power and avoiding possibly
important model restrictions that could complicate interpretation (especially in the case of parental income). The
disadvantage of using multi-dimensional measures is that there is no longer a single standardized regression
coefficient associated with each measure, making it impossible to directly compare the degree to which each
contributes to their mutual absorption of SAT variation. As a result, rather than presenting regression coefficients,
we show the degree to which each individual characteristic (measured multi-dimensionally) alone can explain
variation in applicants’ SAT scores. Finally, all three analyses use the same definition of applicants’ SAT score:
the sum of the mathematics and reading components of the SAT exam.
There are at least two well-known interpretations of this important finding. The first is that the
SAT has become increasingly racist, classist, or otherwise biased against students from disadvantaged
backgrounds over time, such that despite their unchanged average aptitude over the past 25
years, disadvantaged applicants’ SAT performance has deteriorated (unlike their HSGPA
performance, which may not face the same systematic issues of bias). The second is that
increased residential segregation by ethnicity and class, combined with increased educational
disparities across California secondary schools, has led to actual average declines in college
preparation among disadvantaged groups, as accurately reflected by their declining SAT scores
(but perhaps not by HSGPAs, which are normed within increasingly disparate high schools).
A third possible explanation is that the trend is driven by changes in the composition of youths in
the estimation sample, most likely as a result of changing UC admissions policies. Expanding
admissions policies that favor disadvantaged applicants, for example, could increase the
explanatory power of demographic characteristics among admits by enlarging the pool of low-
SAT, high-disadvantage students in the sample, which would mechanically increase
demographics’ correlation with scores independent of the exam’s bias or of changes in applicants’
average latent aptitude.
Arbitrating among these alternative explanations for the “Demographic Testing Trend,” or
DTT, is crucial to understanding how the SAT’s informativeness has evolved over the past 25
years. Advocates of the first explanation (the “Bias Explanation”) include Saul Geiser, who
argues that the DTT invalidates use of the SAT in the absence of race-based affirmative
action because it increasingly favors White and Asian applicants (Geiser, 2015). Advocates for
the second explanation (the “Real-Disparities Explanation”) include the College Board, which
argued in its recent meeting with the Task Force that “performance on the SAT differs across
subgroups, which largely reflects educational differences in high schools”. The third explanation
(the “Compositional Explanation”) has drawn less vocal support.
This section presents evidence that approximately one-third of the DTT is explained by the Real-
Disparities Explanation, with the remaining two-thirds explained by the Compositional Explanation.
Figure 1 shows that the Real-Disparities Explanation accounts for 31 percent of the DTT. Figures 2 and 3
motivate the Compositional Explanation, showing the magnitude of UC compositional changes
over the past 25 years, and Figure 4 shows that the Compositional Explanation fully explains the
remaining net DTT trend. Figure 4 directly challenges the idea that the Bias Explanation plays even a
secondary role in explaining the DTT.
First, we rerun the regressions that estimate the cross-student DTT including high school fixed
effects, which eliminate cross-high-school variation that likely reflects increasing educational
disparities in California secondary schools. We show analysis for both the population of UC
applicants and the population of UC enrollees, restricting the sample to California-resident
freshman applicants. If the Real-Disparities Explanation were accurate in the strong sense
(that is, if the real disparities were exclusively across high schools, with no increasing disparities
within high schools), then the results would show no trend in demographics' explanatory
power for applicants' or enrollees' SAT scores.
Figure 2 shows that this is not the case. While far less variation in SAT scores can be
explained by demographics when only comparing students to others from their same high schools,
there is still a notable upward trend in demographics' explanatory power, from 11.3 percent at its
trough in 1999 to 16.4 percent in 2016 among UC applicants (representing a 31 percent decline in
the proportional increase in explanatory power since 1996). Demographics' explanatory power for
HSGPA increased from 4.9 to 7.4 percent in the same period, suggesting even faster proportional
growth (a 51 percent increase, relative to 45 percent). A similar trend holds among UC enrollees.48

Figure 2: Within High School DTT Trend
Note: R² (excluding fixed effects) from annual OLS regressions of applicants' or enrollees' SAT score or high school GPA on comprehensive parental education indicators, ethnicity indicators, and family income (and an indicator for whether family income is reported), including fixed effects by origin high school. Sample restricted to California-resident freshmen.
One important feature that these figures share with the original DTT chart is the decline in
demographics’ explanatory power between 1995 and 1999. These declines likely reflect that
period’s phasing out of UC’s affirmative action program, which decreased underrepresented
minorities’ UC enrollment by at least 700 students per year across all campuses (Bleemer, 2019)
and led to parallel declines in UC URM applications. Under the Compositional Explanation,
demographics' explanatory power declined over this period as the students admitted through
affirmative action, who tended to have lower SAT scores that could be explained by their demographic
characteristics, stopped enrolling at UC. This observation provides the first circumstantial
evidence supporting the role of student composition in regulating the correlational relationship
between demographics and SAT performance.
In order to further motivate the Compositional Explanation, we next plot the annual variance in
SAT scores and HSGPAs across all UC campuses. Figure 3 shows that the amount of variance in
SAT scores among UC applicants has been swiftly rising since about 2005, while the amount of
variance in HSGPA fell in the late 1990s and early 2000s and has persisted at the lower level.
The trends among UC enrollees are even more pronounced; variation in SAT scores among UC
applicants has increased by more than 10 percent since 1996, while variation in HSGPAs has
fallen by more than 20 percent. These trends likely reflect two important admissions policies
Eligibility in the Local Context and Holistic Reviewthat have substantially replaced
affirmative action since the 1990s in enrolling disadvantaged applicants. As various UC
campuses increase their numbers of low-SAT high-HSGPA students (the latter of which is
measured relative to the lower-preparedness high schools from which the disadvantaged
applicants are pulled), SAT scores are increasingly varying across the campuses’ student bodies.
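A minimal sketch of the normed annual standard deviations plotted in Figure 3 (not the report's code; a DataFrame with app_year, sat, and hsgpa columns is assumed):

    import pandas as pd

    def normed_annual_sd(df, cols=("sat", "hsgpa")):
        # Annual standard deviations, normed so each series averages 1 across years,
        # smoothed with a two-year moving average as in Figure 3.
        sd = df.groupby("app_year")[list(cols)].std()
        sd = sd / sd.mean()
        return sd.rolling(2).mean()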
48 Figures 2 and 5 present "projected R²" measures from the relevant annual linear regression estimates of SAT on demographic characteristics. To be more specific, these models include high school fixed effects, which themselves (importantly) absorb some cross-school variation in SAT performance, and there is no reason to include that explanatory power in the reported R². Instead, we merely report the R² of projected SAT scores, after differencing out the high-school-specific averages, regressed on demographic characteristics. This is a standard technique implemented using the fixed-effect linear regression package felm in R.
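For concreteness, a minimal sketch of the projected-R² calculation described in this footnote, written in Python rather than with felm in R (column names such as app_year, hs_id, sat, parent_educ, ethnicity, and log_income are illustrative assumptions):

    import pandas as pd
    import statsmodels.api as sm

    def projected_r2(df, year):
        # R^2 of within-high-school SAT variation explained by demographics.
        d = df[df["app_year"] == year].copy()
        # Difference out high-school-specific average SAT scores (the fixed effects).
        d["sat_within"] = d["sat"] - d.groupby("hs_id")["sat"].transform("mean")
        # Demographic regressors: education and ethnicity dummies plus income terms.
        X = pd.get_dummies(d[["parent_educ", "ethnicity"]].astype(str), drop_first=True)
        X["log_income"] = d["log_income"].fillna(0.0)
        X["income_missing"] = d["log_income"].isna().astype(int)
        X = sm.add_constant(X.astype(float))
        return sm.OLS(d["sat_within"], X, missing="drop").fit().rsquared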
Figure 3: Change in Annual SAT and HSGPA Standard Deviations Since 1995
Note: Annual standard deviation in SAT and HSGPA of UC applicants and enrollees. SAT and HSGPA are
normed to have standard deviation 1 on average across all years. Plot shows two-year moving averages.
In other words, the increased variance is by design, an artifact of admissions policies which
intentionally target lower-SAT applicants. We will return below to another explanation for the
increase in SAT variation over time: the expansion of the Riverside and Merced campuses.
The next plot visualizes the proportion of California high school students who apply to at least
one University of California campus. One of the chief successes (and political challenges) of
UC's Comprehensive and Holistic Review admissions programs has been their encouragement
of applications from high school graduates who previously would not have applied to UC
because of their low perceived likelihood of admission. Figure 4, pulled from Douglass and
Bleemer (2018), shows that the proportion of 18-year-olds in California who apply to at least one
UC campus has doubled since 1995, from about 10 percent to about 20 percent. Some of this
increase comes from rising high school graduation rates, but even among graduates the
proportion of applicants has increased by about 8 percentage points, to almost 25 percent. This
change in application behavior has surely dramatically altered the composition of UC applicants,
and is also reflected in UC's student body as a result of changing admissions policies. These charts,
showing increasing SAT variation and increasingly broad application behavior by California
youths, strongly suggest that compositional changes in UC applicants and enrollees are central
factors in explaining the DTT: after all, UC has spent the past 20 years bolstering admissions
policies that favor the lower-SAT disadvantaged applicants who would mechanically increase
the SAT-demographics correlation.
Figure 4: Proportion of California Residents Who Apply to UC
Note: The proportion of California 18-year-olds and California high school graduates who apply to at least one UC campus in each year since 1994. The annual number of California 18-year-olds is as estimated by the California Department of Finance, which also reports the annual number of high school graduates in the state.

Finally, and perhaps most consequentially, we replicate Figure 2 by campus (for applicants).
Under either the Bias Explanation or the Real-Disparities Explanation, we would expect
demographics' explanatory power for the SAT to have increased consistently at every UC campus,
either because of consistent bias or consistently varying disparities across applicants. In fact, Figure 5 shows a
very different pattern. Only a single campus, Santa Barbara, manifests any increase in
demographics-SAT correlation, and even there the increase in demographics’ explanatory power
for the SAT is slight (from 14.8 to 15.2 percent). Most other campuses have faced almost no
change in demographics’ explanatory power since the end of affirmative action in the late 1990s
(like Irvine, Davis, and UCLA) or have actually experienced declines in demographics’
explanatory power (San Diego, Riverside, and most notably Merced). What would cause these
patterns?
The answer is the Compositional Explanation. Campuses with high explanatory power
(especially Merced, which in the late 2000s had demographic explanatory power in the 30 to 40
percent range) have grown, in enrollees but especially in applicants (who would not otherwise
have applied to UC campuses). The end of affirmative action pushed demographics' explanatory
power down, especially at the Berkeley and UCLA campuses where that program had been most
effective, and the end of the old ELC program in 2011 also appears to have compressed
demographics' explanatory power at the campuses where that program was most effective (San
Diego, Davis, and Irvine). Meanwhile, all of the campuses were growing more selective on
average, compressing their HSGPA distributions, while also instituting disadvantage-focused
admissions programs that purposely admitted students whose low test scores were offset by
measures of disadvantage, mechanically strengthening the correlation between demographics
and the SAT, both among enrollees and among the applicants that the programs encouraged. These results
are very difficult to reconcile with the Bias Explanation, and strongly suggest that the net DTT
remaining after the Real-Disparities Explanation can be fully explained by the Compositional Explanation.

Figure 5: Within High School DTT Trend by Campus
Note: R² (excluding fixed effects) from annual OLS regressions of applicants' or enrollees' SAT scores on comprehensive parental education indicators, ethnicity indicators, and family income (and an indicator for whether family income is reported), including fixed effects by origin high school. Estimated separately for each UC campus. Sample restricted to California-resident freshmen.
As a result, we conclude that about one third of the DTT can be explained by the Real-Disparities
Explanation, since comparing only students who attended the same high school eliminates 31
percent of the trend in demographics' explanatory power for SAT scores. The remainder appears to be
explained by the Compositional Explanation, which is motivated by measured compositional
changes within and across UC campuses in their applicant and student bodies and confirmed by
the absence of a DTT trend within 7 out of 8 UC campuses, which nevertheless show patterns
strongly consistent with the implementation of disadvantage-favoring admissions programs.
Indeed, the recently increasing SAT-demographics correlation at many campuses is best
understood as a manifestation of their admissions policies' success in attracting the
disadvantaged lower-SAT students they are designed to attract.
Standardized Tests and Academic Preparation
This section examines the correlations between SAT/ACT scores and high school GPA as well
as the relationship between SAT/ACT scores and UC’s Analytical Writing Placement Exam
(AWPE) score. SAT Writing refers to the SAT II writing for 2001 through 2005, SAT Writing
for 2006 through 2016, and SAT Essay for 2017 and 2018. SAT Reading/Math refers to SAT I
Verbal and Math for 2001 to 2005, SAT Critical Reading and Math for 2006 to 2016, and SAT
Evidence-Based Reading and Writing (EBRW) and Math for 2017 and 2018. ACT Writing refers
to the ACT Combined English/Writing (CEW) scores for 2006 through 2016 and the writing
(essay) subscore for 2017 and 2018. ACT Composite combines the English, Reading, Math, and
Science multiple choice sections. CEW combined the English multiple choice section with the
writing (essay) section. On the new ACT, the writing subscore is combined with the English and
Reading multiple choice sections to produce an English Language Arts (ELA) score.
Table 1 shows the across and within school correlations between SAT/ACT and high school
GPA controlling for socioeconomic status (family income and parental education). Generally,
the within school correlations between SAT scores and high school GPA are slightly higher than
the across school correlations. This finding is consistent with what Zwick and Green (2007)
found in their analysis.49 They concluded that one of the primary reasons is that grading
stringency varies across both high schools and courses, which can result in small
between-school variability in high school grades. Their study also indicates that between-school
variance in mean test scores is likely to be larger than within-school variance. For both
across and within school correlations between SAT and HSGPA, we see an increasing trend from 2005
to 2018, except that the correlations between SAT Writing and high school GPA dropped in 2018, which
might be due to the new SAT Essay (introduced in 2017) and its different scoring scale. Across the
years, the correlations between SAT Writing/Essay and HSGPA are consistently lower than
those between SAT Reading/Math and HSGPA. Overall, the correlations between SAT and
HSGPA are moderate, at around .50.
49 Zwick, R. and Green, J.G. (2007). New Perspectives on the Correlation of SAT Scores, High School Grades, and Socioeconomic Factors. Journal of Educational Measurement. Retrieved from https://onlinelibrary.wiley.com/doi/full/10.1111/j.1745-3984.2007.00025.x, May 3, 2019.
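A minimal sketch of the partial correlations in Table 1 (assumed column names; the report's exact procedure may differ): residualize both the test score and HSGPA on the SES controls and correlate the residuals, demeaning by high school first for the within-school version.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def residualize(y, controls):
        # Residuals of y after regressing it on the SES controls.
        return sm.OLS(y, sm.add_constant(controls.astype(float))).fit().resid

    def partial_corr(df, test_col="sat_rm", within_school=False):
        d = df.dropna(subset=[test_col, "hsgpa", "family_income", "parent_educ"]).copy()
        if within_school:
            # Compare students only to peers at their own high school.
            for col in (test_col, "hsgpa"):
                d[col] = d[col] - d.groupby("hs_id")[col].transform("mean")
        ses = pd.get_dummies(d["parent_educ"].astype(str), drop_first=True).astype(float)
        ses["family_income"] = d["family_income"]
        return np.corrcoef(residualize(d[test_col], ses), residualize(d["hsgpa"], ses))[0, 1]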
Similarly, the within school correlations between ACT and high school GPA are higher than the
across school correlations. Generally, there is an upward trend of correlations between ACT and
HSGPA across years, except for 2018 when there is a drop from 2015. ACT Writing is also less
correlated with HSGPA than ACT Composite.
Table 1. Partial Correlation of Standard Test Scores and High School GPA
                                           2001    2005    2007    2015    2018
SAT Reading/Math
  Across schools                           0.47    0.45    0.48    0.50    0.51
  Within schools                           0.53    0.51    0.53    0.55    0.56
SAT Writing (SAT Essay for 2018)
  Across schools                           0.44    0.44    0.46    0.48    0.39
  Within schools                           0.49    0.49    0.50    0.52    0.41
Sum of SAT Reading/Math and SAT Writing
  Across schools                           0.48    0.47    0.50    0.52    n/a
  Within schools                           0.54    0.54    0.56    0.57    n/a
ACT Composite
  Across schools                           0.47    0.46    0.46    0.50    0.47
  Within schools                           0.51    0.51    0.51    0.55    0.53
ACT Writing (ACT ELA for 2018)
  Across schools                           n/a     n/a     0.43    0.46    0.30
  Within schools                           n/a     n/a     0.47    0.49    0.31
Sum of ACT Composite and ACT Writing
  Across schools                           n/a     n/a     0.47    0.50    n/a
  Within schools                           n/a     n/a     0.52    0.55    n/a
Note: All correlations control for family income and parental education.
A previous analysis also looked at the relationship between the new SAT and new ACT and
UC's Analytical Writing Placement Exam (AWPE) for the Fall 2017 incoming class.50 The
analysis found that 76 percent of those at or above a threshold of 690 for SAT Evidence-Based
Reading and Writing (EBRW) passed the AWPE (Table 2). On the old SAT, more than 75
percent of those meeting the old threshold of 680 on SAT Writing would have passed the
AWPE. On the SAT Essay, even at the top score of 24, only 69 percent passed the AWPE, and
only two percent of test-takers achieved that score.51 For ACT, 75 percent or more of students
scoring 30 or higher on ACT English Language Arts (ELA) would have passed the AWPE. This
is similar to the pattern for the old threshold using the old ACT Combined English/Writing
scores. The report recommended setting a threshold for passing the Entry Level Writing
Requirement (ELWR) of 690 on SAT EBRW, leaving the threshold of 30 on the ACT ELA in
place, and not setting a new threshold using SAT Essay.
50 University of California Office of the President, Institutional Research and Academic Planning. (2017). New SAT and ACT tests and the Entry Level Writing Requirement.
51 A later analysis looked at the SAT Essay sub-scores, showing that in Fall 2017, 65% of those with a top Writing score of eight passed the AWPE and only 5% of test-takers achieved this score; 73% of those with a top Analysis score of eight passed the AWPE and only 2% of test-takers achieved this score; 65% of those with a top Reading score of eight passed the AWPE and only 5% of test-takers achieved this score.
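A minimal sketch of how pass rates above and below a score threshold can be tabulated (assumed column names; this mirrors the layout of Table 2 rather than reproducing the original analysis):

    import pandas as pd

    def awpe_pass_table(df, score_col="sat_ebrw", threshold=690):
        d = df.dropna(subset=[score_col, "passed_awpe"]).copy()
        band = (d[score_col] >= threshold).map({True: f">= {threshold}", False: f"< {threshold}"})
        passed = d["passed_awpe"].map({True: "Passed AWPE", False: "Failed AWPE"})
        # Row-normalized shares: fraction of each score band that passed or failed.
        return pd.crosstab(band, passed, margins=True, normalize="index")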
Table 2. Number of test-takers by AWPE score status and SAT EBRW, SAT Essay, or ACT
ELA score, Fall 2017
SAT EBRW score      Failed AWPE       Passed AWPE       Total
>= 690              185 (24%)         572 (76%)         757 (100%)
< 690               6,754 (59%)       4,778 (41%)       11,532 (100%)
Total               6,939 (56%)       5,350 (44%)       12,289 (100%)

SAT Essay score     Failed AWPE       Passed AWPE       Total
= 24                15 (31%)          34 (69%)          49 (100%)
< 24                6,924 (57%)       5,316 (43%)       12,240 (100%)
Total               6,939 (56%)       5,350 (44%)       12,289 (100%)

ACT ELA score       Failed AWPE       Passed AWPE       Total
>= 30               n/a               n/a               n/a
< 30                6,939 (56%)       5,350 (44%)       12,289 (100%)
Total               6,939 (56%)       5,350 (44%)       12,289 (100%)
Note: This table includes all incoming freshmen who submitted a statement of intent to register (SIR) and had both an AWPE
score and an SAT EBRW, SAT Essay, or ACT ELA score (as applicable). Students with a 30 or higher on the ACT ELA did not have
to take the AWPE.
On the basis of the data analysis described above, the University Committee on Preparatory
Education (UCOPE), a committee of the UC Academic Senate, approved a new threshold of 680 on
SAT EBRW for passing the Entry Level Writing Requirement (ELWR), left the threshold of 30
on ACT ELA in place, and did not set a new threshold using SAT Essay. Note that SAT EBRW
is based on multiple choice sections of the test while ACT ELA is based on multiple choice
sections and the writing/essay section. SAT Essay is a stand-alone essay section.
In summary, the partial correlations between test scores and HSGPA after controlling for SES
are moderate. This means that standardized test scores and HSGPA both measure part of
students' knowledge and skills, but each also captures aspects of students' academic preparation
that the other does not. In addition, the writing tests are less correlated with HSGPA than the
Reading/Math or composite tests, which means the SAT/ACT writing tests may provide additional
information about students' qualifications that HSGPA cannot provide. High scores on the SAT
EBRW and ACT ELA are associated with passing UC's AWPE writing exam, but this is not
necessarily true for the SAT Essay.
Standardized Tests and Student Success
This section examines the relationship between the SAT/ACT and college success at UC through
four outcome measures: (1) student first-year college GPA; (2) first-year retention; (3) four-year
graduation; and (4) college graduation GPA. The section first presents results of descriptive
analysis and then inferential statistics to show how well standardized test scores predict college
success.
Figure 6 shows mean first-year GPA, first year retention rates, four-year graduation rates and
mean graduation GPA of four-year graduates by HSGPA and SAT Reading/Math or ACT
Composite quartile. Not surprisingly, students with a higher HSGPA and a higher SAT
Reading/Math score or ACT Composite score (i.e., both in the highest quartiles) tend to have
better performance on all four measures than students in the lowest test score and HSGPA
quartiles. On average, their first-year GPA is higher than the GPA of those in the lowest HSGPA
and test score quartiles by one point, or about one third; their first year retention rate is almost
100 percent, compared to about 90 percent for those in the lowest quartiles. The difference in
graduation GPA between those in the highest and the lowest quartiles is smaller, but it is still
about a half point. On average, students in the highest quartiles are twice as likely to graduate
within four years.
In addition, Figure 6 shows that students in the same HSGPA quartile, but in a higher test score
quartile are more likely to perform better, especially in terms of freshman GPA, first year
retention rates, and four-year graduation rates than those in a lower test score quartile. For
example, 83 percent of students in the highest HSGPA (4.23-4.61) and the highest SAT (1350-
1600) quartiles graduate within four years, compared to 57 percent of those in the same HSGPA
quartile, but with a lower SAT score (620-1070). This indicates that test scores do provide
additional value beyond HSGPA to explain differences in students’ performance at UC.
Results further show that students with a high test score (1350-1600) but a low HSGPA
(3.00-3.78) tend to have a higher freshman GPA than students with a low SAT/ACT score
(620-1070), regardless of the latter's HSGPA quartile. The first-year retention graphs, however,
show the opposite pattern: students in the highest HSGPA quartile and the lowest test score
quartile are retained at a rate of 95 percent, compared to 92 percent for those in the lowest
HSGPA quartile and the highest SAT/ACT quartile. Similar to what was found for first-year GPA
and retention rates, students with higher test scores but lower HSGPA tend, on average, to have a
higher graduation GPA and a lower four-year graduation rate than those with a higher HSGPA but
a lower score on SAT Reading/Math or ACT Composite. This finding suggests that HSGPA may
be slightly more predictive of first-year retention and four-year graduation, while standardized test
scores may be slightly more predictive of both first-year and graduation GPA. Regression models
are developed in the following sections to further examine whether this conclusion based on descriptive
statistics holds and how HSGPA and test scores are related to college success.
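A minimal sketch of the descriptive cross-tabulation behind Figure 6 (assumed column names; the quartile cut points used in the report may differ):

    import pandas as pd

    def outcome_by_quartile(df, outcome="first_year_gpa", test_col="sat_total"):
        d = df.dropna(subset=[outcome, "hsgpa", test_col]).copy()
        d["hsgpa_quartile"] = pd.qcut(d["hsgpa"], 4)
        d["test_quartile"] = pd.qcut(d[test_col], 4)
        # Mean outcome for each test-score quartile (rows) by HSGPA quartile (columns).
        return d.pivot_table(values=outcome, index="test_quartile",
                             columns="hsgpa_quartile", aggfunc="mean", observed=True)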
Figure 6. First-Year GPA, First-Year Retention Rates, Graduation GPA of Four-Year Graduates, and
Four-Year Graduation Rates by HSGPA and SAT Reading/Math or ACT Composite Quartile
Data shown in Figure 6 (rows: SAT Reading/Math or ACT Composite quartile; columns: HSGPA quartile):

First-Year GPA
                 3.00-3.78   3.79-4.04   4.05-4.22   4.23-4.61
SAT 620-1070        2.58        2.71        2.82        2.94
SAT 1080-1210       2.80        2.97        3.04        3.15
SAT 1220-1340       2.98        3.14        3.22        3.34
SAT 1350-1600       3.06        3.25        3.42        3.55
ACT 9-23            2.56        2.72        2.83        2.92
ACT 24-27           2.86        2.98        3.05        3.16
ACT 28-31           2.97        3.14        3.24        3.38
ACT 32-36           3.12        3.30        3.40        3.54

First-Year Retention
                 3.00-3.78   3.79-4.04   4.05-4.22   4.23-4.61
SAT 620-1070         87%         90%         93%         95%
SAT 1080-1210        90%         93%         95%         97%
SAT 1220-1340        91%         95%         96%         97%
SAT 1350-1600        92%         96%         98%         98%
ACT 9-23             86%         90%         93%         96%
ACT 24-27            91%         94%         95%         97%
ACT 28-31            91%         96%         97%         97%
ACT 32-36            96%         97%         97%         98%

Graduation GPA
                 3.00-3.78   3.79-4.04   4.05-4.22   4.23-4.61
SAT 620-1070        2.98        3.05        3.14        3.12
SAT 1080-1210       3.12        3.20        3.25        3.32
SAT 1220-1340       3.19        3.23        3.35        3.45
SAT 1350-1600       3.18        3.31        3.43        3.57
ACT 9-23            2.96        3.02        3.12        3.12
ACT 24-27           3.11        3.16        3.22        3.29
ACT 28-31           3.16        3.22        3.34        3.43
ACT 32-36           3.20        3.33        3.42        3.53

Four-Year Graduation Rates
                 3.00-3.78   3.79-4.04   4.05-4.22   4.23-4.61
SAT 620-1070         41%         47%         54%         57%
SAT 1080-1210        50%         63%         67%         70%
SAT 1220-1340        52%         65%         73%         79%
SAT 1350-1600        49%         62%         72%         83%
ACT 9-23             41%         43%         53%         56%
ACT 24-27            51%         62%         66%         70%
ACT 28-31            56%         66%         74%         77%
ACT 32-36            49%         65%         73%         82%

Standardized Tests and Academic Preparation: Aggregate First-Year GPA
Table 3 shows the percent of variance in freshman GPA accounted for by HSGPA and test scores
based on a series of regression models. Results indicate that HSGPA, SAT Reading/Math, SAT
Writing, ACT Composite, and ACT Writing are all moderate predictors of freshman GPA at UC.
HSGPA accounted for 17 to 20 percent of the variance in freshman GPA prior to 2007, then 15
percent in 2012 and 13 percent in 2015, the lowest share in the period studied, while the variance
accounted for by test scores increased over time from 13 percent in 2001 to 20 percent in 2015.
Table 3: Percent of Variance in Freshman GPA Accounted for by HS GPA and the SAT/ACT
Model                                                   2001    2005    2007    2012    2015
(1) HSGPA                                                17%     18%     20%     15%     13%
(2) SATRM                                                13%     17%     19%     21%     20%
(3) SAT Writing                                          15%     18%     19%     20%     19%
(4) SAT Total (SATRM + Writing)                          17%     19%     21%     22%     22%
(5) HSGPA + SATRM                                        22%     26%     27%     26%     25%
    Standardized coefficients: HSGPA                     0.31    0.32    0.29    0.24    0.23
    Standardized coefficients: SATRM                     0.25    0.29    0.31    0.36    0.36
(6) HSGPA + SAT Total (SATRM + Writing)                  24%     27%     28%     27%     26%
    Standardized coefficients: HSGPA                     0.30    0.30    0.30    0.22    0.22
    Standardized coefficients: SAT Total                 0.28    0.32    0.33    0.38    0.38
(7) ACT Composite                                        16%     21%     20%     23%     22%
(8) ACT Writing                                          n/a     n/a     19%     18%     19%
(9) ACT Total (Composite + Writing)                      n/a     n/a     22%     23%     22%
(10) HSGPA + ACT Composite                               22%     27%     27%     27%     26%
     Standardized coefficients: HSGPA                    0.28    0.31    0.29    0.22    0.23
     Standardized coefficients: ACT Composite            0.29    0.30    0.33    0.38    0.39
(11) HSGPA + ACT Total (ACT Composite + ACT Writing)     n/a     n/a     29%     27%     26%
     Standardized coefficients: HSGPA                    n/a     n/a     0.28    0.22    0.22
     Standardized coefficients: ACT Total                n/a     n/a     0.35    0.38    0.38
(12) HSGPA + SAT Total + Demographics                    30%     32%     34%     32%     32%
     Standardized coefficients: HSGPA                    0.39    0.39    0.38    0.34    0.34
     Standardized coefficients: SAT Total                0.25    0.26    0.28    0.34    0.34
There are multiple reasons for the declining trend in the explanatory power of HSGPA. Since 2001,
more and more campuses have adopted a comprehensive review process in admissions, including
several that have adopted holistic review.52 Under this process, campuses look beyond grades and
test scores to evaluate students' qualifications for admission. However, several aspects of UC
admissions policy still emphasize high school grades and test scores: the minimum requirement
for UC admissions eligibility is to complete at least 15 college-preparatory courses with a
letter grade of C or better; the Eligibility in the Local Context (ELC) program relies purely on
HSGPA in UC-approved coursework completed in the 10th and 11th grades; the statewide
eligibility indices were created based on test scores and HSGPA; and the UC Regents increased the
minimum HSGPA required for UC freshman eligibility from 2.80 to 3.00, effective for the fall
2007 entering class. Therefore, HSGPA still plays a significant role in eligibility and admissions at
UC.
52 All campuses have used comprehensive review since 2002 and six campuses have adopted holistic review as their method of implementing comprehensive review: Berkeley (starting 2002), Los Angeles (2007), Irvine (2011), San Diego (2011), Davis (2012), Santa Cruz (2012).
In addition, rapid growth of the qualified applicant pool pushed up overall selectivity at
all UC campuses, and analyses show that recent UC admits have higher HSGPAs than admits in
prior years. Together, these factors may have led to less variability in the HSGPA of enrolled students.
This change, along with greater variability in test scores caused by educational disparities among
California K-12 schools and roughly constant variability in freshman GPA at UC over the years, may
be among the many reasons contributing to the change in the explanatory power of
HSGPA and test scores for freshman GPA at UC. (See the previous section, "The Relationship
between Demographic Characteristics and SAT/ACT Scores.")
Two sets of models with the sum of SAT Reading/Math and SAT Writing (Model 4) or the sum
of ACT Composite and ACT Writing (Model 9) were developed to examine how much
additional variance writing scores accounted for beyond SAT Reading/Math or ACT Composite.
The reason for using the sum of the two scores is that SAT Reading/Math (ACT
Composite) and SAT Writing (ACT Writing) are highly correlated (e.g., r = .85 for the freshman
entering cohort in 2015), so entering the two measures in the same regression model would create
a collinearity issue. Results indicate that in 2001, adding SAT Writing to SATRM increased the
variance explained from 13 percent to 17 percent, or by four percentage points, but since 2005 it
has increased the explained variance by only about two percentage points (the difference in
explained variance between Model 2 and Model 4). Similarly, adding ACT Writing scores to
ACT Composite scores barely increases the explanatory power (the difference in variance
between Models 7 and 9). In the most recent year, adding writing scores to reading/math or
composite scores does increase the explanatory power for variation in freshman GPA, but the
increase is not substantial.
In addition, four models were developed to examine how much additional variance in freshman
GPA standardized test scores account for beyond HSGPA. As shown by Models 5, 6,
10, and 11, adding SAT Reading/Math or ACT Composite to the model in recent years (e.g., in
2015) doubled the variance accounted for by HSGPA alone. However, adding SAT Total or
ACT Total to the HSGPA models hardly changed the variance accounted for relative to the HSGPA and
SATRM or ACT Composite models (the difference in variance between SAT Models 5 and 6, and
ACT Models 10 and 11). Consistent with the earlier finding, writing scores do not add
value in predicting students' freshman GPA beyond HSGPA and the SAT Reading and Math tests
or the ACT Composite test. Also, the standardized coefficients in these multivariate regression
models indicate that test scores are stronger predictors of freshman GPA than HSGPA, especially
for the 2012 and 2015 entering cohorts.
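A minimal sketch of the standardized coefficients and explained-variance comparisons reported in Table 3 (assumed column names; the report's exact model specifications may differ):

    import pandas as pd
    import statsmodels.api as sm

    def std_betas_and_r2(df, outcome="first_year_gpa", predictors=("hsgpa", "sat_total")):
        cols = [outcome, *predictors]
        d = df.dropna(subset=cols)[cols]
        z = (d - d.mean()) / d.std()   # z-score the outcome and the predictors
        fit = sm.OLS(z[outcome], sm.add_constant(z[list(predictors)])).fit()
        return fit.params.drop("const"), fit.rsquared   # standardized betas and R^2

    # The incremental value of writing scores can be read off by comparing the R^2 from
    # predictors=("hsgpa", "sat_rm") with the R^2 from predictors=("hsgpa", "sat_total").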
The analysis in previous sections of this report indicates that student characteristics (parental
education, family income, and race/ethnicity) account for 26 percent of the variation in applicants' SAT
scores in the late 1990s and more than 40 percent in recent years. The explanatory power of these
three factors for HSGPA has also increased, from five percent in 2000 to 11 percent in recent
years. Thus, it is helpful to examine the relationship between HSGPA and/or SAT Total and
freshman GPA after controlling for student demographics. We ran regression models adding
student demographics and related characteristics such as campus affiliation, major discipline,
first-generation status, family income, and high school API quintile (Model 12 in Table 3). Results show that
controlling for demographics increased the explained variation in freshman GPA by six percentage points
(the difference between Model 6 and Model 12). Results further show that, after controlling for
demographics, the standardized coefficient on HSGPA increased from 0.22 to 0.34 in the 2015
model, while the standardized coefficient on SAT Total decreased from 0.38 to 0.34, indicating a
decrease in the explanatory power of SAT Total for freshman GPA. This implies that SAT scores are
more strongly associated with demographics than HSGPA is, consistent with earlier findings.
The influence of varying student characteristics on the relationship between HSGPA and
SAT/ACT scores and first-year GPA is further examined based on 2015 entering students.
Detailed model coefficients are presented in Appendix C. Results are generally consistent with
those from the overall models (Table 3): standardized test scores were generally a stronger
predictor of first-year GPA than high school GPA within groups defined by campus, discipline,
race/ethnicity, first-generation status, family income, and CA API ranking. For some groups, HSGPA
predicted first-year GPA better; for example, HSGPA appeared to be a stronger predictor than
test scores for African American students, White students, and students with family income over
$161,000, among others.
We also examined the explanatory power of the SAT Essay score and its three component
subscores (Reading, Analysis, and Writing), using the most recent 2017 freshman entering
cohort data. Results (Table 4) show that the Essay score accounts for only nine percent of
variation in freshman GPA, which is much less explanatory power than SATRM (21 percent)
and the SAT Writing score (20 percent) in earlier years. Each of the three components accounts
for six to eight percent of variation in freshman GPA, with Essay Analysis being the strongest
predictor. Adding the Essay Total to HSGPA increased the explanatory power only modestly
(from 16 to 19 percent). Given the limited explanatory power added by the Essay scores, it is worth
considering the social costs of additional writing tests. It is not ideal for students to spend
substantial time preparing for various tests when they could be focusing their energy on more
important academic and social activities that could benefit them in the future.
Table 4: Percent of Variance in Freshman GPA Accounted for by HSGPA and the SAT
Model                       2017
(1) HSGPA                    16%
(2) SATRM                    21%
(3) Essay Total               9%
(4) Essay Reading             6%
(5) Essay Analysis            8%
(6) Essay Writing             7%
(7) HSGPA + Essay Total      19%
Standardized Tests and Academic Preparation: First-Year Course-Level Performance
The previous analyses of the relationship between application components and freshman GPA
are limited by the substantial variation in average grades across campuses, departments, and even
individual professors. For example, students enrolled in first-year courses in Engineering
departments tend to have higher-than-average SAT scores but earn lower-than-average grades,
not as a result of poor performance but rather due to the higher standards and lower grade curve
enforced by Engineering departments nationwide. As a result, Engineering departments decrease
the positive correlation between SAT scores and freshman grades, though the decrease is
uninformative about actual student performance.
In order to “control for” students’ freshman course selections, we disaggregate our analysis from
the level of freshman GPAs to the individual-course level. The resulting data contains the
numerical grades (from 0 to 4) received by each student in each course. Courses are weighted by
their number of units, and each student is given equal (total) weight. Unlike the analysis above,
these “within-course” estimates condition on the specific courses taken by each student (that is,
each linear regression is estimated using course-semester fixed effects). The resulting estimates
can be interpreted as the amount of variation within each course a student completes in their first
year (that is, only comparing the student's performance to that of the other students in the
same class) that can be explained by each application component. As a result of data
availability, we estimate course-level results from 2001 to 2016 for students at only three UC
campuses: Berkeley, Davis, and Riverside. These regressions do not control for any fixed student
characteristics, though controlling for characteristics like race and gender hardly changes the
findings.
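A minimal sketch of the within-course estimates (assumed column names such as course_term, units, and grade; the report uses course-semester fixed effects, which the demeaning below approximates):

    import pandas as pd
    import statsmodels.api as sm

    def within_course_r2(grades, predictors=("hsgpa", "sat")):
        cols = ["grade", *predictors]
        d = grades.dropna(subset=cols + ["units", "course_term", "student_id"]).copy()
        # Weight courses by units, rescaled so each student carries equal total weight.
        d["w"] = d["units"] / d.groupby("student_id")["units"].transform("sum")
        # Within transformation: compare each grade only to others in the same course-term cell.
        d[cols] = d[cols] - d.groupby("course_term")[cols].transform("mean")
        X = sm.add_constant(d[list(predictors)].astype(float))
        return sm.WLS(d["grade"], X, weights=d["w"]).fit().rsquared

The same helper can be run on subsets of courses (for example, grouped by a broad discipline label) to produce by-area breakdowns like those reported in Table 6.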
Table 5 shows the percent of within-course freshman grade variation that can be explained by
high school GPA (HSGPA) and SAT scores. In 2001, HSGPA and SAT each explained about 8-
9 percent of within-course performance variation, but reflecting a similar trend in the aggregate
GPA results, the two slowly diverge over the following years; by 2016, HSGPA only explains
about 5 percent of variation (likely reflecting the decline in cross-student HSGPA variation as
UC becomes more selective), while SAT explains 12 percent of variation. Moreover, HSGPA
and SAT strongly complement each other: the R² from including both is nearly the sum of the two
individual R² values, implying that they explain independent components of students' first-year
course performance (see Model 6 compared to the sum of Model 1 and Model 4). The SAT Writing
exam has lower explanatory power, which has increased only slightly over the past 15 years.
Estimates of standardized coefficients show that both HSGPA and SAT explain separate components
of first-year course performance, with a one-standard-deviation increase in SAT scores correlating
with a 0.3 point increase (almost a full grade step, e.g., A- to A) in each course taken by first-year
students.

Table 5: Percent of Within-Course Freshman GPA Explained by HS GPA and SAT (3 Campuses)
Note: Reported R² from the projected performance outcome net of course-specific fixed effects, which also capture campus effects. Courses are weighted by units earned and then normalized to give each student equal weight.
We conclude from this analysis that both HSGPA and SAT scores are independently-important
explanatory factors for first-year students’ course performance relative to their peers in similar
courses, with the latter growing in relative importance as SAT predictive validity improves and
HSGPA variation declines.
Disaggregating freshman GPAs to the course level also enables more precise analysis of the
explanatory power for student performance in different academic disciplines, which might
provide further insight into the specific values of each measure of students’ academic
preparedness. We identify first-year courses taught in four areas (Humanities, Social Sciences,
Natural Sciences, and Engineering) and estimate the proportion of variation in course
performance across areas that can be explained by SAT and HSGPA.
Table 6 shows expected explanatory patterns. Disaggregating the SAT into its Reading and
Mathematics components, we find that the Reading component explains more variation in
Humanities and Social Science course performance (7-13%) while the Mathematics component
explains more variation in Natural Science and Engineering course performance (7-14%). SAT
scores explain substantially more variation than HSGPA alone; in Engineering, for example,
high school GPA explains 2.3 percent of performance, but its combination with the SAT
components explains more than 12 percent of performance. The SAT Writing out-performs
HSGPA in all four areas, explaining between 5 and 12 percent of variation, but its addition to
HSGPA and the two primary SAT components only explains substantial variation (almost 1%) in
the Humanities.
Table 6: Percent of Within-Course 2016 GPA Explained by HSGPA and SAT by General Area
Note: Reported R² from the projected performance outcome net of course-specific fixed effects, which also capture campus effects. Courses are weighted by units earned and then normalized to give each student equal weight.
The scaled coefficients reported in Table 6 show that a one standard deviation increase in
HSGPA is associated with increases in students’ performance in each of their first-year courses
by 0.19 grade points in the Humanities and 0.30 grade points in the Natural Sciences. A one
standard deviation increase across students’ SAT scores yields even stronger correlations when
holding HSGPA fixed, by 0.22 points in the Humanities and 0.45 points in the Natural Sciences,
with non-negligible contributions by each SAT component in the areas of their respective
strengths. We conclude that both HSGPA and SAT are independently-important explanatory
factors for first-year students’ course performance across academic disciplines, with SAT
particularly important in cases like Engineering courses where HSGPA alone has lower
explanatory power.
Finally, we identify two course types of particular interest: freshman writing courses, which are
required for most enrollees (unless they satisfy their Entry Level Writing Requirement via
examination), and Organic Chemistry, the key gatekeeper course for the popular pre-medical
track at each campus. Data for these courses is available at all campuses for 2016, though noise
in the nine-campus course data results in lower R² values across the board.
Table 7 shows that, unsurprisingly, the SAT Reading and Writing components each explain
substantial variation in freshman writing course performance, while the SAT Math component
explains the largest share of variation in Organic Chemistry performance. High school GPA also
explains a significant share of performance variation in each course type, and when both
admissions components are included simultaneously, once again high school GPA and the SAT
scores explain similar amounts of variation in course performance. As in the full course analysis,
these results suggest that each of the four admissions components (HSGPA, SAT Math, SAT
Reading, and SAT Writing) provides valuable and differentiated information about expected UC
student performance in their chosen first-year coursework.
Table 7: Percent of Within-Course 2016 GPA Explained by HSGPA and SAT
Note: Reported R² from the projected performance outcome net of course-specific fixed effects, which also capture campus effects. Courses are weighted by units earned and then normalized to give each student equal weight.

Standardized Tests and Academic Preparation: Longer-Run Outcomes
Table 8 presents the percent of variance in graduation GPA accounted for by HSGPA and test
scores. Results show that HSGPA and test scores are weak predictors of graduation GPA. They
have almost equal explanatory power, each accounting for less than 20 percent of variance in
graduation GPA. Adding SAT or ACT Writing scores does little to increase predictive value for
graduation GPA beyond SAT Reading/Math or ACT Composite scores. For example, for the 2010
entering cohort, SAT Reading/Math accounts for 16 percent of variance in graduation GPA, while
the sum of SAT Reading/Math and Writing scores (SAT Total) accounts for 18 percent, up by only
two percentage points.

When HSGPA and SAT Reading/Math or ACT Composite scores were entered into the same
models, the percent of variance accounted for by the two predictors together increased by about five
percentage points, or about one third of the explanatory power of the HSGPA model alone. However,
replacing SAT Reading/Math or ACT Composite scores with the sum of SAT Reading/Math and
SAT Writing scores (SAT Total) or the sum of ACT Composite and ACT Writing scores (ACT
Total) in these models does not change the explanatory power. For example, for the 2010 entering
cohort, the HSGPA and SATRM model accounts for 23 percent of variance in graduation GPA,
while the HSGPA and SAT Total model accounts for 24 percent; the two models are essentially the
same. This again means that writing scores do not contribute predictive value for graduation GPA
beyond HSGPA and SAT Reading/Math or ACT Composite. This is largely due to the high
correlation between SAT Reading/Math and SAT Writing scores, and between ACT Composite
and ACT Writing scores.
After controlling for campus and student characteristics including campus affiliation, major
discipline, first-generation status, family income, and high school API quintile, HSGPA has
more explanatory power for UC graduation GPA than SAT Total in terms of standardized
coefficients (Model 12 in Table 8). This again suggests that SAT scores are more strongly associated
with demographics than HSGPA is, consistent with previous findings.
The influence of varying student characteristics on the relationship between HSGPA and
SAT/ACT scores on graduation GPA is also examined based on 2010 entering students. Detailed
model coefficients are presented in Appendix D. As with first-year GPA, standardized test scores
were generally a stronger predictor of graduation GPA than high school GPA within groups defined
by campus, discipline, race/ethnicity, first-generation status, family income, and CA API ranking.
However, the difference in explanatory power between HSGPA and SAT/ACT scores is smaller
for graduation GPA than for first-year GPA. For some groups, HSGPA predicted graduation GPA
better; for example, HSGPA appeared to be a stronger predictor of graduation GPA than test scores
for students at Davis, students who majored in Arts, White students, non-first-generation students,
and students with family income over $107,000, among others.
Table 8. Percent of Variance in UC Graduation GPA Accounted for by HSGPA and the SAT/ACT
Model                                                   2001    2005    2007    2010
(1) HSGPA                                                16%     17%     18%     16%
(2) SATRM                                                15%     15%     17%     16%
(3) SAT Writing                                          18%     19%     19%     18%
(4) SAT Total (SATRM + Writing)                          18%     18%     19%     18%
(5) HSGPA + SATRM                                        22%     24%     25%     23%
    Standardized coefficients: HSGPA                     0.29    0.32    0.31    0.27
    Standardized coefficients: SATRM                     0.27    0.28    0.28    0.29
(6) HSGPA + SAT Total (SATRM + Writing)                  27%     26%     26%     24%
    Standardized coefficients: HSGPA                     0.27    0.29    0.29    0.26
    Standardized coefficients: SAT Total                 0.31    0.32    0.31    0.32
(7) ACT Composite                                        18%     19%     18%     17%
(8) ACT Writing                                          n/a     n/a     20%     17%
(9) ACT Total (Composite + Writing)                      n/a     n/a     19%     23%
(10) HSGPA + ACT Composite                               23%     26%     25%     22%
     Standardized coefficients: HSGPA                    0.24    0.27    0.28    0.25
     Standardized coefficients: ACT Composite            0.33    0.34    0.31    0.31
(11) HSGPA + ACT Total (ACT Composite + ACT Writing)     n/a     n/a     26%     23%
     Standardized coefficients: HSGPA                    n/a     n/a     0.26    0.24
     Standardized coefficients: ACT Total                n/a     n/a     0.34    0.33
(12) HSGPA + SAT Total + Demographics                    32%     33%     34%     31%
     Standardized coefficients: HSGPA                    0.38    0.40    0.40    0.37
     Standardized coefficients: SAT Total                0.31    0.29    0.29    0.32
As shown in Table 9, both HSGPA and test scores are very weak predictors of first-year retention
and four-year graduation. The models with HSGPA and test scores account for only about 2.5
percent of the variance in first-year retention and about eight percent of the variance in four-year
graduation. In addition, the standardized coefficients show no significant difference between
HSGPA and test scores in explanatory power for first-year retention or four-year graduation. The
percent of variance accounted for by these predictors has remained about the same since 2001, so
the table presents results from the logistic regression models based on the 2015 entering cohort for
first-year retention and the 2010 entering cohort for four-year graduation.

Again, after controlling for student demographics, HSGPA is a stronger predictor of first-year
retention and four-year graduation, with more explanatory power than SAT scores in terms of
standardized coefficients (Model 5 in Table 9). This finding further indicates that SAT scores
are more strongly associated with demographics than HSGPA is, consistent with previous findings.
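A minimal sketch of the logistic models behind Table 9 (assumed column names; the report does not specify which pseudo R-squared it reports, so McFadden's is shown as an illustration):

    import pandas as pd
    import statsmodels.api as sm

    def outcome_logit(df, outcome="retained_year1", predictors=("hsgpa", "sat_total")):
        d = df.dropna(subset=[outcome, *predictors]).copy()
        X = d[list(predictors)]
        X = (X - X.mean()) / X.std()   # z-score so coefficients are comparable across predictors
        fit = sm.Logit(d[outcome].astype(float), sm.add_constant(X)).fit(disp=0)
        return fit.params.drop("const"), fit.prsquared   # coefficients and McFadden pseudo R^2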
Table 9. Statistics of Logistic Regression Models Predicting First Year Retention and Four Year
Graduation by HSGPA and SAT/ACT Scores
Model                                                      First-Year Retention     Four-Year Graduation
                                                           (2015 Entering Cohort)   (2010 Entering Cohort)
(1) HSGPA + SATRM: R-squared                                       2.3%                     7.5%
    Standardized coefficients: HSGPA                               0.22                     0.23
    Standardized coefficients: SATRM                               0.22                     0.17
(2) HSGPA + SAT Total (SATRM + Writing): R-squared                 2.4%                     7.9%
    Standardized coefficients: HSGPA                               0.20                     0.21
    Standardized coefficients: SAT Total                           0.22                     0.19
(3) HSGPA + ACT Composite: R-squared                               2.5%                     7.7%
    Standardized coefficients: HSGPA                               0.20                     0.19
    Standardized coefficients: ACT Composite                       0.23                     0.20
(4) HSGPA + ACT Total (ACT Composite + ACT Writing): R-squared     2.4%                     8.1%
    Standardized coefficients: HSGPA                               0.20                     0.18
    Standardized coefficients: ACT Total                           0.22                     0.22
(5) HSGPA + SAT Total + Demographics: R-squared                    3.0%                     12.0%
    Standardized coefficients: HSGPA                               0.19                     0.25
    Standardized coefficients: SAT Total                           0.14                     0.16
Conclusion
This study reviewed literature about the validity of the SAT and ACT and examined the
relationship between these tests and college preparation of UC California resident applicants and
college performance of enrolled students at UC.
The following conclusions emerged from the literature review:
High school GPA and test scores are moderately correlated. Previous research also
indicates that socioeconomic status is associated with SAT and ACT scores.
SAT/ACT test scores are positively associated with college success in terms of freshman
GPA, graduation GPA, first-year retention, and graduation. In addition, research
demonstrates the increment in the predictive validity afforded by the SAT/ACT over
HSGPA. On the other hand, evidence also suggests that standardized test scores are less
predictive across certain subgroups.
Analysis based on UC data shows that:
The majority of UC applicants submit SAT scores, while a growing number also submit ACT
scores. Average test scores have changed only slightly over the years. SAT/ACT scores are
more strongly associated with demographic factors than HSGPA is. The explanatory power of
parental education, URG status, and parents' income has been growing, with parental
education continuing to be the strongest predictor of SAT/ACT scores. This could be a problem
as UC seeks race-neutral ways to achieve racial diversity at selective campuses.53 SAT/ACT
scores may have a disproportionate effect on admit rates for URG students.
Similar to previous research findings, test scores are moderately correlated with high school
GPA. There is some, but limited, overlap between what the tests measure and what HSGPA
measures. The College Board and ACT hold that SAT and ACT scores are designed to measure
knowledge and skills in high school subject areas and readiness for first-year college courses.
High school GPA likely reflects both knowledge and skills and also behavior (such as completing
homework).
The analysis of the relationship between the new SAT and new ACT and UC’s Analytical
Writing Placement Exam (AWPE) shows that high scores on the SAT EBRW and ACT
ELA are associated with passing the AWPE. However, on the SAT Essay, even at the top
score of 24, only 69 percent passed the AWPE. Based on the results of data analysis,
UCOPE has accepted the new ACT ELA and SAT EBRW scores but not the new SAT
Essay scores as methods of meeting UC’s Entry Level Writing Requirement (ELWR).
Regarding college success, findings from UC’s data analysis are generally consistent with
those from previous research. SAT scores and HSGPA are both moderate predictors of
student first-year GPA and graduation GPA. They are weak predictors of student
retention and graduation. HSGPA predicted first-year GPA slightly better than SAT
scores from 2001 to 2005 before controlling for student characteristics. Over the years,
the explanatory power of HSGPA has been decreasing. For graduation GPA, HSGPA and
test scores have the same explanatory power. HSGPA and test scores are not strong
predictors of first-year retention and four-year graduation. However, after controlling for
student characteristics, HSGPA and test scores have the same explanatory power of
freshman GPA, but HSGPA is a stronger predictor of the first year retention, graduation
GPA and four-year graduation.
Models that combine HSGPA with standardized test scores predict student success better than
models that use only one or the other. However, adding the SAT/ACT writing tests did little to
improve the prediction of student success beyond HSGPA and SAT Reading/Math or ACT
Composite scores.
53 UC Office of the President. (2015). UC Files Amicus Brief in Affirmative Action Case. Retrieved from https://www.universityofcalifornia.edu/press-room/uc-files-amicus-brief-affirmative-action-case on October 16, 2018.
In summary, this report presents what the literature and UC data show about the relationship
between standardized test scores and high school GPA, and the predictive value of HSGPA and test
scores for student success as measured by freshman GPA, first-year retention, graduation GPA,
and four-year graduation. IRAP will conduct a series of simulations to further examine what the
eligibility pool and admissions outcomes would look like if UC used different measures to determine
eligibility and admit students.
Appendix A: Test Takers and Average Test Scores of UC California Applicants from 2001 to 2018
Table A1: Test Takers and Average Test Scores, UC California Applicants, 2001 to 2018
2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018
CA Freshman Applicants
59,747 62,903 66,774 63,852 65,851 71,011 74,509 80,029 81,113 82,341 85,187 93,460 99,447 100,077 103,259 105,547 111,869 120,026
% Submitted Scores
SAT I Verbal Math
98.4% 98.4% 98.4% 98.3% 98.3% 88.4% 0.8%
SAT II Writing
96.9% 96.8% 96.1% 96.0% 94.8% 48.5% 0.2%
SAT I & SATII Writing
96.0% 95.8% 95.0% 94.9% 93.8% 48.1% 0.1%
SAT Verbal Math
98.3% 97.9% 97.5% 97.0% 94.6% 94.0% 93.2% 91.5% 89.3% 86.7% 84.6% 19.5% 0.8%
SAT Writing
98.5% 97.9% 97.5% 97.0% 94.6% 94.0% 93.2% 91.5% 89.3% 86.7% 84.6% 20.4% 0.8%
SAT Reading Math Writing
98.1% 97.9% 97.5% 97.0% 94.6% 94.0% 93.2% 91.5% 89.3% 86.7% 84.6% 20.4% 0.8%
SAT Math and EBRW (Evidence-
Based Reading and Writing)
73.2% 86.4%
SAT Essay
73.2% 86.4%
ACT Composite
27.6% 27.7% 32.7% 32.9% 33.2% 34.9% 35.1% 39.4% 44.0% 46.2% 47.3% 45.5% 46.4% 49.1% 52.2% 53.4% 52.5% 44.3%
ACT English with Writing
29.5% 32.8% 37.8% 43.1% 45.4% 46.9% 45.0% 45.9% 48.7% 51.7% 50.2% 0.2% 0.0%
ACT ELA (English Language Arts)
45.5% 37.6%
ACT Writing subscore
27.9% 37.4%
Average Scores
SAT I Verbal Math
1171 1163 1164 1172 1177 1162 1144
SAT II Writing
575 578 573 581 579 649 559
SAT I & SATII Writing
1751 1746 1744 1760 1766 1816 1727
SAT Verbal Math
1161 1159 1153 1154 1157 1153 1137 1135 1136 1128 1117 1241 1327
SAT Writing
572 570 569 571 573 572 566 560 562 556 550 610 643
SAT Reading Math Writing
1734 1729 1722 1725 1729 1725 1703 1695 1698 1684 1667 1845 1952
SAT Math and EBRW (Evidence-
Based Reading and Writing)
1167 1203
SAT Essay
16 16
ACT Composite
24 24 24 24 24 24 24 25 25 25 25 25 25 25 25 25 26 25
ACT English with Writing
24 24 24 24 24 24 23 23 24 24 24 29 27
ACT ELA (English Language Arts)
25 25
ACT Writing subscore
8 8
Appendix B: Average Test Scores of UC California Applicants by Demographic
Characteristics, Selected Years
Table B1: SAT Reading/Math and Writing by Parents’ Highest Level of Education
(RM = SAT Reading/Math; W = SAT Writing)
Year   No High School   Some High School   High School Graduate   Some College   2 Year College   4 Year College   Post Graduate Study   Unknown
       RM     W         RM     W           RM     W               RM     W       RM     W         RM     W         RM     W             RM     W
2001   978    479       1008   492         1079   526             1105   542     1125   550       1182   577       1249   617           1221   602
2005   981    483       1008   491         1082   528             1107   543     1135   555       1191   582       1264   625           1229   604
2007   967    472       998    485         1069   522             1087   533     1107   542       1177   578       1253   620           1210   594
2012   949    471       977    485         1049   519             1068   528     1104   544       1183   588       1271   637           1118   549
2015   952    468       973    479         1040   510             1061   522     1091   536       1181   583       1279   635           1114   544
2018   1052   15        1069   15          1126   15              1145   15      1179   16        1260   17        1342   17            1192   16
Table B2: SAT Reading/Math and Writing by Parents’ Income (2001 dollars)
Year    $0-$56k           $56k up to $112k    $112k up to $169k    $169k+            Unknown
        SATRM   SATW      SATRM   SATW        SATRM   SATW         SATRM   SATW      SATRM   SATW
2001    1140    559       1180    578         1222    601          1251    617       n/a     n/a
2005    1084    531       1184    581         1236    608          1255    619       1248    616
2007    1068    521       1167    573         1213    599          1243    617       1244    616
2012    1049    520       1185    588         1243    620          1280    644       1208    607
2015    1042    512       1184    583         1249    619          1297    647       1205    601
2018    1127    15        1251    17          1311    17           1358    18        1256    16
Table B3: SAT Reading/Math and Writing by Race/Ethnicity
Year    African American   American Indian   Hispanic/Latinx   Asian/Pacific Islander   White
        SATRM   SATW       SATRM   SATW      SATRM   SATW      SATRM   SATW             SATRM   SATW
2001    1035    520        1169    570       1047    521       1182    566              1215    603
2005    1042    526        1184    587       1049    524       1198    575              1230    610
2007    1027    516        1160    576       1032    514       1187    570              1213    603
2012    1015    509        1164    581       1013    507       1206    594              1215    607
2015    1025    512        1149    569       1008    499       1214    595              1217    602
2018    1120    15         1225    16        1099    15        1298    17               1288    17
Note: SAT Writing refers to SATII Writing for 2001 and 2005 and SAT Writing for 2007, 2012 and 2015,
and SAT Essay for 2018. International students are included in each racial/ethnic category based on their
self-reported race/ethnicity.
Table B4: ACT Composite and Writing by Parents’ Highest Level of Education
(CMP = ACT Composite; W = ACT Writing)
Year   No High School   Some High School   High School Graduate   Some College   2 Year College   4 Year College   Post Graduate Study   Unknown
       CMP    W         CMP    W           CMP    W               CMP    W       CMP    W         CMP    W         CMP    W             CMP    W
2001   19     n/a       20     n/a         22     n/a             23     n/a     23     n/a       25     n/a       26     n/a           25     n/a
2005   20     n/a       20     n/a         22     n/a             23     n/a     24     n/a       25     n/a       26     n/a           26     n/a
2007   20     19        20     20          22     22              23     23      23     23        25     25        27     26            26     25
2012   20     19        21     19          22     21              23     22      24     23        26     25        28     26            25     23
2015   20     19        21     19          23     21              23     22      24     23        27     25        29     27            25     23
2018   20     19        21     20          23     21              24     22      25     23        28     26        30     28            26     24
Table B5: ACT Composite and Writing by Parents’ Income (2001 dollars)
Year    $0-$56k          $56k up to $112k   $112k up to $169k   $169k+          Unknown
        CMP    W         CMP    W           CMP    W            CMP    W        CMP    W
2001    23     n/a       24     n/a         25     n/a          26     n/a      n/a    n/a
2005    22     n/a       25     n/a         26     n/a          26     n/a      26     n/a
2007    22     21        25     24          26     25           27     26       27     26
2012    22     21        26     25          28     26           29     27       27     25
2015    22     21        27     25          28     27           29     28       27     26
2018    23     22        27     26          29     27           30     29       29     27
Table B6: ACT Composite and Writing by Race/Ethnicity
Year    African American   American Indian   Hispanic/Latinx   Asian/Pacific Islander   White
        CMP    W           CMP    W          CMP    W          CMP    W                 CMP    W
2001    21     n/a         25     n/a        21     n/a        24     n/a               26     n/a
2005    21     n/a         25     n/a        21     n/a        24     n/a               26     n/a
2007    21     21          25     25         21     21         25     24                26     26
2012    22     21          26     25         22     20         26     24                27     26
2015    22     22          26     25         22     21         27     25                28     27
2018    23     22          27     26         22     21         29     27                29     27
Notes: ACT Writing refers to ACT English with Writing for 2007, 2012, and 2015, and ACT English Language Arts for 2018; ACT Writing values are not available for 2001 and 2005 (shown as n/a). International students are included in each racial/ethnic category based on their self-reported race/ethnicity.
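The averages in Tables B1 through B6 are simple group means of applicant test scores by year and demographic category. The sketch below is an illustration only, not the report's actual pipeline; the column names (for example, parent_educ and sat_rm) are hypothetical.

```python
# A minimal sketch of how a table like B1 could be tabulated from
# applicant-level records. Data and column names are hypothetical.
import pandas as pd

applicants = pd.DataFrame({
    "year":        [2015, 2015, 2018, 2018, 2018],
    "parent_educ": ["4 Year College", "Post Graduate Study",
                    "4 Year College", "High School Graduate", "Unknown"],
    "sat_rm":      [1190, 1300, 1270, 1120, 1180],
    "sat_w":       [590, 640, 17, 15, 16],
})

# Mean SAT Reading/Math and Writing by year and parental education,
# mirroring Table B1's layout (years as rows, categories as columns).
table_b1 = (applicants
            .groupby(["year", "parent_educ"])[["sat_rm", "sat_w"]]
            .mean()
            .round(0)
            .unstack("parent_educ"))
print(table_b1)
```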
Appendix C: Standardized Regression Coefficients for HSGPA and SAT/ACT
Scores in UC Freshman GPA Models, 2015 Freshman Entering Cohort
| Group | SAT Models |  |  | ACT Models |  |  |
|       | HSGPA | SAT_RM | % Variance Explained | HSGPA | ACTCMP | % Variance Explained |
| UC System | 0.23 | 0.36 | 25% | 0.22 | 0.38 | 26% |
| By Campus |
| Berkeley | 0.20 | 0.36 | 22% | 0.22 | 0.32 | 19% |
| Davis | 0.18 | 0.44 | 27% | 0.17 | 0.44 | 27% |
| Irvine | 0.31 | 0.36 | 28% | 0.32 | 0.38 | 30% |
| Los Angeles | 0.16 | 0.39 | 23% | 0.17 | 0.41 | 25% |
| Merced | 0.29 | 0.29 | 18% | 0.29 | 0.28 | 18% |
| Riverside | 0.24 | 0.25 | 11% | 0.22 | 0.28 | 12% |
| San Diego | 0.24 | 0.38 | 21% | 0.22 | 0.39 | 21% |
| Santa Barbara | 0.29 | 0.32 | 21% | 0.28 | 0.36 | 24% |
| Santa Cruz | 0.22 | 0.31 | 16% | 0.22 | 0.34 | 18% |
| By Enrolled Majors |
| Arts | 0.21 | 0.28 | 15% | 0.15 | 0.40 | 22% |
| Engineering/Computer Sciences | 0.16 | 0.42 | 26% | 0.17 | 0.40 | 25% |
| Humanities | 0.30 | 0.35 | 31% | 0.32 | 0.35 | 33% |
| Life Sciences | 0.25 | 0.42 | 31% | 0.23 | 0.44 | 31% |
| Physical Sciences/Math | 0.28 | 0.36 | 27% | 0.26 | 0.37 | 28% |
| Professional Fields | 0.24 | 0.38 | 26% | 0.22 | 0.39 | 25% |
| Social Sciences/Psychology | 0.29 | 0.34 | 26% | 0.26 | 0.39 | 29% |
| Undeclared/All Others/Unknown | 0.25 | 0.31 | 20% | 0.25 | 0.33 | 21% |
| By Race/Ethnicity |
| African American | 0.27 | 0.25 | 18% | 0.28 | 0.26 | 20% |
| Asian | 0.22 | 0.35 | 22% | 0.19 | 0.38 | 22% |
| Latino(a) | 0.23 | 0.24 | 14% | 0.23 | 0.25 | 15% |
| White | 0.28 | 0.23 | 18% | 0.28 | 0.22 | 28% |
| By First Generation Status |
| Not First Generation | 0.26 | 0.29 | 21% | 0.24 | 0.29 | 20% |
| First Generation | 0.22 | 0.29 | 22% | 0.22 | 0.30 | 17% |
| By Family Income |
| $0-$53,999 | 0.22 | 0.30 | 18% | 0.22 | 0.32 | 19% |
| $54,000-$106,999 | 0.22 | 0.31 | 19% | 0.21 | 0.32 | 20% |
| $107,000-$160,999 | 0.26 | 0.28 | 20% | 0.24 | 0.28 | 19% |
| $161,000 or higher | 0.27 | 0.26 | 20% | 0.27 | 0.25 | 19% |
| Missing | 0.24 | 0.27 | 19% | 0.07 | 0.01 | 16% |
| By High School API Quintile (2013) |
| 1 through 3 | 0.22 | 0.20 | 12% | 0.25 | 0.20 | 13% |
| 4 through 7 | 0.25 | 0.26 | 19% | 0.25 | 0.27 | 19% |
| 8 and higher | 0.30 | 0.26 | 22% | 0.26 | 0.28 | 21% |
| Private | 0.27 | 0.26 | 21% | 0.30 | 0.23 | 20% |
| Public Missing | 0.21 | 0.34 | 22% | 0.23 | 0.36 | 24% |
| Missing Other | 0.17 | 0.25 | 11% | 0.18 | 0.15 | 8% |
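For reference, the coefficients in Appendices C and D are standardized regression coefficients: the predictors and the outcome are scaled to mean 0 and standard deviation 1 before fitting ordinary least squares, and the "% Variance Explained" column corresponds to the model R-squared. The sketch below illustrates the general method on synthetic data with hypothetical variable names; it is not the report's actual code.

```python
# A minimal sketch of standardized OLS coefficients and R^2,
# computed on synthetic data for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1_000
hsgpa = rng.normal(3.8, 0.25, n)
sat_rm = rng.normal(1200, 150, n)
df = pd.DataFrame({
    "hsgpa": hsgpa,
    "sat_rm": sat_rm,
    # Synthetic outcome for illustration only.
    "frosh_gpa": 0.4 * (hsgpa - 3.8) / 0.25
                 + 0.5 * (sat_rm - 1200) / 150
                 + rng.normal(0, 1, n),
})

z = (df - df.mean()) / df.std(ddof=0)          # standardize every variable
X = np.column_stack([np.ones(n), z.hsgpa, z.sat_rm])
beta, *_ = np.linalg.lstsq(X, z.frosh_gpa, rcond=None)

fitted = X @ beta
r2 = 1 - ((z.frosh_gpa - fitted) ** 2).sum() / ((z.frosh_gpa - z.frosh_gpa.mean()) ** 2).sum()
print(f"std. beta HSGPA = {beta[1]:.2f}, std. beta SAT_RM = {beta[2]:.2f}, "
      f"% variance explained = {100 * r2:.0f}%")
```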
Appendix D: Standardized Regression Coefficients for HSGPA and SAT/ACT
Scores in UC Graduation GPA Models, 2010 Freshman Entering Cohort
| Group | SAT Models |  |  | ACT Models |  |  |
|       | HSGPA | SAT_RM | % Variance Explained | HSGPA | ACTCMP | % Variance Explained |
| UC System | 0.27 | 0.29 | 23% | 0.25 | 0.31 | 22% |
| By Campus |
| Berkeley | 0.20 | 0.30 | 18% | 0.20 | 0.33 | 20% |
| Davis | 0.34 | 0.30 | 23% | 0.33 | 0.32 | 24% |
| Irvine | 0.24 | 0.27 | 14% | 0.22 | 0.26 | 12% |
| Los Angeles | 0.20 | 0.32 | 18% | 0.17 | 0.31 | 16% |
| Merced | 0.26 | 0.33 | 17% | 0.34 | 0.32 | 23% |
| Riverside | 0.25 | 0.25 | 13% | 0.21 | 0.27 | 13% |
| San Diego | 0.29 | 0.32 | 20% | 0.25 | 0.34 | 20% |
| Santa Barbara | 0.27 | 0.28 | 17% | 0.24 | 0.32 | 18% |
| Santa Cruz | 0.31 | 0.30 | 21% | 0.29 | 0.34 | 23% |
| By Enrolled Majors |
| Arts | 0.35 | 0.19 | 20% | 0.29 | 0.22 | 17% |
| Engineering/Computer Sciences | 0.30 | 0.28 | 24% | 0.27 | 0.31 | 24% |
| Humanities | 0.29 | 0.30 | 26% | 0.28 | 0.34 | 28% |
| Life Sciences | 0.30 | 0.35 | 30% | 0.27 | 0.36 | 27% |
| Physical Sciences/Math | 0.30 | 0.33 | 26% | 0.28 | 0.36 | 27% |
| Professional Fields | 0.29 | 0.27 | 23% | 0.30 | 0.31 | 27% |
| Social Sciences/Psychology | 0.30 | 0.29 | 25% | 0.26 | 0.32 | 24% |
| Undeclared/All Others/Unknown | 0.24 | 0.32 | 22% | 0.24 | 0.31 | 21% |
| By Race/Ethnicity |
| African American | 0.22 | 0.25 | 16% | 0.26 | 0.22 | 16% |
| Asian | 0.29 | 0.26 | 21% | 0.26 | 0.26 | 18% |
| Latino(a) | 0.24 | 0.27 | 18% | 0.24 | 0.28 | 18% |
| White | 0.30 | 0.19 | 17% | 0.27 | 0.20 | 15% |
| By First Generation Status |
| Not First Generation | 0.31 | 0.20 | 19% | 0.29 | 0.21 | 18% |
| First Generation | 0.24 | 0.27 | 18% | 0.23 | 0.28 | 17% |
| By Family Income |
| $0-$53,999 | 0.25 | 0.28 | 19% | 0.23 | 0.29 | 18% |
| $54,000-$106,999 | 0.27 | 0.25 | 18% | 0.25 | 0.25 | 18% |
| $107,000-$160,999 | 0.30 | 0.19 | 17% | 0.29 | 0.21 | 18% |
| $161,000 or higher | 0.36 | 0.16 | 20% | 0.31 | 0.17 | 17% |
| Missing | 0.32 | 0.21 | 21% | 0.31 | 0.20 | 19% |
| By High School API Quintile (2013) |
| 1 through 3 | 0.24 | 0.23 | 15% | 0.24 | 0.21 | 14% |
| 4 through 7 | 0.26 | 0.26 | 19% | 0.23 | 0.28 | 19% |
| 8 and higher | 0.34 | 0.18 | 20% | 0.32 | 0.19 | 20% |
| Private | 0.31 | 0.25 | 24% | 0.29 | 0.29 | 24% |
| Public Missing | 0.21 | 0.39 | 26% | 0.12 | 0.40 | 22% |
| Missing Other | 0.31 | 0.20 | 19% | 0.34 | 0.26 | 25% |
Appendix E: Explanation for Change of Variance in Freshman GPA Accounted for
by High School GPA and Test Scores
Over the past 10 years, the share of freshman GPA variation explained by high school GPA (HSGPA) has fallen substantially. Table 3 shows that, for aggregate freshman GPA, HSGPA's explanatory power fell from 17% in 2001 and 20% in 2007 to 13% in 2015. There are many possible explanations for this decline. This section presents the results of preliminary analyses conducted to examine the change.
Change in the Admit Pool
The change in the variation of HSGPA and test scores may be related to changes in the admit pool. While always skewed toward the top quartiles of SAT/ACT and HSGPA, the distribution of admits has changed over time (Figures C1 and C2). In particular, admits became more concentrated in the top quartiles of HSGPA between 2001 and 2015. This change reduced the variation in HSGPA and may be related to the diminishing power of HSGPA to predict UC freshman GPA over this period. Almost all applicants with high HSGPA are admitted, but the admit rates of those with high SAT/ACT scores depend on HSGPA. This also suggests that HSGPA is given more weight in the admission process than SAT/ACT.
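A minimal sketch of the quartile construction behind Figures C1 and C2, under assumed applicant-level data with hypothetical column names: quartile cut points are computed from the 2001 pool only and then applied to every year, after which admit rates and admit shares by quartile follow from simple group summaries.

```python
# Illustrative only: synthetic applicants, fixed 2001 quartile thresholds,
# then admit rates (Figure C1) and admit shares (Figure C2) by quartile.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 5_000
years = rng.choice([2001, 2015], size=n)
hsgpa = np.clip(rng.normal(3.7, 0.3, n), 2.0, 4.3)
apps = pd.DataFrame({
    "year": years,
    "hsgpa": hsgpa,
    # Synthetic admit decisions: higher HSGPA, higher admit probability.
    "admitted": rng.random(n) < np.clip(0.2 + 0.2 * (hsgpa - 3.0), 0, 1),
})

# Quartile cut points taken from 2001 only, then applied to every year.
edges = apps.loc[apps.year == 2001, "hsgpa"].quantile([0.25, 0.5, 0.75]).tolist()
bins = [-np.inf] + edges + [np.inf]
apps["hsgpa_q"] = pd.cut(apps.hsgpa, bins=bins, labels=["Q1", "Q2", "Q3", "Q4"])

admit_rate = apps.groupby(["year", "hsgpa_q"], observed=True)["admitted"].mean()
admit_share = (apps[apps.admitted]
               .groupby("year")["hsgpa_q"]
               .value_counts(normalize=True))
print(admit_rate.unstack(), admit_share.unstack(), sep="\n\n")
```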
Figure C1. Admit Rate by HSGPA and SAT/ACT Quartiles
[Figure: admit rates (0-100%) in 2001 and 2015 by HSGPA quartile (< 3.35, 3.35-< 3.71, 3.71-< 4.02, >= 4.02) and by SAT/ACT quartile (< 1070, 1070-< 1190, 1190-< 1310, >= 1310).]
Note: Thresholds for HSGPA and SAT/ACT quartiles were constructed from 2001 data and used for all years for consistency.
Figure C2. Share of Admits by HSGPA and SAT/ACT Quartiles

Share of admits by HSGPA quartile:
| Year | < 3.35 | 3.35-< 3.71 | 3.71-< 4.02 | >= 4.02 |
| 2001 | 17% | 27% | 28% | 27% |
| 2015 | 6% | 19% | 29% | 46% |

Share of admits by SAT/ACT quartile:
| Year | < 1070 | 1070-< 1190 | 1190-< 1310 | >= 1310 |
| 2001 | 21% | 25% | 27% | 27% |
| 2015 | 19% | 21% | 23% | 37% |

Note: Thresholds for HSGPA and SAT/ACT quartiles were constructed from 2001 data and used for all years for consistency.
High School Grade Inflation and Enrollment in Lower-Grading and Larger Freshman Courses
There are many possible explanations for the decline in the explanatory power of HSGPA over time, and course-level grade performance can help adjudicate among them.
Table C1: Changes in Explanatory Power of HSGPA for Freshman Grades
Note: "FE R2" measures the percent of variation explained by each additional group of fixed effects (FE), such as FEs for every high school or for every SAT score. "GPA R2" measures the percent of remaining variation explained by HSGPA. "Δ" shows the percent change from 2012 to 2016. Valid explanations for the decline in HSGPA's explanatory power for freshman grades should decrease the GPA R2, since some of the decline would be explained away by the FEs.

Table C1 shows that, at the freshman course level, HSGPA's explanatory power has fallen by 18.8 percent since 2012, from 4.3 percent to 3.5 percent (note that the SAT and HSGPA explain less
course-specific variation than they do freshman aggregate GPA variation, since aggregate GPAs are averaged over many courses and thus present a smoother, more easily modeled distribution of student performance). This baseline decline is conditional on gender, ethnicity, and campus.
Each row of Table C1 adds an additional set of controls, removing one dimension of grade
variation and testing how that information’s removal impacts the degree to which HSGPA’s
explanatory power has declined since 2012. Controls that shrink the decline in HSGPA's R2 are interpreted as playing a role in HSGPA's explanatory decline.
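The following sketch illustrates, on synthetic data with hypothetical variable names, the incremental-R2 logic behind Table C1: measure how much additional course-grade variation a block of fixed effects explains (FE R2) and how much of the remaining variation HSGPA explains (GPA R2). It is an illustration of the approach, not the report's code.

```python
# A minimal sketch of the FE R2 / GPA R2 decomposition on synthetic data.
import numpy as np
import pandas as pd

def r2(y: pd.Series, X: pd.DataFrame) -> float:
    """R^2 of an OLS fit of y on X (intercept added)."""
    Xm = np.column_stack([np.ones(len(X)), X.to_numpy(dtype=float)])
    beta, *_ = np.linalg.lstsq(Xm, y.to_numpy(dtype=float), rcond=None)
    resid = y.to_numpy(dtype=float) - Xm @ beta
    return 1 - resid.var() / y.var(ddof=0)

rng = np.random.default_rng(2)
n = 4_000
hsgpa = rng.normal(3.8, 0.25, n)
df = pd.DataFrame({
    "hsgpa": hsgpa,
    "grade": 0.8 * hsgpa + rng.normal(0, 0.5, n),   # synthetic course grades
    "high_school": rng.integers(0, 200, n).astype(str),
    "female": rng.integers(0, 2, n),
})

base = df[["female"]]                                # baseline controls
fe = pd.get_dummies(df.high_school, prefix="hs")     # high school fixed effects

r2_base = r2(df.grade, base)
r2_fe = r2(df.grade, pd.concat([base, fe], axis=1))
fe_r2 = r2_fe - r2_base                              # additional variation explained by the FEs
gpa_r2 = (r2(df.grade, pd.concat([base, fe, df[["hsgpa"]]], axis=1)) - r2_fe) / (1 - r2_fe)
print(f"FE R2 = {fe_r2:.3f}, GPA R2 = {gpa_r2:.3f}")
```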
The second row of Table C1 tests the hypothesis that HSGPA's explanatory power is declining because UC is enrolling students from a wider range of high schools with varying grading standards, muddying the informational content of HSGPA. Figure C3 shows that the concentration of high schools from which UC has enrolled students (as measured by a Normalized Herfindahl Index) has been declining since about 2007, after a period of relative stability, implying that UC has been drawing students from an increasingly dispersed set of high schools, likely as a result of its Eligibility in the Local Context program and the expansion of its Merced campus. However, when the relationship between HSGPA and freshman GPA is measured conditional on high school fixed effects (that is, comparing only students who attended the same high school), the decline in the explanatory power of HSGPA grows to 25.0 percent. We conclude that the wider distribution of UC students' high schools plays no role in the decline in HSGPA's explanatory power, since the within-high-school decline is even larger than the across-high-school decline.

Figure C3: High School Concentration of First-Year UC Students
[Figure: annual Normalized Herfindahl Index, roughly 0.0008 to 0.0014, plotted for 1990-2020.]
Note: Annual Normalized Herfindahl Index of high schools attended by freshman California-resident UC students, measuring the degree to which students come from a concentrated set of schools.
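A minimal sketch of the Normalized Herfindahl Index shown in Figure C3, computed on synthetic enrollment records with hypothetical column names: sum the squared enrollment shares of each high school within a year and rescale so that 0 corresponds to a perfectly even spread across schools and 1 to complete concentration in a single school.

```python
# Illustrative only: normalized HHI of high school enrollment shares by year.
import numpy as np
import pandas as pd

def normalized_hhi(school_ids: pd.Series) -> float:
    shares = school_ids.value_counts(normalize=True)
    n = len(shares)
    h = (shares ** 2).sum()                  # raw Herfindahl index
    return (h - 1 / n) / (1 - 1 / n) if n > 1 else 1.0

rng = np.random.default_rng(3)
enrollees = pd.DataFrame({
    "year":   rng.choice(range(1995, 2019), size=20_000),
    "school": rng.integers(0, 1_200, size=20_000),
})
hhi_by_year = enrollees.groupby("year")["school"].apply(normalized_hhi)
print(hhi_by_year.head())
```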
The third row tests the hypothesis that HSGPA has less variance among students with the same SAT score, which would likely arise if the distribution of HSGPAs were increasingly compressed as a result of high school grade inflation. Controlling for fixed effects for every SAT score, the
decline in HSGPA explanatory power falls modestly, to 15.3 percent. We conclude that a small part of HSGPA's loss of explanatory power results from its weakened relationship with SAT scores, likely a consequence of high school grade inflation.
The fourth row of Table C1 tests whether the decline in HSGPA explanatory power results from the departments in which freshmen choose to enroll. If high-HSGPA students are increasingly enrolling in lower-grading fields such as Engineering and Economics, then HSGPA's explanatory power would decline (it would appear that high-HSGPA students were performing poorly, when in fact they are merely taking courses that award lower grades). This appears to be the largest identified factor in the decline of HSGPA's explanatory power, accounting for about one-third of the decline. Moreover, a full 10 percent of freshman grade variation can be explained by the department in which each course was taken, and the explanatory power of departments has increased by 25 percent since 2012. Cross-department variation is thus a key explanation for changes in freshman GPA over the past five years.
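As a rough illustration of the department decomposition described above, the sketch below (synthetic data, hypothetical department names) computes the share of course-grade variation attributable to the department offering the course as the ratio of between-department variance to total variance, which is equivalent to the R2 of a regression of grades on department indicators.

```python
# Illustrative only: share of grade variation explained by department.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 10_000
dept = rng.choice(["ENGIN", "ECON", "HIST", "PSYCH", "CHEM"], size=n)
dept_effect = pd.Series(dept).map({"ENGIN": -0.3, "ECON": -0.2, "HIST": 0.1,
                                   "PSYCH": 0.15, "CHEM": -0.1})
grades = pd.DataFrame({
    "dept":  dept,
    "grade": 3.1 + dept_effect.to_numpy() + rng.normal(0, 0.5, n),
})

total_var = grades.grade.var(ddof=0)
dept_means = grades.groupby("dept")["grade"].transform("mean")
between_var = dept_means.var(ddof=0)          # enrollment-weighted variance of department means
share_explained = between_var / total_var     # analogous to the ~10 percent figure in the text
print(f"share of grade variation explained by department: {share_explained:.1%}")
```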
Finally, if students have tended toward taking larger freshman courses with more formal, test-based assessment (such as multiple-choice exams), then HSGPA would lose explanatory power, because student performance would increasingly resemble SAT-style examination rather than the more varied assessment formats used in high school. The UC Accountability Report shows that the availability of small classes at UC has fallen dramatically over the past five years, limiting freshman course options. Indeed, controlling for course size shows that size explains a small part of the decline in HSGPA explanatory power.
Overall, we conclude that almost half of the decline in HSGPA’s ability to explain freshman
course performance can be attributed to changes in freshman course enrollment across
departments (the largest observed factor), HSGPA compression as a result of grade inflation and
increased UC selectivity, and increases in the average size of freshman courses. While about half
of the explanatory decline remains unexplained, we conclude that increased dispersion in the
high schools sending students to UC does not play a role in HSGPA’s explanatory decline, since
within-high-school explanatory power has fallen even faster than overall explanatory power
(leaving no room for a decline across high schools).
In summary, the change in the admit pool, and thus in enrollment, may have resulted in less variability in HSGPA and slightly greater variability in test scores. This change may be related to growth in the number of eligible applicants, the admissions process, and student enrollment behavior; further analyses are needed to test these hypotheses. Regardless of its cause, the change explains part of the decline in HSGPA's explanatory power and part of the increase in test scores' explanatory power. In addition, high school grade inflation and enrollment in lower-grading and larger freshman courses also explain part of the decline in HSGPA's explanatory power for course performance.