A single number replaced everything a student learned with a ranking of how well they performed.
The grade point average emerged in American higher education during the early twentieth century as institutions sought a standardized method for comparing student performance across courses. The system assigns numerical values to letter grades, typically on a four-point scale where A equals 4.0, B equals 3.0, and so on, then averages them across all courses, usually weighting each course by its credit hours.1
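The arithmetic is simple enough to sketch. The snippet below is a minimal illustration of that calculation, assuming the common unsigned four-point mapping and credit-hour weighting; actual institutions vary in how they handle plus/minus grades, repeated courses, and pass/fail credits.

```python
# Minimal sketch of the standard GPA calculation: map letter grades to
# points on the four-point scale, then take a credit-weighted average.
# The mapping below is the common unsigned scale; many schools add
# plus/minus steps (e.g. A- = 3.7), and policies differ by institution.

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def gpa(courses):
    """courses: list of (letter_grade, credit_hours) pairs."""
    total_points = sum(GRADE_POINTS[grade] * credits for grade, credits in courses)
    total_credits = sum(credits for _, credits in courses)
    return total_points / total_credits if total_credits else 0.0

# Hypothetical transcript: four courses with different credit weights.
transcript = [("A", 4), ("B", 3), ("A", 3), ("C", 2)]
print(round(gpa(transcript), 2))  # 3.42
```

Two students with identical letter grades can end up with different GPAs if those grades fall on courses with different credit weights, one more way the single summary decimal hides the record it stands in for.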
Yale University introduced one of the earliest grading systems in 1785, using a four-point scale, though the modern GPA calculation did not become widespread until the early 1900s as university enrollments expanded and institutions needed bureaucratic tools for sorting large numbers of students.2
The GPA solved an administrative problem. When a university has thousands of students taking different courses from different instructors, a single summary number allows quick comparison. Admissions offices, scholarship committees, and employers adopted it as a screening tool because it reduced complexity to a decimal.3
The number measures performance on assessments designed by instructors, not knowledge, skill, or capacity. A 3.8 GPA in one institution’s program may reflect different standards than a 3.8 in another’s. Grade inflation has steadily compressed the distribution toward the top of the scale. A 2012 study found that the average GPA at four-year American colleges and universities had risen from approximately 2.52 in the 1950s to 3.11 by the early 2000s.4
Some employers have moved away from GPA requirements. Google announced in 2013 that GPAs were not a reliable predictor of job performance, citing internal research. Other companies followed.5
The system persists because screening tools are hard to replace. An employer reviewing five hundred applications needs a filter. The GPA provides one, even when the people who created it and the people who use it agree that it measures something narrower than what it claims.6