The U.S. military invented merit ratings in World War I to decide which soldiers to discharge.
The formal performance appraisal as a workplace practice traces to that wartime merit rating system, which rated soldiers' performance to determine who should be promoted, transferred, or discharged.1 The idea migrated to corporate America after the war, and by the 1940s roughly 60 percent of U.S. employers had adopted some form of appraisal system to evaluate workers.2
In 1950, the U.S. Congress passed the Performance Rating Act, requiring the federal government to create formal appraisal systems for its employees. The Act established a three-tier rating scale: "outstanding," "satisfactory," and "unsatisfactory."3 By the 1960s, nearly 90 percent of American companies used some form of annual review.4
The system drew persistent criticism, much of it aimed at the sheer time it consumed. A 2015 analysis found that Deloitte was spending 1.8 million hours annually on performance reviews across the firm; Adobe calculated that its 2,000 managers collectively invested 80,000 hours per year in the process.5
Kelly Services became one of the first large companies to abandon annual reviews in 2011, followed by Adobe in 2012, then Deloitte and PwC by 2016.6 A 2022 survey by the consulting firm WTW found that only 26 percent of global employers considered their performance review strategy effective. The practice that originated as a military sorting mechanism remains one of the most widely used, and most widely disliked, features of corporate employment.