Before the 1920s, no one sorted workers by the color of their shirts.
The division of workers into blue-collar and white-collar categories was not an ancient distinction. It was an invention of the early twentieth century, built on the literal observation that office workers wore white shirts and manual laborers wore dark ones. The language appeared in American newspapers between 1910 and 1924, solidified after World War II, and became the dominant framework for describing class in the English-speaking world.1
The distinction carried weight beyond vocabulary. It organized assumptions about education, intelligence, respectability, and economic value. White-collar work was associated with thinking, cleanliness, and upward mobility. Blue-collar work was associated with physical effort, dirt, and limited advancement.
These associations did not map consistently onto economic reality. Many skilled tradespeople in construction, plumbing, and electrical work earned more than entry-level office employees. The distinction tracked perceived status, not actual compensation.2
Sociologist C. Wright Mills analyzed the white-collar world in his 1951 book White Collar: The American Middle Classes, describing a new class of salaried employees who owned neither businesses nor tools but sold their time and compliance to large organizations.3
The collar-color framework obscured as much as it revealed. A nurse, a teacher, and a retail clerk occupied distinct economic positions, yet none fit neatly into either category. The rise of knowledge work, gig platforms, and remote employment further eroded the physical cues the original distinction depended on.4
The Bureau of Labor Statistics does not use "blue-collar" or "white-collar" as official occupational categories. The terms survive in journalism, political rhetoric, and everyday speech as cultural shorthand for a class boundary that no longer maps onto the actual workforce.5