How Did We End Up With Performance Reviews, Anyway? -- Journal Report

Dow Jones
Nov 05

By Heidi Mitchell

Almost all of us have gone through the ritual. The trek to the boss's office, the tense stares across the desk, the awkward mix of praise and criticism -- and the potential for a promotion or a pink slip when it is all over.

Performance reviews aren't anybody's idea of a good time, yet they have endured for more than a century -- and show no signs of going away. Over the years, the practice has evolved to fit changes in the working world, but it still faces the same essential question it did at the start: How do you balance praise and criticism so that people actually hear what you're saying, and feel inspired to improve?

Where it began

The roots of performance reviews stretch back to around the time of World War I. Factories were scaling rapidly, and businesses needed strategies for managing vast numbers of workers and deploying them in the most effective ways. That meant collecting -- and leveraging -- solid information about how they performed, instead of relying on highly subjective measures like a worker's visibility on the job or relationship with the boss.

Early industrial psychologists and thought leaders attacked the problem and came up with ways to evaluate workers' flaws and potential more objectively and fairly. Some companies adopted the strategies early on: DuPont was among the first industrial firms to codify performance using metrics such as tracking efficiency, output and return on investment, says Michael Rivera, a professor of business information systems at Lehigh University.

General Motors expanded on these principles in the 1920s, Rivera says, by introducing concepts such as formal goal-setting and comparative performance ratings across divisions, as well as review mechanisms that linked individual and departmental outcomes with corporate objectives.

But the practice of performance reviews really took off thanks to the military. In 1914, it created formalized officer appraisals to determine who should be transferred, discharged or elevated -- based on leadership, discipline, character and professional knowledge, among other things, instead of length of service or personal recommendations.

The program -- intended to reduce favoritism and promote merit-based advancement -- worked so well that the military made the appraisals mandatory with the creation of its Personnel Research Section. From there, the concept migrated into industry, getting a boost from the rise of American corporations in the postwar years and from the broader currents of midcentury management science.

One of the biggest names was theorist Peter Drucker, author of 1954's "The Practice of Management," who is credited with observing that what gets measured gets managed. In other words, many managers surmised, if you can gauge it, you can improve upon it. Companies eagerly applied that principle to their employees.

Those early reviews didn't look exactly the same as the ones we're used to, though. Yes, there were often sit-down meetings between bosses and workers, centered on annual written assessments (often reduced to a 1-to-5 scale with a few lines of commentary). But those talks didn't involve a lot of input from employees, and bosses tended to be heavily negative, focusing on shortcomings rather than giving advice and encouragement.

Work with me

For the most part, Rivera says, managers didn't start mixing in positive feedback until the mid-1960s, prompted in part by the civil-rights movement and new requirements for equitable hiring and promotion, along with the growing popularity of the field of management psychology. The results were mixed at best, though, says Joshua Klayman, professor emeritus of behavioral science at the University of Chicago Booth School of Business.

On the one hand, reviews served as administrative tools, justifying promotions, raises or dismissals. On the other, they were meant to coach employees, identify growth opportunities and inspire improvement. Those two functions clash with each other, Klayman says.

"To engage people in an improvement-focused discussion, you want to minimize the threat of poor ratings or lost rewards," he says. "Our research shows clearly that focusing on past performance reduces people's acceptance of mixed or negative feedback and also reduces their motivation to change."

Performance reviews took on new urgency for companies in the 1970s, as inflation soared and the economy tanked. By the 1980s and '90s, they became notorious as tools for justifying big corporate overhauls. At General Electric, Jack Welch's "rank and yank" system required managers to grade employees on a curve and cull the bottom 10% each year.

This approach was initially celebrated by some management experts as a way to promote excellence, but the practice often sowed fear and rivalry. Corporate cultures built on constant threats struggled with trust and team collaboration.

"If you need to force a cultural shift, reviews can be effective. But they are not a sustainable approach," Rivera says. "Eventually, culture and morale are affected, and it has trickle-down effects in terms of performance and engagement."

Some companies embraced performance improvement plans, or PIPs, which could come at any time of the year and ideally helped at-risk employees get back on track. In this setup, the boss would write out concrete goals for the worker to hit and outline the steps needed to get there.

But many workers came to view the plans less as genuine support and more as bureaucratic tools used to justify firing people or withholding raises. And that didn't do wonders for morale and performance.

What's the point?

By the early 2000s, dissatisfaction with reviews hit a breaking point. Surveys found that most employees dreaded them and managers resented the paperwork and the awkwardness. Klayman points to another problem: the staggering expense. Organizations poured countless dollars and hours into designing systems, training managers, documenting conversations and maintaining software platforms.

"It was a costly process," he says. "Then the question is, 'Why did you do it, and what did you get from it?' " The answer was, often, not a lot.

The digital era both magnified the problem and opened the door for solutions. Companies flattened, with managers overseeing too many direct reports to be able to give meaningful feedback at an annual review. At the same time, companies found effective ways to use software platforms, so managers and peers could easily deliver more-frequent assessments to those above, below and alongside them.

That forced many companies to rethink the review cycle. Adobe became an early adopter in 2012, when the company dropped annual reviews in favor of regular "check-ins" focused on goals, feedback and development. Other large firms followed suit, and many employees took to the setup eagerly -- since it focused on future performance and how to achieve success, rather than rehashing past failures. It helped that the feedback was also coming from peers, and that the employee under review would get to review others in turn.

Academic research has also shaped the modern rebellion. Joonyoung Kim, a professor of management at the University of Missouri, has studied how employees perceive different performance-review formats. He found that numeric ratings, a staple of reviews, are viewed as unfair by nearly everyone (except those who received the highest scores). Written reports, rather than simple rankings, feel more useful and more equitable.

"When employees see only numbers, they feel the performance-management process is not fair," Kim explains.

Similarly, Klayman's research shows that backward-looking criticism often triggers defensiveness, which undermines the desire to improve, whereas focusing on plans for improvement does not. "There's nothing I can do about what I've done in the past," he says. "But I have some control over the future. If we jointly develop ideas for how to do things better next time, I leave with my self-esteem intact, with our relationship intact and motivated to make changes."

Full circle

Which is why many organizations are moving away from numeric scales and forced rankings, replacing them with narrative feedback and "360-degree" reviews. Instead of one annual judgment from a manager, employees may now receive regular comments from peers, subordinates and supervisors.

Advocates like Lehigh's Rivera see a future of AI-assisted coaching and adaptive learning systems that can link individual performance to organizational strategy.

Still, even as technology enables new approaches, the human element remains critical, says Klayman. Performance reviews work best as face-to-face conversations, he says, and many scholars caution against losing that dimension entirely. Algorithms may summarize feedback, but trust is required to engage in an honest conversation with a boss or direct report.

"Technology can make feedback faster and more consistent," says Klayman. "But at the end of the day, performance management is still about two people trying to work together to make things better. You can't automate trust."

Heidi Mitchell is a writer in Connecticut and London. She can be reached at reports@wsj.com.


(END) Dow Jones Newswires

November 04, 2025 12:00 ET (17:00 GMT)

Copyright (c) 2025 Dow Jones & Company, Inc.
