Issue No. 17
As I'm building this newsletter (and a podcast and YouTube channel) in the open, you will get updates on this project here from time to time.
Writing this week's newsletter brought back a flood of memories, as I recalled one of the best jobs I ever had – a place where I experienced significant growth early in my career.
This issue ended up longer than I usually aim for; I hope you like it!
💬 In this issue, I cover:
- My experience working in Accenture's cutthroat "up or out" culture.
- Why performance ratings suck and don't work.
- One alternative to performance ratings.
Accenture, a Walk Down Memory Lane
At Accenture years ago, I couldn't stand the annual performance ratings and promotion cycle. As part of the cutthroat consulting workforce, it was "up or out" – either get promoted within a set time or hit the road.
One thing that really stands out is the significance of the annual performance cycle. If you received the wrong rating during a promotion year, you could be stuck at the same level for another year. So, being on the right project with the right manager and career counsellor was essential. Without the proper conditions and support, you wouldn't move up, which could seriously impact your career.
The process assigned each of us a performance rating on a five-point scale. I can't recall the exact terms, but the range was something like Outstanding, Exceeds Expectations, Meets Expectations, Needs Improvement, and Unacceptable. These ratings were forced onto a distribution curve.
So much depended on our annual ranking, from promotions to significant differences in pay. It was a messy process that led to politics, cutthroat competition for top projects, and brown-nosing to make sure we had the support of our manager and career counsellor. All of this jockeying for position was aimed at making sure that, during the "calibration" meetings, we'd rank higher on the ladder.
Often, those up for promotion were ranked higher than more deserving colleagues, as missing a promotion had severe consequences. As a result, individuals who had an outstanding year were sometimes ranked lower to accommodate those on the promotion track.
The whole process sucked, and of course, it didn't improve the performance of employees. So, Accenture (along with numerous other companies) ditched the process and adopted a continuous feedback culture.
The Big Lie 🤥
People Can Reliably Rate Other People
Years later, I read the book "Nine Lies About Work: A Freethinking Leader’s Guide to the Real World" by Marcus Buckingham and Ashley Goodall, and my experience of performance ratings at Accenture (and other companies since) resonated with Lie 6: People Can Reliably Rate Other People.
In reality, none of this works. All the mechanisms and meetings – the models, consensus sessions, exhaustive competencies, and carefully calibrated rating scales – can't ensure that the truth about us emerges in the room. Why? Because they're all based on the belief that people can reliably rate others. And they can't.
In the end, our rating says more about our manager's personality or rating habits than it does about us or our work. People are simply incapable of rating others accurately.
Why does this happen? It all boils down to something called the Idiosyncratic Rater Effect.
The Idiosyncratic Rater Effect
The Idiosyncratic Rater Effect occurs when a manager's personal biases and habits shape their evaluations of employees. Essentially, two different managers may evaluate the same employee differently based on their own unique perspectives. This effect leads to inconsistencies in performance evaluations and can hinder the growth and development of team members.
Research shows that people cannot reliably rate others, and that a rater's patterns do not change when they rate two or more people. Ratings reveal much more about the personality of the rater than about the person being rated! Rating each other in the workplace may seem like a great instrument for measuring skills and performance, but the accuracy of the results has proved highly doubtful.
Researchers tried to minimize the Idiosyncratic Rater Effect by creating increasingly detailed scales, but they achieved the opposite result, finding that “the more complex the rating scale, the more powerful the influence of our idiosyncratic rating patterns.”
The book argues that because humans are unreliable raters of other humans, feedback is more distortion than truth – and 360-degree reviews and performance rankings built on those ratings are useless.
For me personally, these research findings are incredibly liberating. They explain what I've always felt but could never quite pinpoint. Staff rankings, talent reviews, calibration meetings, and 360 reviews are just plain useless! Why? Because these activities rely on fundamentally flawed data. Garbage in equals garbage out.
The Truth: We can reliably rate our own experiences, but not other people
So what is the truth? Well, it is pretty simple. People can’t rate others, but they can reliably rate their own experience.
In their book, Marcus Buckingham and Ashley Goodall argue that the image your manager has of you matters more than all the figures produced by these flawed tools. In this sense, team leaders can always rely on their own experience by asking questions about their reactions to each team member. Here's what the book suggests doing:
Rather than asking whether another person has a given quality, we need to ask how we would react to that other person if he or she did… asking the leader about what he would do, or how he would feel.
In other words, what matters more is your leader's own experience of how you show up at work.
As a team leader, what do I feel in the presence of this person? Would I promote him or her? Your subjective reaction may not be accurate, but it will be reliable because we cannot be wrong about our feelings.
In the end, you can only accurately and reliably rate your own experience!
The Top 4 Questions Every Manager Should Be Asking Themselves About Their Team
As a general rule, if you're after good data, be on the lookout for questions that ask only that you rate your own experience or intended actions.
Marcus Buckingham and Ashley Goodall suggest that these four questions are pretty good ones for a manager to ask themselves about their experience or intended actions for a team member.
- Given what I know of this person’s performance, and if it were my money, I would award this person the highest possible compensation increase and bonus.
- Given what I know of this person’s performance, I always want them on my team.
- This person is at risk for low performance.
- This person is ready for promotion today.
Each of these should be answered with a Likert scale (Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree).
I've found that my answers to these questions, and my reasons for those answers, have prompted better conversations than any performance rating system I've used.
Accenture Remains One of the Best Places I've Ever Worked
As I wrap up, I should mention that working at Accenture was one of the best experiences I've ever had. I grew so much in a short period of time and had many amazing experiences both in and outside of work.
Things changed a bit during my time there when they introduced the concept of "landing at a level," and I believe they have continued to evolve their organization.
💬 What has been your experience with performance reviews and ratings in your workplace? I would love to hear your thoughts (via a comment below) and about any alternative methods that have worked for you or your organization.