I completely agree with this in terms of employee performance reviews. What about rating people after interviews? Are the same biases present?
In particular, I am wondering about all those competency models – whether anyone has done some work flipping those around into "experience of this person" questions instead. The communication one was golden – "I clearly understand" vs "they are a good communicator" – and now I'm trying to think through all the other ones!
So good – does anyone have a list, resource or guide of sorts on better questions to ask that focus on rating personal experience vs rating other people?
Hi! Thrilled to be part of the coalition! I just saw Marcus speak at the Entreleadership Summit – I was immediately drawn in and am poring through the book and the materials on this site. I am a risk and strategy consultant with a primary focus on helping my clients gap-assess why strategy execution is off course, stuck, or straight-up failing. While there are often multiple drivers for this, it is no surprise that we often find people stuff (versus ill-defined vision, etc.) at the root. The 9 Lies and Truths shine light on these areas and I am terribly excited to bring this to bear in my process. My particular challenge is that I often work as a Board Advisor, and part of that job occasionally involves supporting a CEO review process. I am crystal clear on the need to fully re-engineer the questions to capture experience and intent, and I feel that input from direct reports to the CEO as well as from the Board is crucial to that process…. Clearly we would be designing appropriate topics and structure for the questions posed to each group. Thoughts?
very very insightful. my brain is spinning with possibilities about improving feedback (asking for it, and giving it). it’s all about a person’s experience – how they feel about a topic or issue. great work ashley and marcus – this could really have a huge impact on, not just feedback, but also relationships/collaboration at work.
Given we can reliably share our own perspective or experience, how do we account for the influence of our own personal bias on our perspective or experience of another person?
I think we can frame the questions to make it clear it is related to *your* rating by starting with “Given what you know of this person’s performance…”
Bogdan–thanks for the comment! We can indeed start our questions like that–however it’s what comes next that’s the trick. If we follow that with “…I consider them a top performer,” for example, we get bad data because we’re rating someone else. But if we follow your opening with “…I would promote them as soon as possible,” we get good data because we’re rating our own future intent. The key is to get to the action or emotion for the person responding to the question. Does that make sense?
So managers instinctively despise the annual review process because the measurement is bogus, yet we are forced to measure anyway?
Are we saying that nobody is good at rating anybody? Or are we saying that, as a general rule, people do not rate others well, with rare exceptions?
I always tell people that you can never judge someone else's emotions, because they are their emotions. So I'm really glad this is part of the truth. If we change our engagement survey so that it captures everyone's personal experience, do all the answers taken together then give a reliable picture of someone else? To take your example, 'I understand it clearly whenever my manager explains something' – if most people were to say 'no', can we then reliably conclude that the manager needs to work on explaining things better?
Benny–Thanks for the question. We’re getting into some fine details here, but if lots of people say they can’t understand someone, all we can actually reliably conclude is that lots of people can’t understand someone. And then if the manager tries something different, and now lots of people say they *can* understand, we can reliably conclude that they do understand! A subtle but important point–the truth always resides in those having the experience, not those generating it. Of course in practical terms, if someone told me that no-one understood what I was banging on about (please everyone don’t rush at once…) I’d try to remedy that. But they would remain the judges of their experience of me.
Thanks Ashley for your clear response. Fortunately your ‘banging on’ is very inspirational 👌
I believe it was Peter Drucker who said “communication is what the listener does”, so in your example I would conclude that the manager needs to work on her communication skills. Just food for thought.
HA! Ashley – I remember once telling a boss that I felt a certain way about something, and she looked at me straight on and said “No you don’t”. I was AGHAST!! I said to her, “You can’t tell me what I do and don’t feel – yes, I do feel that way!” and she was absolutely defensive about it. Needless to say, I left that job….
Erm, crikey. I think I’d have done the same…
We are in the midst of our annual performance assessment, where we rate people on how well they model our 6 corporate values, and how many of their goals they met. We’ve been doing this for years, but this is the last time we will ever do that assessment. It is gone for good. What I’m most proud of is that our HR people know this! When asked how we could simplify our people processes, their first recommendation was to eliminate our annual PA! The “freethinking” HR professionals understand that this process results in wasted time and unreliable data. That’s a great start for change. In the future we will be measuring leaders’ intent with regard to each of their team members–something that can be reliably measured.
Ron–so good to hear this–great stuff!
I love, Ron, that your HR folks are so progressive. What then, WILL you use to make decisions about how to spend “people” dollars? It’s the question I’m going to get and would love some concrete alternatives. I agree with everything around the lies and truths. I would love some guidance around what I could position as alternatives…..
Leesa–here’s what we do at Cisco (where we got rid of ratings in 2014): we give each team leader a budget and some guidelines, and ask them to exercise their best judgment. We can review their decisions up the line, as you would in most comp processes–but it turns out the rating was just a waypoint between the team leader and the comp decision, and you can just remove it. Hope that helps!
Ashley, thanks for sharing how you do this at Cisco. Sounds like the approach would save so much time and heartache. I am curious to know how you overcome the potential for leader favoritism.
This is a great little thread. We have had a version of this dialogue. An alternative was presented to individuals who have served on our employee engagement advisory group: more frequent and meaningful check-ins with managers, plus an end-of-year indication of whether an individual met the expectations of their job or did not. The dialogue was reduced to the issue of comparing one individual's work against another's, and how that results in a "fair" raise. My question back to the group was: why do we do performance assessments? What's the goal? Progress is slow.
Excellent example of the “pain rating scale” as a reference for this Truth – everyone can relate to that in their own experience. I’m interested to learn more about how to explain the rater bias to others…and am looking at my 360 now to start flipping questions to avoid the RUBBISH!
Stephanie–I’ve found this topic to be a hard one to explain. If you find something that works, could you come back here and let us all know?
Also, what about the theory of the Wisdom of Many?
Eddie–yes, the Wisdom of Crowds comes up a lot. There are two reasons why it doesn't work here. First, while it is true that asking a crowd of people to estimate something and then averaging their guesses produces a better answer than one person would alone, this only applies when the crowd has experience of the thing they're estimating. Ask a crowd to guess the number of M&Ms in a jar and you'll get a good estimate, because they know what M&Ms are and they know what a jar is. But ask about "strategic thinking", and each person's definition is meaningfully different from the next person's, such that the data is hopelessly unreliable. Second, and relatedly, we often presume that if one person's data is bad, we can remove that error by averaging lots of people's data. Sadly, this isn't true if the data errors are what we call systematic. Averaging cancels out random error, but when every measurement instrument is off in its own consistent way, adding more instruments doesn't lessen the error–it persists in the average. Where we are today with rating one another is that the measurement instrument doesn't work, predictably, and so adding more ratings doesn't achieve anything in terms of data reliability. It's a complex topic–I hope that helps shed some light.
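The random-vs-systematic distinction can be sketched in a few lines of simulation. This is a toy illustration with made-up numbers (a jar of 500 M&Ms, raters whose "instruments" all read 20% high), not anything from the book's data:

```python
import random
import statistics

random.seed(0)  # reproducible toy example

TRUE_COUNT = 500  # hypothetical actual number of M&Ms in the jar

# Case 1: random error only. Each guess scatters around the truth,
# so the errors cancel out as the crowd grows.
random_guesses = [TRUE_COUNT + random.gauss(0, 100) for _ in range(1000)]

# Case 2: systematic error. Every rater's "instrument" reads 20% high
# (each is measuring a subtly different thing), plus the same noise.
# Averaging removes the noise but the bias survives untouched.
biased_guesses = [TRUE_COUNT * 1.2 + random.gauss(0, 100) for _ in range(1000)]

print(round(statistics.mean(random_guesses)))  # close to the true 500
print(round(statistics.mean(biased_guesses)))  # near 600: still 20% off
```

However many guessers you add in the second case, the crowd average never converges on 500 – which is the sense in which more raters don't repair a broken instrument.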
Is this not based on what the metric is, i.e. they may be using a different foundation for their rating?
I think some sort of ranking is important, but not necessarily rating. Business reality can dictate that we may have to shrink a team, or, in a more positive context, move our players to another team. Having some idea of the impact each person makes relative to others in the team is, I believe, healthy and productive. Example: Three years ago, we promoted someone in a team I had just “inherited.” It was the easiest call ever, because this person was the only promotable player. Today, in a team of 15, the person ranks about #6 or #7. Even more valuable than three years ago, but part of a much stronger team… We gave ourselves options on personnel moves and know, with some degree of certainty, the relative strength of each player on the roster, and it helps us help them identify areas for self-development as well as coaching strategies for our leadership team.
You hit on a key relational factor organizations trip over time and again – the trust factor! Without trust, the "us" versus "them" cultural mindset rules, and management/executives come across to their teams as biased, favoring select personnel. This discussion actually proves the point.
What are we to do now? The rating isn't working, and the top is only connected to the number?
This is probably one of the most controversial of the lies, because there is a whole industry of performance management based on some sort of rating system. And all that is really being done is trying to figure out who gets the biggest bonus, the biggest pay rise, and, depending on where you work, promotion. I have seen HR people crumble when made to think about this – they are being told that one of their fundamental beliefs is simply wrong. And senior execs don't want to hear it; they have too much invested in the bonus system to want to mess with its fundamentals. Everyone else would probably be happier with a bit more equality. I look forward to hearing the Truth on this one!
Jane–yes, the Ratings-Industrial Complex is hard to put a dent in. But dent it we must if we’re to do right by our people. Deloitte is probably furthest along in this journey–HBR is hosting a webinar with two of the team there on Feb 14th to dive into the enormous progress they’ve made. You might want to check it out.
Ashley – Do you have a link for the HBR webinar with Deloitte? Thank you.
I get what you are saying about the pre-packaged tools having systematic errors. But we are a small firm, and we often get feedback on our team by talking face-to-face with others who work with each team member. I would be curious to hear whether your data shows the effect of gathering 360 feedback this way.
Beth–thanks for the comment. In this situation, it’s all about the questions you ask. Here are two we use at Cisco. First, we ask each person, “What is your experience of Ashley?” Then, we ask, “Do you have any advice for Ashley?” As you’ll see when we get to the truth, these are much more likely to yield good data.
Can’t wait to hear the truth on this one!
It’s funny. When we buy technology or equipment for work we don’t RATE those items. Why is it we look at people differently? We make decisions so differently with people.