We’ve seen that one of the interesting challenges is performance management and getting a sense of who’s actually successful. How do we know how well they’re doing? When you can count widgets sold or airplane engines sold, you get it; you can look at margin and things like that. But for these more complicated roles, when you ask, “Well, who’s good?” and then “Why are they good?” you get some really squishy answers.
We’ve turned on something in the last couple of years called imputed performance, where we accept that most performance-appraisal ratings are useless: not because ratings are bad, but because everyone is a four out of five. You’re living in Lake Wobegon. Instead, you’re asking, and getting answers to, questions like: What would be great here? Is greatness exceeding the budgeted number, or the way you set a stretch target, so that the goal is to hit that number? Is it getting people promoted? Is it turning on new internet protocol? Is it conversion within your existing client base? What are the seven to eight things that really matter? Then those factors are weighted, and now you have a new number. But what you also have is differentiation. The science behind being able to predict how people will perform is that you need variability. As one of the people I studied under used to say, “Variance is your friend, because you can explain variance. And as soon as you can explain variance among people, you know what matters and what doesn’t.”
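To make the weighting arithmetic concrete, here is a minimal sketch of that idea in Python. The metric names, weights, and scores below are illustrative assumptions, not the actual seven to eight factors or weights any particular organization uses; the point is simply that a weighted composite spreads people out, and that spread (variance) is what makes the measure useful for prediction.

```python
import statistics

# Hypothetical factors and weights -- placeholders, not the real ones.
WEIGHTS = {
    "stretch_target_attainment": 0.35,  # did they hit the stretch number?
    "people_promoted": 0.20,            # talent they developed
    "client_conversion": 0.25,          # conversion within the existing client base
    "other_factor": 0.20,               # stand-in for the remaining weighted factors
}

def imputed_score(metrics: dict) -> float:
    """Combine several normalized metrics (0 to 1) into one weighted score."""
    return sum(WEIGHTS[name] * metrics.get(name, 0.0) for name in WEIGHTS)

# Made-up scores for three people.
people = {
    "A": {"stretch_target_attainment": 0.9, "people_promoted": 0.4,
          "client_conversion": 0.7, "other_factor": 0.5},
    "B": {"stretch_target_attainment": 0.6, "people_promoted": 0.8,
          "client_conversion": 0.5, "other_factor": 0.9},
    "C": {"stretch_target_attainment": 0.5, "people_promoted": 0.5,
          "client_conversion": 0.4, "other_factor": 0.3},
}

scores = {name: imputed_score(m) for name, m in people.items()}
print(scores)

# Unlike uniform 4-out-of-5 appraisal ratings, the composite differentiates
# people, and that variance is what you can then try to explain.
print("variance:", statistics.variance(scores.values()))
```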