Multi-rater, multi-perspective feedback programs have been around for a long time, serving many purposes with very mixed results and reviews. Like many other forms of feedback, if they are not done properly, the intended objectives and outcomes may not be realized. In some multi-rater feedback programs, many of the best managers have difficulty finding value in the process and understanding how to use the feedback to improve. Less effective managers frequently look for ways to devalue the process, and the inadequacies of the assessment itself have often made this too easy. In far too many instances, these techniques have been stretched beyond their capabilities or used to rate competencies for which they are simply not appropriate. Maybe it's time to shed a little light on what works and what doesn't. If you are going to use these techniques, we recommend that you give considerable attention to this important question:
Question: What makes 360 and upward feedback successful and meaningful, and what gets in the way of success?
If you want a multi-rater feedback program that is both meaningful and developmental, you must understand behaviors related to performance and then design an assessment that can appropriately measure those behaviors. The single most important factor in designing a successful multi-rater feedback program can be summed up in one sentence: Use the right items! The single biggest mistake made by most companies when designing their 360 or upward feedback assessments is the use of items that may measure important factors but are simply inappropriate for this kind of process.
To illustrate, consider the implications of this simple example. If you want to know whether I'm a good speller, which approach makes the most sense? You could ask ten people who work with me to rate my spelling, even though at least half of them are likely to be poor spellers themselves and half haven't really seen much of my spelling. Or you could give me a spelling test. Even better, you could ask an expert speller to rate samples of my unedited writing. The poorest choice is pretty evident. Yet we have witnessed this same type of mistake time and again.
If you ask my employees to rate my Strategic Planning skills or my Competitive Market Intelligence, the results are going to be meaningless, unless of course I have a lot of highly qualified subject matter experts working for me. There are times when the judgement of one expert is simply better feedback than the combined judgement of an uninformed group of people.
Said another way, the rule of thumb is: keep it simple! In almost all instances, if you want to evaluate my abilities in complex, analytical areas, the opinion of one qualified expert has a much higher likelihood of being accurate and meaningful than the ratings of several individuals who have not proven their competence in the factors they are rating.
Keeping it simple means relying on each rater's own observations and asking for judgements about behaviors they have actually seen. For example, if you want to know how well I interact with, listen to, communicate with, motivate, recognize, inspire, hold accountable, develop, and encourage the people around me, there is no better group to ask than the people who work for or with me, day in and day out. A short list of effective sample items might include the following:
My manager listens to me.
My manager provides me with timely, helpful feedback.
My manager has made a personal investment in my growth and development.
My manager cares about my well-being.
My manager recognizes me when I do good work.
My manager involves me in decisions that affect my work.
My manager helps me understand how my job contributes to the vision of this company.
What do these items have in common? Two things. First, they are simple, straightforward, and clearly observable actions and behaviors. Second, everyone who works closely with the person being evaluated has a valid opinion about each of these issues, because each has their own unique experience to draw from.
For many companies, the first step in creating their 360 process is to start with the competencies that have been developed for each position. This is frequently the first mistake. That is not a criticism of the competencies; they may all be highly relevant to job success. It's just that, for many of them, asking colleagues and direct reports to make the judgement is asking them to do something for which they are ill-equipped.
The best 360 feedback assessments contain items that are straightforward, meaningful, and actionable. People are asked to rate managers and colleagues on behaviors the raters understand, behaviors that reflect what the very best managers we have had the opportunity to study and learn from actually do. When designing a 360 feedback process for leader or manager development, it is highly recommended that the assessment contain dimensions that include, but may not be limited to, the categories listed below:
An additional component for consideration is the inclusion of an overall manager/leadership effectiveness item for the purpose of ongoing research related to the assessment. This item should be specifically designed to spread out the range of responses, especially at the high end of the scale. The long-term research findings become even more meaningful when raters are assured of confidentiality and understand that the ratings on this summary item will not be included in the feedback; raters then feel free to be completely honest in rating their manager. This overall effectiveness item provides the opportunity to learn which dimensions and items are the most critical to the perception of outstanding management and/or leadership. That research can then be used to make the feedback process more meaningful and efficient. Here is an example of an item and explanation that reflects the criteria discussed here:
The following question is for research purposes only and will not be a part of the feedback. Please respond honestly and be assured your response to this item is totally confidential.
In your opinion, which of the following categories best describes this person’s overall leadership potential and effectiveness?
1 = Bottom 50%
2 = Top 50%
3 = Top 25%
4 = Top 10%
5 = Top 3%
Using this item for research can provide insights and guidance for the organization in identifying developmental gaps, informing developmental investments and planning, and establishing input for organizational bench strength. In other words, this item is not about rating an individual; rather, it is designed to help organizations identify talent management priorities and plans.
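To make the research use of this item concrete, here is a minimal sketch of one way the analysis might be done, written in Python with pandas. The column names and ratings are hypothetical stand-ins for real survey data; the point is simply to see which feedback items track most closely with raters' confidential overall-effectiveness ratings.

```python
# A minimal sketch, assuming responses are collected one row per rater.
# Column names and example ratings below are hypothetical.
import pandas as pd

# Feedback items are rated 1-5; "overall_effectiveness" is the research-only
# item (1 = bottom 50% ... 5 = top 3%) that is excluded from the feedback report.
responses = pd.DataFrame({
    "listens_to_me":             [4, 5, 3, 4, 5, 2],
    "timely_helpful_feedback":   [3, 5, 2, 4, 4, 2],
    "invests_in_my_development": [4, 5, 3, 5, 4, 1],
    "overall_effectiveness":     [3, 5, 2, 4, 4, 1],
})

item_columns = [c for c in responses.columns if c != "overall_effectiveness"]

# Spearman rank correlation is a reasonable choice for ordinal rating scales.
correlations = (
    responses[item_columns]
    .corrwith(responses["overall_effectiveness"], method="spearman")
    .sort_values(ascending=False)
)

# Items at the top of this list are the ones most closely tied to raters'
# perception of overall leadership effectiveness in this sample.
print(correlations)
```

With real data there would of course be many more raters and items, and a regression-based approach might be preferred, but even a simple rank correlation can highlight which behaviors matter most to the overall perception of leadership.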
Beyond designing a strong 360 and upward feedback process, it is highly recommended that the intention of the program be clearly communicated to everyone involved. Multi-rater assessments are most often used for developmental purposes. In fact, when these programs are applied authentically and leaders appreciate and reap the valuable outcomes from this investment, a culture of advancement and development emerges. The 360 feedback and associated programs become trusted assets that people take seriously, both when rating others and when reviewing their own results. There are certainly other ways in which multi-rater assessments can be applied and leveraged, but there are pitfalls if they are not used appropriately. For more guidance in designing your process, please contact us at WSA for additional recommendations.