Should you use an interview- or questionnaire-based 360 survey for leadership development?
This past week I met with two experienced HR professionals at a large Canadian organization to discuss the use of 360 surveys as part of their leadership development process.
In preparation for the meeting, I made several pages of notes on the topic off the top of my head. However, the occasion also motivated me to read some recent papers on the use of 360s for leadership development, and to think more deeply about the subject.
Going into the meeting, I was aware of my own biases. In the early part of my career, I primarily used questionnaire-based 360s to facilitate leadership programs. With these tools, each rater receives a link to a web-based survey and answers several dozen questions about the leader-participant, covering a range of behavioural competencies (e.g., influencing others, driving for results, strategic perspective), using a numerical rating scale (usually 1-5). The raters often include the leader’s supervisor, peers, and direct reports, as well as ‘others’ (like the supervisor’s supervisor, or ‘2-up,’ or even people outside the company, like customers). At the time I found the tools cost-effective on a per-person basis, easy to administer, and efficient in generating reports once all the data was collected.
However, as I advanced in my career and started working with more senior leaders, I began using interview-based 360s more often. Over time I developed a process that involved interviewing anywhere from 7 to 18 raters, in 30- to 60-minute sessions, covering the same rater groups mentioned above. After collecting the data, I would use a systematic coding process to distill from my interview notes the key themes related to strengths and possible development needs. In the summary report I would describe each major theme, illustrated by a sample ‘impact quote’ pulled from an interview. Sometimes I also included an appendix at the end of the report containing near-verbatim quotes from the interviews. In retrospect, I think I gravitated to this interview-based process because it was customizable (which executives seemed to like), user-friendly (raters seemed to enjoy being interviewed), and impactful (the clarity and quality of the data were high).
So as I prepared for the discussion this past week and reflected on the relative strengths of each approach, I began to realize that neither method is necessarily superior for leadership development programs. A more balanced framing is that each process has particular advantages, and that depending on the purpose of the program, users should consider which tool’s strengths best fit their needs.
In this article, I’d like to share my views on the relative strengths of interview- versus questionnaire-based 360 tools in the context of leadership development programs. I hope this analysis informs your choices the next time you participate in or administer a leadership program.
Advantages of interview-based 360 tools
- Customizable: With an interview-based tool, you can ask whatever questions you want, covering any themes you wish, and tailor them to the specific interests of the leader. With most off-the-shelf questionnaire-based tools, users can’t customize the questions or their wording. (In practice, I find leader-participants prefer me, as the assessor, to suggest several standard ‘boilerplate’ questions, and then to add 1-3 custom questions based on the themes they most want feedback about.)
- Greater clarity of feedback: Interviewers can follow up on ambiguous responses to increase the clarity of the feedback. If the rater is vague about the specific behaviours the leader-participant needs to improve, the interviewer can ask for greater detail. If an example would help illustrate a point of feedback, the interviewer can ask for one. If the rater uses a word that requires defining, the interviewer can probe it (e.g., ‘when you say the leader should be more “constructive” in their interactions, what does that mean in this organizational culture?’). To my knowledge, questionnaire-based tools have no mechanism for asking raters follow-up questions to clarify or elaborate on their responses.
- Accessing interview-based feedback is more ‘frictionless’: With an interview-based report, you can open the front cover and just start talking about the themes with the client. There are few, if any, numbers or statistics to interpret. The executive summary of key findings and any verbatim quotes usually appear in a narrative format, which is straightforward to digest. As a result, accessing the key development feedback is more ‘frictionless.’ By contrast, before debriefing a questionnaire-based 360 report, you have to spend considerable time explaining technical details like the structure of the report, how the rating scales work, and what statistics like ‘percentiles’ mean. The time it takes to access the report often feels like a distraction when all the leader-participant really wants is to dive into a development discussion.
Let me share further details on how the interpretation of percentiles adds time, confusion, and ‘friction’ to questionnaire-based 360 results. (Warning: I’m going into a nerd zone.) Imagine you’re a leader, and you receive feedback that your average rating for the dimension ‘executive presence’ was 4 out of 5. What does that mean? Is a 4 good or bad, effective or ineffective? Before a leader can develop, they must translate that number (and many other numbers in the report) into some kind of meaning. Intuitively, we might think 4 is a great score: 4 is 80% of 5, so that’s an ‘A-minus,’ right? In fact, the interpretation of ratings can be more ambiguous than it appears.

When your raters score you as a 4 out of 5 in a 360 survey, that’s called a ‘raw score.’ To determine how that raw score compares to others, it gets ranked against the scores of all the other people who have ever completed that survey. This rank ordering produces a percentile: the percentage of people who have taken the survey and were rated lower than you on a given dimension (i.e., if you’re in the 80th percentile on ‘executive presence,’ you scored higher than 80% of the people who have ever received ratings on that competency in that survey). This is where interpretation can get wonky. In practice, many 360 raters feel reluctant to give poor ratings to their work colleagues, so they inflate their scores. This might mean most leaders receive 4’s and 5’s on ‘executive presence.’ The net result is that I’ve seen leaders receive a raw score of 4 out of 5 on such a dimension and then be told that they fall in the 20th percentile. “But how can that be? I got a 4 out of 5, I should be in the 80th percentile, right?” Helping leader-participants make meaning out of their numerical and percentile scores, therefore, adds time, confusion, and ‘friction’ to the task of interpreting the results.
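To make the arithmetic concrete, here is a minimal sketch in Python. The norm-group scores below are invented for illustration (real norm groups are far larger), but they show how rating inflation can leave a raw score of 4 out of 5 near the bottom of the distribution:

```python
# Minimal sketch: how inflated ratings push a 'good' raw score into a low
# percentile. All scores below are invented for illustration.

# Hypothetical norm group: average 'executive presence' ratings for 20 leaders.
# Note the inflation -- nearly everyone sits between 4.0 and 5.0.
norm_group = [4.0, 4.1, 4.2, 4.3, 4.3, 4.4, 4.4, 4.5, 4.5, 4.6,
              4.6, 4.6, 4.7, 4.7, 4.8, 4.8, 4.9, 4.9, 5.0, 5.0]

def percentile_rank(raw_score, norms):
    """Percentage of the norm group rated lower than this raw score."""
    below = sum(1 for s in norms if s < raw_score)
    return 100 * below / len(norms)

print(percentile_rank(4.0, norm_group))  # 0.0  -> a '4 out of 5' is the lowest score here
print(percentile_rank(4.5, norm_group))  # 35.0 -> still below the middle of the pack
```

In an inflated norm group like this one, a raw 4.0 (‘A-minus’ territory on its face) outranks no one, which is exactly the kind of result that surprises leader-participants.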
- You can capture contextual information: When you interview someone as an assessor, you can gather rich contextual information beyond the questions asked, which can enhance the quality of the feedback. What’s the rater’s body language, and how consistent is it with the feedback they’re giving? Is the rater warm or hostile? Do they seem invested in giving the leader-participant feedback? Did they prepare for the interview in advance? Based on all of that, how should I weight the feedback? And importantly, is this a person the leader-participant may want to invite to be an ally for their development after the feedback process is over? In addition, during the interviews assessors may uncover useful contextual information about the organization. Raters may tell stories or use slogans or phrases that describe the culture or environment the leader-participant is working in, which may in turn generate ideas on how best to use the data.
- Interview data may better capture the over- or under-use of leadership skills: Typical questionnaire-based 360 rating scales use a 1-5 format, which implicitly assumes that ‘more is better’ on any given leadership trait. However, accumulating evidence contradicts this, and suggests that extremity (using a skill either too much or not enough) can contribute to underperformance. Because of this scale format, most questionnaire-based tools don’t measure the ‘too much/too little’ aspects of leadership. With interview-based 360s, assessors can measure these qualities by asking questions like ‘what does this leader do too much of, too little of, or get just right in their behaviour?’ Or you can ask ‘what should this person start, stop, or continue doing?’, which gets at the same concepts (a simplified sketch after this list illustrates the idea). Finally, I would add two caveats. First, there is one questionnaire-based 360 survey, the Leadership Versatility Index (or LVI), which has developed a unique rating format that can measure the over- or under-use of leadership behaviours. Second, questionnaire-based surveys can ask the ‘start, stop, continue’ questions in their open-ended feedback sections (where raters can write free-form comments about the leader-participant), but I find this data is often less detailed and useful than that collected from interviews.
- The feedback is immersive, and therefore may increase readiness for change: If the interview-based report contains verbatim, or near-verbatim, rater responses, the data can be quite engrossing for leader-participants to read. This seems to be because the feedback represents the voices and words of the raters (anonymized, except for the supervisor’s). Many clients tell me they read these verbatims multiple times. How often in life are you given the opportunity to read 40-50 pages of vivid, candid feedback about what others observe to be your strengths and the areas you could work on to improve further? It’s compelling. My impression is that this immersion generates deeper processing of the content and greater consideration of behaviour change alternatives. By contrast, the technical details in a questionnaire-based report represent a kind of language that users must learn, and keep relearning, during the interpretation phase (imagine constantly checking the dictionary for definitions of words you don’t know as you read a book in a new language). My experience suggests that having to re-familiarize yourself with these technical details in order to interpret the findings reduces the immersive quality of the feedback, and (I’m asserting) may reduce its overall developmental impact.
- User experience: In my experience, raters enjoy being interviewed more than completing an online questionnaire. The interview process is relational: through a thoughtful dialogue, both parties can build an interpersonal connection, which feels rewarding in itself. It’s also flattering for raters to speak their mind and have someone take copious notes on it. And the interview format is familiar to many. You might say we live in an ‘interview society,’ in that every time you watch current events television or listen to a podcast, you encounter interviews. The format has become one of the dominant ways we share information throughout our culture.
- Following up on unanticipated areas: One of the great advantages of interview-based processes is their flexibility to explore themes that arise unexpectedly. If a rater mentions a unique, surprising, or controversial point of feedback during an interview, the assessor can ask several follow-up questions to better understand it. In other words, assessors can change the question list on the fly if they think doing so will surface valuable feedback for the leader-participant. By contrast, questionnaire-based methods don’t offer the flexibility to alter questions based on emerging data in real time.
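As an aside on the ‘too much/too little’ point above, here is a simplified sketch of how a versatility-style rating format differs from a 1-5 scale. The scale values and behaviours are invented for illustration, and this is not the LVI’s proprietary scoring; the idea is simply that 0 means ‘the right amount,’ and effectiveness relates to distance from zero rather than to a higher number:

```python
# Simplified sketch of a 'too little / too much' rating format (illustrative
# only; not the LVI's actual scoring). Raters score each behaviour from
# -4 (much too little) to +4 (much too much); 0 means 'the right amount'.

ratings = {
    "takes charge": 3,      # over-used: dominates discussions
    "empowers others": -2,  # under-used: delegates too little
    "strategic focus": 0,   # about right
}

for behaviour, score in ratings.items():
    # Unlike a 1-5 'more is better' scale, effectiveness here is about
    # closeness to zero, not a higher number.
    deviation = abs(score)
    label = ("about right" if deviation == 0
             else "over-used" if score > 0
             else "under-used")
    print(f"{behaviour}: {label} (deviation {deviation})")
```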
So interview-based 360s have many strengths. What advantages do questionnaire-based 360s offer?
Advantages of questionnaire-based 360 tools
- Lower cost on a per-person basis: If you need to use 360 surveys at scale, questionnaire-based tools are more cost-effective. Conducting interview-based surveys is time-intensive for the assessor: think about all the time involved in interviewing 10+ raters, editing and ‘cleaning up’ interview notes, combing through pages of notes looking for patterns, and then trying to pull out the most important messages in the data. It’s real ‘needle work,’ to borrow a phrase from one of my former professors. Interview-based 360s will likely cost thousands of dollars per person, while questionnaire-based surveys may cost only hundreds. If you’re a large organization looking to use 360s for hundreds or thousands of team members, questionnaire-based tools may be the only option if you want the project budget to work.
- Easy to administer: For questionnaire-based tools, technology makes the data collection process efficient and administratively flexible. Raters receive auto-generated links to complete their survey, and regular reminders if they don’t. New raters can be added at any point, lost passwords can be recovered, and deadlines can be extended at any moment. In addition, many surveys delegate administrative responsibilities to the leader-participant, which further reduces the time required to oversee the process. For example, with some surveys, leader-participants receive a link to the survey’s portal and use it to enter the contact information for all of their raters. Interview-based surveys require more manual administration in my experience, such as requesting or rescheduling interviews, sending notes to raters to review for accuracy after the interview, and sometimes scheduling separate conversations with raters to discuss concerns around confidentiality.
- Faster report turnaround: Also thanks to technology, once the raters submit their feedback in a questionnaire-based process, the report can be generated within hours. By contrast, with interview-based surveys there may be a lag of one to two weeks between the completion of data collection and a finished report, because the assessor requires considerable working time to summarize the data. (Perhaps artificial intelligence will streamline the summarizing of qualitative data in the future; I suspect some consulting firms are experimenting with this right now.)
- Standardized results across people or cohorts: Questionnaire-based tools are useful because they measure all leader-participants on exactly the same competencies, using the same rating scale. This allows organizations to compare results across large numbers of leaders for a range of purposes. For example, who among these leaders should be selected into a high-potential program? Who should be selected for specific training (e.g., on communication, strategy, or influencing)? If the company is going through a major organizational change and we believe we need competencies XYZ to succeed in that initiative, who on our team has those skills, and who needs some remedial support in those areas? With questionnaire-based tools, users can benchmark participants in a localized way, comparing them to other members of the organization, to facilitate various talent management decisions.
- Supports a broader range of talent management functions: As suggested above, because of their standardized nature, results from questionnaire-based tools can be used both for the development of the individual participant and for a wide range of other purposes, e.g., high-potential selection, training selection, informing talent management or succession planning discussions, and change management support. This makes questionnaire-based tools distinctly advantageous for larger organizations with sophisticated talent management needs.
- Measuring change over time: Some leadership programs seek to measure the behaviour change of leader-participants by administering 360s at the beginning of the process and then again at some later point (say, a year after starting the engagement). Again, because questionnaire-based tools are standardized, it’s easier to compare results across Time 1 and Time 2 administrations.
- Can examine ratings by source in a simple way: Do you want to examine how ratings from a leader’s supervisor compare with those from peers, direct reports, and the 2-up? This is easy to do with a questionnaire-based tool, again because the survey measures every leader using the same questions and process, making comparisons across rating sources simple and clear (see the sketch after this list). With interview-based data, you can set up the process to make some comparisons across rater groups, but even then, in my experience, interpreting clear themes across rater groups with qualitative data requires more cognitive effort and time.
- The quality of the feedback is less dependent on the individual skill of the assessor: With interview-based 360s, the quality of the end product is closely tied to the skill, expertise, and experience of the assessor, who makes numerous judgment calls when aggregating the data, including which themes, data points, and rater voices to over- or under-weight in the final summary. With questionnaire-based tools, software aggregates the data in a consistent way, and there is less scope to inject idiosyncratic judgment into the final results.
- Higher-quality competency ratings: The competencies used in 360 processes are often multi-layered, or hierarchical. A competency may have a headline title (like ‘strategic perspective’) and several more specific behaviours that fall under that title (like ‘links her responsibilities with the mission of the organization’ and ‘thinks with a long-term time horizon’). Questionnaire-based tools often ask raters to give feedback on all of those component parts, which yields quite fine-grained feedback about each competency. With an interview-based process, collecting such rich competency data is possible but harder: asking raters to rate a leader’s performance on every part of every competency may strain the time constraints of the interview, and may crowd out the ability to ask other valuable open-ended questions or to follow up on unanticipated areas of inquiry, reducing some of the unique advantages of an interview-based process.
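To show the kind of analysis the last few points describe, here is a brief sketch using pandas (the data and column names are hypothetical) that compares average ratings by rater source and the change between a Time 1 and Time 2 administration:

```python
# Brief sketch (pandas): comparisons that standardized 360 data makes easy.
# All data and column names are hypothetical.
import pandas as pd

ratings = pd.DataFrame({
    "competency": ["influence"] * 6,
    "source": ["supervisor", "peer", "peer",
               "direct report", "direct report", "2-up"],
    "time1": [3.0, 3.5, 3.2, 4.0, 4.2, 3.4],
    "time2": [3.8, 3.9, 3.6, 4.1, 4.4, 4.0],
})

# Average rating by rater source, plus the Time 1 -> Time 2 change.
by_source = ratings.groupby("source")[["time1", "time2"]].mean()
by_source["change"] = by_source["time2"] - by_source["time1"]
print(by_source.round(2))
```

The same few lines of aggregation work for any leader or cohort measured with the same survey, which is precisely the benefit of standardization.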
If you’re interested in using a questionnaire-based 360 survey, here are a few options I’ve worked with, and some I haven’t used but have heard good things about:
- Benchmarks – published by the Center for Creative Leadership (CCL), this tool is a good fit for mid-level management.
- Executive Dimensions – also published by CCL, this survey and its competencies target senior executives and C-suite leaders.
- LVI – this tool uses a unique rating scale that measures whether a leader uses too much, too little, or just the right amount of a behaviour. It’s based on the assumption that extremity in leadership, either over- or under-doing a skill, can contribute to underperformance. In fact, some research supports the notion that the LVI rating scale measures unique characteristics of leadership compared to typical 1-5 rating scales. Note that I haven’t used this tool or been certified on it, but I would use it given the opportunity.
- LEA by MRG – I haven’t used this tool but multiple other consultants I’ve worked with have used it and praised it.
- Qualtrics/SurveyMonkey – for building custom 360 questionnaires, SurveyMonkey is the cheaper option, and Qualtrics is a more expensive but sophisticated and feature-rich alternative.
I hope this article provides a balanced analysis of the relative strengths of questionnaire- vs interview-based 360 approaches. I also hope it makes clear that neither process is inherently superior to the other; rather, each offers unique advantages, and their value depends on how well they fit the needs of the user and the purpose of the program into which they’re integrated.
Thank you for reading and I would welcome your feedback. If you would like to receive future original articles providing unique insights on leadership, please consider subscribing to my newsletter at www.timjacksonphd.com
Tim Jackson Ph.D. is the President of Jackson Leadership, Inc. and a leadership assessment and coaching expert with 17 years of experience. He has assessed and coached leaders across a variety of sectors, including agriculture, chemicals, consumer products, finance, logistics, manufacturing, media, not-for-profit, pharmaceuticals, healthcare, and utilities and power generation, as well as multiple private-equity-owned businesses. He's also worked with leaders across numerous functional areas, including sales, marketing, supply chain, finance, information technology, operations, sustainability, charitable, general management, health and safety, and quality control, and across hierarchical levels from individual contributors to CEOs. In addition, Tim has worked with leaders in several geographical regions, including Canada, the US, Western Europe, and China. He has published his ideas on leadership in both popular media and peer-reviewed journals. Tim has a Ph.D. in organizational psychology and is based in Toronto.
Email: tjackson@jacksonleadership.com
Web: www.jacksonleadership.com
Newsletter: www.timjacksonphd.com