The Interview Performance That Has Nothing to Do With Job Performance

You have sat across from this person.
The answers were precise, well-structured, and delivered with exactly the right mix of confidence and humility. The technical questions were handled fluently. The behavioural questions produced clear, specific stories that hit every competency marker on your mental checklist. The team liked them. The second-round panel was impressed. The reference calls were positive. The offer was extended.
And then, six months into the role, the performance is not what any of that predicted.
The feature delivery is slower than expected. The architectural decisions are more conservative than the role requires. The communication within the team is technically competent but generates friction. The candidate who interviewed brilliantly has become the hire you are quietly managing.
You find yourself returning to the interview in your mind, replaying the answers, trying to identify what you missed. Most of the time, you missed nothing. The interview told you exactly what it was designed to tell you. The problem is that it was not designed to tell you what you actually needed to know.
What Interviews Actually Measure
The conventional interview, whether structured with competency-based questions or unstructured with conversational probing, is measuring something real. It is just not measuring job performance.
Research in organisational psychology consistently identifies unstructured interviews as having near-zero predictive validity for job performance. Even structured behavioural interviews, which are significantly better, predict only a fraction of the variance in actual on-the-job output. What interviews reliably measure is interview performance: a distinct competency set that includes verbal fluency, narrative construction, social calibration, emotional regulation under pressure, and the ability to present a curated version of past experience in a compelling way.
These are genuinely useful competencies in some roles. They are not the competencies that determine whether an engineer ships high-quality code, whether an operations manager makes good decisions under time pressure, or whether a product manager builds products that users actually want.
The candidates who score highest in conventional interviews are the candidates who are best at interviews. In competitive talent markets, these candidates have often had many interviews and have refined their presentation accordingly. The candidate who performs less well in the interview room, who answers more directly, with less narrative polish, and without the structured format that interviewers are trained to respond to, may have meaningfully better job performance potential. They simply have not invested in the presentation layer.
The candidates your process advances are the candidates who are best at advancing through your process. Whether they are the candidates who will perform best in the role is a different question, and one your current process may not be answering well.
The Nigerian Context Makes This Worse
In Nigerian corporate culture, factors specific to the local context inflate interview performance even further as a signal.
Presentation and social confidence carry disproportionate weight in many Nigerian interview settings. The candidate who is well-dressed, speaks confidently, uses appropriate corporate register, and demonstrates deference to senior interviewers while still projecting ambition is performing a cultural script that Nigerian hiring managers have been trained to respond to favourably. This script is learnable. It is taught in university career offices, in interview preparation workshops, and through observation of what gets rewarded.
The candidate who has invested in learning and performing this script may or may not be the candidate who will perform well in the role. The candidate who has invested in technical depth, operational judgment, or genuine problem-solving capability, and who has not invested in interview performance, may present less well and be systematically passed over by a process that cannot distinguish between the two.
Affinity bias compounds this further. The candidate who communicates in a style familiar to the interviewer, who references shared experiences or networks, and who feels “like us” is rated higher across all evaluation dimensions, including technical ones, even when the interviewers believe they are making purely competency-based assessments. The Nigerian interview room, where network overlap and shared institutional backgrounds are common, is particularly susceptible to this dynamic.
What Actually Predicts Performance
The research on what assessment methods actually predict job performance has been consistent for decades.
Work samples are the most predictive assessment method available. A take-home engineering challenge that resembles the actual work the engineer will do tells you more about output quality than four rounds of technical interviews. A brief, paid consulting engagement for a strategic hire tells you more about their thinking and fit than any interview process. The output does not lie the way an interview answer can.
Structured reference conversations with former direct managers are the second most predictive tool available and the most underused. Not HR confirmation calls, but substantive conversations about how the candidate handled specific situations. The former manager who says, with a pause before answering, that the candidate was “very good in certain contexts” has told you something that no interview question would have revealed. The research bears this out: structured reference conversations are significantly more predictive than behavioural interviews and comparable to work samples in their ability to forecast actual job performance.
Structured interviews with consistent, scored questions across all candidates significantly improve predictive validity compared to unstructured conversations. But the key word is scored: interviewers evaluating the same competencies against the same criteria, not following their conversational instincts through a list of questions.
The Uncomfortable Implication
If your current process consists primarily of behavioural interviews, a technical screen, and a panel conversation with the team, you are running a process that is systematically optimising for interview performance rather than job performance.
This is not a critique of your intentions. It is a description of the structural limitations of the tools you are using.
Hiring teams now conduct 42% more interviews per hire than in 2021, contributing to a 24% increase in hiring timelines, with no evidence that this additional process is improving outcomes. The fix is not more interviews. It is different assessment methods: work samples, structured references, and scored evaluation criteria that make the difference between interview performance and job performance visible enough to act on.
Measure the Right Thing
The best person for the role may not be the best interviewer in your process. In fact, in a market where interview preparation has become an industry, the inverse is often true: the most polished interviewer may be the one least able to show you what they are actually like when the performance ends and the work begins.
Building a process that can distinguish between the two is not a minor refinement. It is the difference between consistently building high-performing teams and consistently being surprised by the gap between how people interviewed and how they actually perform.
The founders and CTOs in Nigerian tech who close that gap will not do it by adding another interview round. They will do it by measuring the right things at the right moment in the right way.
Revent Technologies uses structured vetting frameworks, including role-specific assessments and direct manager reference verification, to ensure the candidates you meet are candidates you can trust.
Start here: www.reventtechnologies.com/site/hire-a-developer
Research Sources
– Cogn-IQ / Organisational Psychology: Unstructured interviews have near-zero predictive validity for job performance; affinity bias in interview settings
– Gem: 2025 Recruiting Benchmarks: hiring teams conduct 42% more interviews per hire than in 2021
– SHRM: Standardised interview processes and hiring outcome research
– Harvard Business Review: Work samples as predictive assessment tools