Neil Jordan – The AI Job Interview

It has been reported that Ribbon AI, a Canadian company, is offering employers a new form of screening interview conducted by artificial intelligence. With a view to helping organisations hire staff more quickly and job-seekers find work sooner by cutting out ‘dead time’ in the recruitment process, early-stage job interviews are conducted in a fashion similar to a remote interview, with applicants responding to questions and prompts from a natural-sounding, AI-generated voice.

The idea is that the AI interview saves time for job-seekers, who get to interview almost immediately rather than waiting two or three weeks until a member of the HR department can see them. Ribbon does not make decisions about candidates; rather, it provides companies with applicants’ responses to questions, offering transcripts, summaries, analyses and scoring of candidates’ responses, which the company can then use to inform its decisions according to internal criteria and requirements. As such, the intention is that companies should gather information about candidates and assess their suitability for roles more efficiently than by way of an early-stage screening interview.

Talking (in)to Machines

It is worth pausing to ask what candidates are being expected to do here.

It is not uncommon for companies to ask applicants to complete psychometric tests before offering an interview, and with increasing frequency candidates are also provided with set questions and instructed to submit answers recorded using a webcam. Anyone who has been through such a process might have experienced certain misgivings. The rationale for including tests is intelligible (though it is not always clear whether they really determine which candidates possess the skills requisite for the role on offer), but recording answers can feel artificial. We might wonder what is to be gained by this approach, and why the hiring company is unwilling to talk to candidates in person at this stage, particularly if it is serious about ultimately hiring and investing in a member of staff. In any event, if the recorded answers are assessed by a member of HR staff or a hiring manager, it is not clear that such an approach is any more efficient than a short, standard-format interview. Perhaps the most sympathetic view is to see it as akin to recording a spoken assessment as part of a remote learning course, perhaps in modern languages. Strange as it feels to speak (in)to a machine, candidates assume that the recording will at least be properly assessed by a human being at some stage.

A One-Sided Conversation

In this instance, however, it seems that candidates are being expected to speak to and interact with a machine as though that machine were the human interviewer. An applicant is in effect asked to hold a conversation with an interlocutor that has no consciousness, is unaware of the interviewee’s existence and understands neither the candidate’s answers nor even its own questions. In such a situation, there can be said to be no conversation at all, and therefore, arguably, no interview. This is a peculiar arrangement, whereby an applicant is expected to behave as though a non-conscious object is interviewing her.

Moreover, her responses will not necessarily be sent to a human assessor for consideration at all. Rather, a decision might very well be made on the basis of the analyses and data that the AI produces from her responses. These analyses may be less biased and more uniform than would be possible for human interviewers, but the applicant has been ‘interviewed’ and ‘assessed’ by a machine. She might be rejected without any human consideration of her actual performance; instead, her candidacy depends on a view taken of a computer-generated assessment of that performance.

Human-Centric AI

What does such an approach suggest about a company’s attitude to applicants, when it chooses to have them prove themselves before a machine before granting any meaningful human interaction?

Andrei Rogobete has written about the need for a human-centric AI, stating that AI ‘ought to be embraced in a prudent manner that directs its contributions towards human thriving’. In arguing for a use of AI that benefits humanity first and foremost and does not deify machines, he quotes Pope John Paul II’s call for humanity to ‘use science and technology in a full and constructive way, while recognizing that the findings of science always have to be evaluated in the light of the centrality of the human person (and) of the common good’.

If we are to make responsible use of artificial intelligence for the good of humanity, the technology ought to be restricted to those tasks which any given form of AI performs well, so that it brings clear, tangible benefits to all relevant parties. Crucially, the dignity of the human beings that this form of technology is to serve should always be our foremost consideration. It is pertinent to ask, therefore, whether reducing interviews – for which candidates will often prepare carefully and about which they are ordinarily nervous – to a confected and solipsistic ‘conversation’ with a non-conscious AI model, rather than meaningful engagement with other human beings, is indicative of such an approach. Is it consistent with human dignity to base decisions about an individual’s potential livelihood not on a meaningful conversation with that person – and perhaps not even directly on her responses themselves – but on the reduction of her performance to a collection of data and analytics produced by artificial intelligence – artificial intelligence which has not seen or heard, let alone engaged with or understood, the person in question?

Neil Jordan is Senior Editor at the Centre for Enterprise, Markets and Ethics. For more information about Neil please click here.
 
Image courtesy of Freepik (www.freepik.com)