‘AI cannot determine cultural fit, leadership style, and chemistry’

June 8, 2023

This month, Sam Burman, global managing partner for specialty practices at Heidrick & Struggles, spoke to The Loop about how to balance AI and human insights in hiring practices…

What are, or what will be, the uses for artificial intelligence (AI) in the hiring processes of an organization?

It’s difficult to answer this question with any certainty, given the constant evolution of AI’s capabilities, but many solutions already exist, such as resume parsing, creating job specifications, automated candidate outreach and screening through chatbots, video capture and assessment, and summarization. Other uses could include automated background checks, automated testing, and target universe creation.

Could it lead to organizations becoming less reliant on human insights (e.g., feedback from internal and external stakeholders) and is this desirable?

It depends. For low-skilled roles, where hiring processes are typically quite transactional, then yes, decisions could become less reliant on human insights and more matter-of-fact, based on the individual’s skill set. However, for high-skilled and leadership roles, human insight will remain critical: right now, AI cannot determine cultural fit, leadership style, or chemistry, nor can it carry out robust referencing. The latter is invaluable insight and a real differentiator for an executive search firm.

Where organizations make greater use of AI in their hiring processes, could this create the risk that hires become more homogenous?

Potentially, yes, as AI is only as good as the rules and data you feed it, but it is probably too soon to call. Until we reach artificial general intelligence, where AI can think for itself, we should see AI as a tool to augment human insight, as opposed to relying on it to make impactful decisions (such as hiring decisions). For example, generative AI is the hot topic right now, with tools such as ChatGPT and Bard disrupting how organizations think about leveraging the technology. However, as impressive as these technologies are, they’re still fundamentally task-based: they essentially distil huge amounts of data, in a short amount of time, into a digestible format. In other words, they’re simply playing back what their underlying large language models (LLMs) have absorbed from training data, as opposed to ‘thinking’ for themselves and adding net-new insight.

How can AI enhance cognitive diversity, help establish psychological safety and instil a growth mindset? Or are these much more dependent on human interactions and insights?

Again, augmenting human insight with AI should allow the hiring manager or team to think more broadly and explore ideas and avenues that they might not previously have considered. What I think AI will be able to do is uncover potential candidates that a human recruiter otherwise wouldn’t have found or thought of, which will in turn promote diversity of all kinds.

Will there always be a conflict between human and machine insights within organizations, or do you consider there to be a way the two can be used harmoniously?

To my earlier point, I think they can be used harmoniously, but there needs to be a clear understanding of the role that AI plays. Augmenting human insight with AI capability is a healthy construct; overly relying on AI to make hiring decisions is not. An enduring conflict will persist if a company doesn’t recognise the role that each plays. An effective AI and data organization should be able to evangelize and communicate insights in a manner that appears harmonious to its respective constituents. Conflict arises when data quality is low or inconsistent; having the right foundations in place makes conflict less likely.