I've started seeking out research from machine learning professors and optimists to hear their forecasts for the future of work - a search that led me to a video interviewing the seven winners of the Queen Elizabeth Prize for Engineering: Yoshua Bengio, Professor at the Mila Quebec AI Institute; Bill Dally, Chief Scientist at NVIDIA and Professor at Stanford; Geoffrey Hinton, Professor at the Vector Institute; John Hopfield, Professor Emeritus at Princeton University; Jensen Huang, Founder and CEO of NVIDIA; Yann LeCun, Professor at New York University and Chief AI Scientist at Meta; and Fei-Fei Li, Professor at Stanford University and co-founder of the Stanford Human-Centered AI Institute.
I'm such a nerd. I recognized more than half of these people before they were introduced. In all the reading I've done on the topic, these are by far the folks most often cited.
They were asked the billion-dollar question, too - when will AI job loss hit us? I was sure they'd spout another made-up answer like 5 years, or some other number pulled right out of their... hat. But Fei-Fei Li surprised me by saying, "it's already happening."
What The Top Engineers In The World Think About AI Job Loss
You can hear all of the answers to this question at the end of the video - jump to the 30-minute mark - but here's what I took away from the whole thing: stop waiting for the light switch. There's not going to be a big event or moment where AI job loss happens all at once. Stop waiting around. It's already happening slowly, right now.
Perfect example: name a person who can speak 100 languages. ChatGPT can, and it's probably unrealistic to think any person could master that many. Alternatively, show me a machine that can pick up on nuance or contextualize information without a prompt. That doesn't exist (yet).
| Capability | ChatGPT | Humans |
|---|---|---|
| Number of languages | Can understand and generate text in ~100 languages | No human can realistically master 100 languages |
| Nuance detection | Limited ability; depends on prompts and training data | Strong ability to pick up nuance, tone, and subtle meaning |
| Contextual understanding | Requires explicit prompts to interpret context | Naturally contextualizes information without being prompted |
| Strengths | Scale, speed, multilingual output | Emotional intelligence, real-world context, intuitive understanding |
| Limitations | Lacks innate intuition; struggles without clear input | Limited memorization capacity; limited time |
I know it might not sound like it, but this is good news. It means there's a place for both machines and people. Plus, it gives a little clarity on what recruiters like you should be focused on right now.
What Recruiters Should Do Now With That Intel
First, dig into understanding what machines are good at. Listen to podcasts. Read books on the topic. Just don't aimlessly deploy machines into candidate experiences because you heard about it at some conference. Take some time to learn what machines are good at and contrast that with the problems you have. Use AI to solve problems, not just to use AI.
Then, do the same for understanding what people are good at. What can a person do better than a machine? For example, contextualization. AI can't take a generic job post and add the context to make it specific to your culture, company, or team the way a recruiter who filled the last 3 open seats can (if you train them how).
Finally, it's time to make manager training part of every onboarding so managers are prepared to manage machines. There was only one thing all of these smart people agreed on: you will be managing a machine within the next 5 years. The catch? We don't actually train most leaders. With that said, I don't know exactly what goes into learning to manage a machine. I do know this: if you have no leadership skills, it's going to be harder. (More on my Bounce Back Factor leadership training for recruiters here).
And the most important lesson of all? You don't need to read those salacious, biased, fear-driven headlines about when AI is coming for your job anymore.

