AI is coming for your job, but only a piece of it

Ben Toronto

Co-Founder & CEO

Image generated by Adobe

30 May

We’re more likely to be replaced by humans using AI than we are to be replaced by AI

Hardly a week goes by without some analysis or other of the effects of AI on the job market, or some company citing AI as the rationale for layoffs[1]. And many of us are probably asking ourselves what effect AI will have on our own jobs. I ask myself this question as an engineer and a startup founder, two areas where AI has already been highly disruptive.

Two recent stories about AI in customer service show that the answer to this question for most of us will likely be a combination of “No, your job isn’t going away” and “Yes, AI will fundamentally change the way you work.”

The first of these is from Klarna, the Swedish fintech company, which recently announced[2] the results of a month-long trial of an AI assistant in its app: the assistant had 2.3 million conversations with users during that month, the equivalent workload of 700 full-time agents. Not only that, but its interactions were judged to be more accurate and to reach resolution more quickly, while being on par with human agents in customer satisfaction.

The second story was the news that Air Canada has been ordered to pay damages to an unfortunate traveler after an AI support bot essentially made up a new policy on bereavement fares that the airline subsequently refused to honor[3]. The airline’s contention that the virtual assistant was “solely responsible for its own actions” thankfully did not hold up in court.

So, what’s the difference between these two stories? Both are about companies applying AI to customer service interactions. Both probably had similar motivations: lowering costs and improving the customer experience. At the risk of oversimplifying (there may well have been cases where the Klarna AI failed miserably, and cases where the Air Canada AI did magnificently), why did the first application of AI work so well and the second fall flat on its face?

The answer lies in what a group of social scientists, in a recent paper[4] on AI in the workplace, called the Jagged Frontier: AI performs remarkably well on some tasks, where the “frontier” of its capabilities lies further out than we expect, and surprisingly poorly on others, where the frontier has not advanced as far as we think. The trick is understanding what this jagged frontier looks like, and that isn’t easy unless you’re practiced at using AI in your own work.

For example, large language models (LLMs), the kind of AI behind ChatGPT, Gemini, and Claude, are remarkably good at distilling a large amount of information down to something smaller that still represents the whole. By their nature, LLMs are very good summarizers. This is why at TheraPro we chose therapy session summarization, a tool that helps psychotherapists draft their progress notes, as our first AI-based feature. Summarization is a logical first use case for an LLM.
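If you’re curious what summarization with an LLM looks like in code, here is a minimal sketch using the OpenAI Python client. The model name, prompt, and transcript are placeholders chosen purely for illustration; this is the general shape of the technique, not how TheraPro’s tool is actually built.

# Minimal sketch of LLM-based summarization (assumes the OpenAI Python client, openai>=1.0).
# The model name, prompt, and transcript below are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

transcript = (
    "Therapist: How have things been since our last session?\n"
    "Client: Better, mostly. I tried the breathing exercise before the team meeting.\n"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize the session transcript into a brief, factual progress note. "
                "Do not add anything that is not in the transcript."
            ),
        },
        {"role": "user", "content": transcript},
    ],
    temperature=0,  # keep the summary conservative and repeatable
)

print(response.choices[0].message.content)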

On the flip side, LLMs are not always good at precision and accuracy. If you’ve ever watched a chatbot hallucinate, you know how confidently incorrect it can be. That’s because these models are optimized to produce well-formed, reasonable-sounding answers, which doesn’t always overlap with producing correct answers. And while additional training is making them better at understanding their own limitations, hallucination has not gone away and may never fully disappear[5].

Is this where Air Canada’s bot failed? Perhaps. It’s hard to know for sure without understanding how it was implemented. But the contrast between these two stories, as we have them, does point to two important conclusions. First, AI is a tool, and like any tool, it’s good at helping with some things but not others. You wouldn’t use a hammer to trim your bushes, or clippers to pound in a nail. The wrinkle is that there’s no instruction manual for this tool: you have to use it in order to learn how best to use it.

The second conclusion is that AI is more likely to come for pieces of all of our jobs than for all of some of our jobs. It’s tempting to declare, based on the Klarna announcement, that customer service jobs are dead. While there will no doubt be fewer human agents required in the future, the Air Canada incident shows that in some cases they are still very much needed.

Long live these future customer service agents, who will need to be sharper, more experienced, better trained, and yes, more highly paid as a result. I’m fine with AI handling the simple stuff, but if I’m talking to a human, it means I have a real problem, and I want to be working with someone competent, experienced, and empowered to make decisions for my benefit.

A similar evolution will surely happen in other professions, for writers and teachers and engineers… and therapists. How well we adopt AI as a tool will determine how well we adapt to the workplace in the age of AI. We’re more likely to be replaced by humans who use AI than we are to be replaced by AI. Or, to look at it in a more positive light: far from putting humans out of work, AI will allow us to focus on the things that matter most in our professions. That should be good news for all of us.

[1] See, for example: https://www.axios.com/2024/01/18/tech-layoffs-ai-2024-google-amazon

[2] https://www.prnewswire.com/news-releases/klarna-ai-assistant-handles-two-thirds-of-customer-service-chats-in-its-first-month-302072740.html

[3] https://www.theregister.com/2024/02/15/air_canada_chatbot_fine/

[4] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321

[5] Though I buy into the argument that hallucination, or confabulation, is a feature, not a bug, of intelligence as we understand it. As Geoff Hinton said, “In our minds, there’s no boundary between just making it up and telling the truth. Telling the truth is just making it up correctly.” See https://www.newyorker.com/magazine/2023/11/20/geoffrey-hinton-profile-ai