Gartner ThinkCast

AI Is Ready, Your Workforce Isn't: Why AI ROI Falls Short


Most organizations don't have an AI problem — they have a human readiness problem. Despite massive investment and accelerating adoption, most AI initiatives are falling short of real business impact....

Transcript


(upbeat music)

AI is everywhere, but what does it mean for your business? Gartner is the world authority on AI,

with more than 200,000 client conversations,

and more than 6,000 written insights on AI in 2025 alone. Leaders across the C-suite, just like you, are partnering with Gartner to turn AI ambition into impact. Go to gartner.com/AI to learn more.

(upbeat music) - Welcome to Gartner ThinkCast. I'm Karen Stokes Lockhart. Today, we're tackling one of the biggest barriers to AI value creation: human readiness.

While AI capabilities continue to accelerate, most organizations are still struggling to turn that potential into real business outcomes. Not because the tools don't work, but because workforces, roles, and leaders aren't ready.

The hard truth: technology isn't the bottleneck. People are. In this episode, previewing a Gartner webinar, VP Analyst Alicia Mallory breaks down what's actually happening.

It's not a job apocalypse. It's hiring restraint, role redesign, and a fundamental remix of talent.

You'll hear why only a small fraction of AI initiatives are delivering true ROI, how IT and business roles are being reshaped by AI, and the essential skills every employee now needs in today's environment. Now, here's Alicia. - I am excited to take you through some of the big concepts that we've been sharing with our clients all over the globe.

So let me dive right in and start off with the state of where organizations are around AI right now. At the end of last year, 2025, one out of five AI initiatives was achieving ROI, so about 20%, and one in 50

was achieving true disruptive, transformative value. So a lot of organizations would say to me, "It's all right, but we want more. We want more value." And when we look at what value means

when it's the kind of value being seen from a CFO perspective: 74% say that they're seeing productivity gains, right? So this is like, Alicia saved 26 minutes a day, or we're making faster decisions.

That's good, but it's not a financial number

we take off the bottom line.

And on that financial ROI, only about 11% of CFOs are saying, "Yes, we're seeing clear ROI." So obviously, this is not good enough for most CFOs and for most organizations. And what's happening at the same time

is this adoption gap, yeah? So the first curve, at the top here, is what we call the AI innovation race. This is everything happening out in the industry. This is when you wake up every morning

and you look at your phone, and there are a million podcasts and news updates about the technology space, about what the vendors are creating and what capabilities are out there. That is moving very rapidly and climbing high,

whereas the second curve, at the bottom, is the AI outcomes race: the ability of organizations to get value from all that technology. And you'll notice not just a gap, but one that is widening.

It is the widest gap we have seen between the technology that's available and the ability of companies to capture it and get value from it. This is the state of what a lot of organizations

are feeling right now. And if I had to sum up in one sentence where we think companies are, what I'd love you to take away from today's session is this: not all the AI technology is ready, right?

Not all of it is 100%. There are still things to be developed there, but it's a whole lot more ready than humans are. And when I say the human side, I mean everything within our organizations, right?

AI readiness is helping us understand, is this science fiction or is this real? What's the capability that's out there in the marketplace that we can access? Human readiness is everything within our organizations

that lets us take that opportunity, capture value, and then keep getting it. It's our workforce, it's our organizational structures, it's our processes, it's our change management, it's our data capture, it's everything within our organizations

to capture that value. And that is consistently the bigger gap across organizations; it's not working right now, and we need to improve. So again, not to say,

as I said before, that everything with AI is ready,

but the human readiness side is where the bigger gap is.

So I want to take you through a couple of things that I think are important on this human readiness.

Number one is first, in our organizations,

do we even understand what value is?

I think a lot of IT leaders on the line,

you have a definition of it.

If we interviewed your CFO, your head of HR,

your marketing team, your CEO, everybody would have a different definition of value, right? I'll take you through each of the three layers here. We think there are three types of value with AI. The first one we call defend.

So these are use cases where we are augmenting our employees. This is about using AI, in many cases generative AI, to save some time, to write better emails, to get decisions done faster.

And that's great stuff. But typically, it's not going to be a financial return.

It's not something you could take off the bottom line.

So we call this ROE, return on employee. In the middle is what we call extend business cases. And this is about using AI to create competitive differentiation: using AI to re-engineer end-to-end processes, or to create better pricing

for your customers. That is where we see financial gain. And in fact, what we see is

that you need to re-engineer a substantial share of your end-to-end processes, starting at around 30 percent, to see this financial ROI. But that's where you'll see it, in these kinds of business cases. The third kind of value is what we call an upend business case. So this is about using AI to completely disrupt the marketplace,

to discover new products, new services. We call this return on future, or ROF. Not because you won't get a financial return on it; it's just a longer-term bet, right? You're not going to get it immediately.

And why I think this is important: a lot of times, I'll hear an executive or a business leader listen to a podcast or a vendor, and they'll see an extend business case, so they're going to expect ROI.

But then they go buy a defend use case, and they get ROE. And so there's this disconnect between the kind of value we're expecting and the value we're getting.

So one of the first pieces of human readiness

is being really clear on the value we want. And my recommendation here would be to take a portfolio approach. If you think of your AI investment as 100 pennies, I would probably only put 20 of those pennies

into defend business cases, and maybe more like 50 or 60 of those pennies into extend, right? And then, depending on how aggressive and risk-tolerant you are, how much you really want to put into upend.

But this is number one, right? Having a language around value and being really clear about what we're looking for. Now, one question that comes up a lot, often the elephant in the room when I talk about human readiness

and the people side, is what we sometimes hear referred to as this job apocalypse. Has that happened? We see in the news, in a lot of headlines, all of this talk about AI replacing jobs.

So we've done a lot of research into this and continue to look into it.

And what we have seen, the first half of last year,

we did a large analysis, and we continue to keep track of it. Almost 80% of job loss last year had nothing to do with AI. Rather, it had to do with economic uncertainty,

it had to do with lots of other things. What we found is that only one percent of job loss came from pure productivity gains, from people saying, "AI is doing so much great work, I don't need Alicia."

We only saw that at 1%. What we saw as much larger is the 17% of what we call repositioning. And essentially, what this is: when you see a lot of these headlines, Salesforce saying they don't need 4,000 heads, or IBM taking out certain parts of their organization,

behind the headline, what is typically happening is they are reducing headcount in one area and hiring net new headcount in another area. Now, why this is important: it's not that AI is stepping in and being so productive that

we don't need people. It's a strategic business decision to say, we're going to remove people in this area because we think we'll have more profitability and more growth having moved people

into different jobs in different areas.

I think this is important to know in this story:

when we look at this right now, layoffs and job loss are not the big story of AI. In fact, what we think is the far bigger story, the one organizations need to focus on, is role redesign, job redesign. We think this is a 20 times bigger effort

than hiring or layoffs. And this is where I really want to dive into how you start thinking about this. This is, as you can hear, a big effort. You don't have time to waste searching for answers.

Meet AskGartner, our clients' AI-powered gateway

to Gartner's trusted business and technology insights.

No more digging through lengthy reports or generic web results.

Want to benchmark your digital transformation? Looking to start piloting agentic AI? Need strategies to respond to cybersecurity threats? AskGartner delivers clear, executive-ready recommendations in seconds.

It's where AI stops guessing and starts guiding. Discover how AskGartner can work for you at gartner.com/AskGartner. A common question that comes up immediately when I talk about role and job redesign is, what are some of the new skills?

What are the new skills we need for AI? And I'm going to start off by saying, one of the things we have found to be true is that AI is far different in terms of the demands of training and change management

than past technologies. With traditional ERP, if we had 100 days to implement, we'd need another 15 to 20 days for training people, and then another 50 days, 50% of the time, on re-engineering processes.

With AI, those numbers increase. The effort required for training and preparing our people jumps to 25%. And when we think about the change management required to redesign processes and workflows and all of that,

it's up to 200% more effort. This is a huge hidden tax on the human side, the human readiness side, that a lot of companies are underestimating. The way I like to say this sometimes is: we

understand the day-one bill for AI, but we don't know the day-one-hundred bill, because some of these hidden costs come in there. And part of that is the training and the job redesign. So what are some of those skills

that everyone in your organization needs?

I'm going to start first by saying,

what are the skills that everybody needs? And then I'm going to double click into showing you how to think more specifically about particular roles. So first, what are the skills that everyone needs

around an AI-dominant future? Number one is use case identification: being able to recognize where there is a good opportunity for AI. And as I said at the start,

with that framework we looked at, ROE, ROI, and ROF, where the shift is going to happen is that a lot of employees may naturally look at ROE, at augmentation. But as I said before, the real value shift happens when we look more to the right,

toward use cases that are more about the reimagining of processes end to end, or the reinvention that delivers ROF. So having employees who have the skills

and the knowledge to look more to the right becomes important,

along with general technology fluency: understanding just enough about how some of these tools work in the background, so we can understand their limitations.

The third skill, which has been important

and still is, is prompting: having a set of prompting skills and, in particular, really pushing beyond where we are now, having the ability to understand the context and bring it into the prompting craft, the context

of the organization, the question you have, the situation that you're in, the risks that you may need the GenAI to account for. And finally, the last one is what we call discernment. Now, what I don't mean by this

is simply going in and verifying that what the tool said is factually correct. Of course, that's important; that is table stakes around AI. Discernment goes a step further to ask,

not only is this accurate, but is this useful? What is the output that is coming out of here? Is this useful for whatever decision or outcome we're trying to create? Because one of the things that we're experiencing right now

is that, while AI looks like it's saving a lot of time, it's actually also creating a lot of additional work for people. I'm sure we have all experienced this, where someone suddenly generates a 20-page note

that they can now send to you, that you now have to read,

so much more for you to consume. Part of that comes down to discernment. This needs to become part of the new social etiquette within our workplace: is this useful if I do this? What's the social norm that should be established?

Now, those are, as I said, general skills for all employees, but let's dive a little deeper into specific work and specific jobs, and how AI is changing them. I'm finding that everything AI touches,

it starts to morph. And we have to be prepared for some of that. So I'm going to share an example with you. This is an example for a software engineer role. Now, Gartner is building lots of examples of these in the IT space.

I think we've got like 15 or 20 right now

but right now I'm just going to show you one.

And the first place to start with

is to figure out or start to hypothesize

how we believe AI is going to transform workflows.

Level one to level five represents the maturity of agentic and generative AI: from level one, the static connected chatbot, all the way to level five, where it is autonomous in the SDLC.

The decision organizations need to start making right now is, across these levels, what do we want the human to do and what do we want the machine to do? Like in the beginning, we need AI to answer some of the questions

outside of a workflow and the human's going to manually

validate it, copy it, and paste it. Versus maybe at level four, where we want multiple AI agents doing some routine tasks all at the same time, and the human is the one directing those agents.

Now, a couple of things to note here,

not every workflow do you want to get to level five.

For some workflows, you may decide level three is good enough. And right now, in most organizations, the technology that's been adopted is more like level one or level two. The point is that this kind of decomposition

is the kind of work we are seeing at companies that are really engineering for ROI and realizing value. If I'm going to re-engineer processes, I have to figure out how roles are going to change. And I need to help my employees understand

the vision of where they're headed as well. Now, we've done deeper work on all of this to look across roles, and I'm just going to show you a little snapshot of a couple of them, where we create heat maps to say: as the technology

matures, what happens to these roles?

And you can see, basically, the darker the blue,

the more we need a human; the lighter the blue, toward white, the more the work gets automated. And there is an asymmetry to how capability is released. So, for example, what we see happen in the software engineering

role is that AI can automate some of the core tasks of that job more quickly. Now, does that mean we should get rid of all of our software engineers? No, we would not say that at all.

We would say that it reshapes what this software engineering role is; it creates more versatility. Versus, for instance, a people manager, where we see the technology is

never going to fully automate the job,

it's going to augment it to a really high degree. Now, as the core of this job gets automated and compressed, you have to decide what new future capabilities we want this role to take on.

Thanks for listening to this latest episode of ThinkCast. That was Gartner VP Analyst Alicia Mallory. To learn more about this topic and to register for the full webinar, follow the links in the notes. And to watch the video version of this episode and future episodes,

make sure to visit and subscribe to the Gartner YouTube channel. ThinkCast will be back, wherever you listen to podcasts, a week from today. In the meantime, please rate, review, and share with a colleague, so neither of you will miss it. ThinkCast is a production of Gartner.

This podcast may not be reproduced or distributed in any form without Gartner's permission. It consists of the opinions of Gartner's research organization, which should not be construed as statements of fact. Content provided by other speakers

is expressly the views of the speaker and/or their organization. While the information contained in this podcast has been obtained from sources believed to be reliable, Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information.

Although Gartner research may address legal and financial issues, Gartner does not provide legal or investment advice, and its research should not be construed or used as such.
