Jeff Jolton on How HR Leaders Can Use AI to Promote Greater Engagement, Not Just Efficiency

By Michelle Gouldsberry
October 6, 2025
3 minute read

AI is transforming the way organizations work, but speed and automation alone won’t deliver stronger performance or more resilient teams. 

On this episode of People Fundamentals, Jeff Jolton, leader of data and insights in Spencer Stuart’s Leadership Advisory Services, shares why leaders must resist the trap of “job enlargement” and instead design AI strategies that unlock learning, collaboration, and innovation.

“If it’s allowing us to learn more, innovate more, collaborate more, those are the things that we really should be measuring as the impact of AI—not just the operational things.”

Subscribe wherever you listen to podcasts: Apple Podcasts | Spotify | YouTube Music

Watch out for ‘more of the same’

Jeff draws a clear line between two outcomes of AI adoption: job enlargement and job enrichment. Enlargement happens when AI simply accelerates repetitive tasks. “We’re just doing more of the same, where it helped me write more reports, so now I’m just writing even more reports,” Jeff explains.

Enrichment, on the other hand, gives employees new opportunities to grow and connect their work to something bigger. “When we tie a job to the vision, to the strategy, when we give people a sense of ownership and purpose, that’s when we start enriching the job. And it’s giving them something new to work on, something challenging, something to build on, then it’s more exciting.”

The distinction matters because enlargement drains energy while enrichment builds engagement. The real promise of AI is to free people to do more meaningful work, not just more of the same.

Give your people time to experiment with AI

The road to AI success is rarely instant. Leaders may expect quick ROI, but Jeff reminds us that organizations need patience to see the benefits. “You’ll see a lot of research out there, a lot of data points that talk about CEOs and leaders are frustrated that they’re not getting this magnificent return on AI out of the gate,” he notes.

The gap often comes down to skill and confidence. “I can’t tell you how many people I’ve talked to. It’s like, ‘Yeah, they told us that we need to do AI and I’ve never touched it before. I don’t know what generative AI prompts are supposed to be, and I’m just supposed to figure it out.’”

Employees need time to explore, test, and even fail before they can create real value with AI. Building space for that experimentation is as important as the technology itself.

Build critical thinkers and adaptable teams for the AI era

AI may reshape workflows, but people remain at the center. Jeff emphasizes two skills above all others: adaptability and critical thinking. “AI is a really powerful tool, but AI lies. AI is deceptive. AI is very much telling you what you want to hear. It is not a paragon of truth and accuracy and perfection,” he says.

That means employees must sharpen their ability to question, interpret, and apply results thoughtfully. Leaders must also model curiosity and data literacy. “I want them to ask better questions. I want them to connect AI to value.”

And for early-career employees, AI presents both challenges and opportunities. As entry-level tasks are automated, Jeff suggests reimagining learning paths through rotations and gradual skill-building: “You could spend more time rotating across different roles to get a broader sense before you then move up into that more high-level position.”

Jeff’s perspective brings the conversation back to people. AI should create more space for creativity, innovation, and growth. When organizations pair data with enrichment, adaptability, and human-centered design, they not only improve performance but also build workplaces where employees feel valued and motivated.

People in this episode

Jeff Jolton: LinkedIn

Transcript

Jeff Jolton:

At the leader level, I think we want leaders to become more data-centric. I’m not saying leaders need to be data scientists. But what I mean is I want them to be data-centric: I want them to ask better questions. I want them to connect AI to value. So leaders need to make sure we’re feeding and developing good data, right? They have to be good consumers with really good data-centric minds, so they know what they’re bringing into their organization.

Ashley Litzenberger:

Hi, and welcome to Betterworks People Fundamentals podcast. I’m your host, Ashley Litzenberger, senior director of product marketing.

Michelle Gouldsberry:

And I’m Michelle Gouldsberry, senior content marketing manager here at Betterworks.

Ashley Litzenberger:

Betterworks’ core belief in people fundamentals revolves around helping HR lead through constant change by focusing on core values like fairness, support, balance, and enabling growth opportunities for employees.

Michelle Gouldsberry:

These tenets empower everyone in the workforce to strive for excellence, to foster creativity, and to acknowledge each other’s contributions. We believe that strategic HR leaders can translate these principles into action to shape their workforces for the better and drive meaningful business outcomes.

Ashley Litzenberger:

And in this show, we’re digging even deeper into these principles by listening to experts share how you can make them come alive at your organization.

Michelle Gouldsberry:

In this episode, we’re joined by Jeff Jolton, leader of Data and Insights in Spencer Stuart’s Leadership Advisory Services. Spencer Stuart is a global executive search and leadership consulting firm, and Jeff helps clients and consultants transform assessment data into actionable insights that support business performance and change initiatives.

Ashley Litzenberger:

Jeff brings a thoughtful perspective on how organizations can use AI in ways that go beyond efficiency. He explains the difference between job enlargement and job enrichment and why that distinction really matters for building engaged, resilient workforces.

Michelle Gouldsberry:

We also dig into how AI is reshaping the skills employees need, from critical thinking to adaptability, and why leaders must rethink development paths, especially for early career employees. Jeff highlights how leaders can strike the balance between experimenting with AI tools and maintaining focus on long-term growth. It is a forward-looking conversation that connects people, data, and strategy in really practical ways, so let’s dive in.

Ashley Litzenberger:

Hi, Jeff. It is so nice to have you on the People Fundamentals podcast today.

Jeff Jolton:

Ashley, great to be here. I’m very excited to get a chance to talk about AI and how to create meaningful work with it, not just more output. So this is going to be a great conversation, I hope.

Ashley Litzenberger:

So let’s go ahead and dive in. Our first question for today: how do you see AI currently being used in performance management, and what are some of the opportunities and risks for organizations?

Jeff Jolton:

Yeah, definitely. I think one of the things that really stands out, just from what we started to talk about, is the tendency for AI to be looked at in terms of operational or efficiency measures. And that’s because that’s where it’s helping, right? It’s helping get things done faster, produce more output more quickly, or improve quality, things like that. And so there’s a tendency to feel like that’s what we should be measuring.

But what I really encourage the leaders I work with, the organizations I talk to, is to flip the script on that and think not just about the productivity but about the human impact, and really about what we’re tracking: not just the cycle time or the error rate, but what AI is making that time for, right? If we are using AI to free up time, to do that mundane work for us, then we should be measuring the extra, higher-impact things. So if it’s allowing us to learn more, innovate more, collaborate more, those are the things that we really should be measuring as the impact of AI, not just the operational things, which I think just come very naturally.

And we have seen leaders tend to focus on those operational measures and start to forget about these impact measures, the human side of it. And so I really want us to think about that, because if we just think on the operational side, we run the risk of creating what I call job enlargement. We’re just doing more of the same, where it helped me write more reports, so now I’m just writing even more reports, as opposed to enrichment, which is: I’ve written a report faster, so now I can spend more time doing something else that’s more meaningful, that makes better use of my capability as a human being versus as a machine, a report writer.

Ashley Litzenberger:

You’re actually suggesting that in addition to the AI outputs, how quickly you can write the report or how quickly you can get to the end of a process, we should be tracking metrics that have nothing to do with AI, but with what AI should free up more time for the human to do. And that’s where you’re getting into the idea that we should be measuring hours spent in collaboration, innovation, or workshops, things that are not what the AI is doing, but what the AI has given the human more bandwidth to accomplish.

Jeff Jolton:

Exactly. Exactly. Because I think that’s supposed to be the promise of AI, right? The whole promise of the digital revolution was that digital technology would take on more of this low-status admin work. It’s going to do the status updates, the routine reports, the things that are boring or can be automated, and we should then be able to spend time on those higher-order things.

And the beauty of that is, if we follow that, we can have a much richer work experience. I mentioned enlargement and enrichment earlier, so maybe I can take a moment to define what that is.

Ashley Litzenberger:

Yeah, tell me a little bit more about that difference. What is job enlargement versus job enrichment, how does AI play into it, and how do you create the right balance so that employees actually feel like they’re benefiting from AI coming into the workplace?

Jeff Jolton:

Yeah, for certain. So first of all, this is an old concept. It goes back to the early days of the research around motivation and job satisfaction, the job characteristics model by Hackman and Oldham, if you really want to go into the archives of I/O psychology and all that.

But really, when we talk about job enlargement, it’s just adding to a job by doing more of the same thing. So if you’re working in a call center, that means doing more calls. Or if we’re writing reports, we have to write more reports. So we’re getting more of the same thing done. And what we find with enlargement is that it’s not particularly motivating or engaging. And if we enlarge the job too much, if we just pile on more of the same work, it actually starts to lead to burnout. You can imagine that in something like nursing: it becomes too much, right? It’s just too much of the same thing.

People like variety, people like being challenged, they like autonomy. And that’s where enrichment comes in. That’s where we are adding a certain value to the job, whether that’s autonomy or variety or some sense of ownership and purpose. When we tie a job to the vision, to the strategy, when we give people a sense of ownership and purpose, that’s when we start enriching the job. And when it gives them something new to work on, something challenging, something to build on, then it’s more exciting.

And you can think about your own work, what really engages you, right? It’s when your leader asks you to try and figure out something new or work on a challenging project. That’s enrichment. Doubling the number of reports you have to get out next week doesn’t really engage people; it doesn’t feel enriching. So that’s why I talk about enlargement and enrichment.

And when we focus on AI from that operational or efficiency perspective, we tend to run that risk of piling on, right? “Oh, I can now, instead of doing 20 calls an hour, I can get 40 or 50 calls processed in an hour.”

Ashley Litzenberger:

I was talking to a fellow head of marketing a couple of weeks ago. He was saying, “I’ve trialed AI in 12 different ways within our marketing team. I’ve used it to build case studies. I’ve used it to draft new pitch decks. I’ve used it to build new positioning and messaging or campaign strategies.” And his takeaway was, “We tried 12 things last quarter. Three of them worked really well, and nine of them created intense levels of frustration because they didn’t do the thing, and we spent a ton of time trying to figure out how to run that experiment. But it’s not that we shouldn’t have tested those things. It’s that we still needed to test them. Maybe some of them AI isn’t quite ready for, but maybe some of them we can take those learnings into next quarter and refine.”

But it goes back to this idea: you can’t expect efficiencies in the same timeframe in which you’re trying to encourage experimentation. Otherwise, you’re only ever going to get that first level of efficiency, where you’re like, “I know this will be a home run, and it’s the only thing I feel comfortable taking a bet on.”

Jeff Jolton:

Yeah, I think you’ll see a lot of research out there, a lot of data points that talk about CEOs and leaders being frustrated that they’re not getting this magnificent return on AI out of the gate. They feel like they’ve made these big investments and it’s not coming back. And I think part of the reason is that they expect everything they do to have a return. First of all, there’s a lot of junk out there. Everyone can say, “Hey, I’ve got a great, wonderful tool. It uses AI. It’s magic. It’s perfect.” So we’re still sifting through: “What does my business truly need? What does my job truly warrant to get out of this?” And then it still takes time to really figure out how to make it work to get that 15% or 30% or whatever it’s going to be down the road. So there is, I think, an overexpectation of what that return will be.

I think we’ll get there, but to your point, there is still a learning curve. There is still a comfort curve. And at the same time, the expectation of the day-to-day job has not changed. People still have to get things done. So they’re trying to incorporate a very fast-moving technology at the same time that they’re still trying to get the work done. And in most cases, how many organizations have truly taken the time to educate and develop people around AI?

I can’t tell you how many people I’ve talked to. It’s like, “Yeah, they told us that we need to do AI, and I’ve never touched it before. I don’t know what generative AI prompts are supposed to be, and I’m just supposed to figure it out.” And they’re not those natural early adopters. They’re people who are a little tense around technology: “Gee, I’m going to break ChatGPT,” right? You’re not going to break it, but they’re not comfortable with exploring. And so they need someone to say, “Hey, these are prompts that I developed, and this is how they work.”

We need to be realistic about the learning curve. We need to be realistic that not everyone is going to adopt the same way. And we need to be realistic that some things will fail and some things are going to be greatly successful. We’ll get there.

Ashley Litzenberger:

As AI is getting more integrated into the workforce, what are the new skills, mindsets, or culture expectations that companies and HR teams should be setting to help drive this adoption?

Jeff Jolton:

So at the broadest level, two things always come to my mind. Certainly adaptability: just being a lot more flexible, realizing that we are working at a different pace than before, and having a willingness to try new things and to be able to do something different. It sounds very obvious and very easy, but it’s not. It’s something that people still struggle with. So it’s learning that adaptability and what that means.

The other, and I feel like I highlight this every day on some LinkedIn post or another, is critical thinking. AI is a really powerful tool, but AI lies. AI is deceptive. AI is very much telling you what you want to hear. It is not a paragon of truth and accuracy and perfection.

Ashley Litzenberger:

It’s true. AI learns who you are, and it starts to feed you responses that it knows you’re going to like. And that is just true. You have to realize that it’s biasing toward what it thinks you already want to hear as opposed to what you actually need to learn. It’s not objective.

Jeff Jolton:

And you have to be mindful of that. It’s going to be very helpful, but you have to maintain a really critical, objective eye in working with it. And the paradox is, because of the way AI works, it’s very easy to actually move away from critical thinking, right? “Oh, it’s going to write this for me. It’s going to look this up for me.” And so it’s very easy to become complacent and move away from that critical evaluation. It’s actually where you really need to ramp up and dial that way up. And so the people who are going to be really important in the workplace are the ones who keep their foot on the gas with critical thinking.

If we go back over the last 40 years, critical thinking has always been at the top of the list for job success at all levels of the organization. I think now, with AI, it will become even more so, because those are the people who will be able to differentiate: is this accurate or not accurate, is this the right application or is this a better application, is this a good prompt versus that one. There’s subtlety in a prompt that can make it effective or not effective, and that comes from great critical reasoning. So at a broad level, I think it’s those two skills.

Now, at the leader level, I think we want leaders to become more data-centric. We’ve talked about this. I’m not saying leaders need to be data scientists. No. But what I mean is I want them to be data-centric: I want them to ask better questions. I want them to connect AI to value. So very much like we started this conversation, not just AI for efficiency or operational value, but where is this driving my strategy? Where is it closing the gap? How can I leverage other value for my organization? And then also to insist on cross-functional data governance, again, for that accuracy, making sure that we’re really getting the right information.

Remember, as we go through this revolution, we are also going to be importing and building data into our own AI systems. And so we have to remember AI consumes data, so what you put into it is going to be very important. Leaders need to make sure we’re feeding and developing good data. They have to be good consumers with really good data-centric minds, so they know what they’re bringing into their organization. Again, they’re not the scientists here, they’re not the [inaudible 00:17:41], I’m not asking that, but they need to be very mindful of it.

Ashley Litzenberger:

Humans, at the end of the day, are the ones who will direct AI, for now at least. And so making sure that you are equipping the humans to do it as well as possible is more critical than ever. But to your last point, if we just aim for efficiency, AI starts to take on the lowest level of effort, some of the things that used to be entry points into workforces or into career fields. How do you see the future of entry-level work evolving, especially as AI takes over some of those most basic things? How should HR organizations be thinking about bringing folks into the workforce and getting them the skill development they need? Is it changing? Is it not changing? What do we do?

Jeff Jolton:

Yeah. I mean, I definitely think it is changing. I know it sounds like I’m guilty of being a big Reddit reader, but you see it: people are getting really frustrated. They’re trying to find entry-level jobs, and it’s like everyone wants experience. Even at the entry level, they’re already looking for experience, and a lot of the entry-level work is going away.

I think that learning ladder needs to be redefined or protected in some way, and I think we need to unbundle the roles a little bit to help protect that. Early talent still needs some learn-by-doing reps. It may not be two years or a year like what we have historically done, but as AI takes on the basic tasks, I do think it is helpful for entry-level people to work alongside AI, and even maybe be part of training AI to do those tasks or improving AI’s ability to do them. And there should be some challenge gradients, for example, so they can shadow, then assist with AI, then start oversight, and then be able to move on to the higher levels.

So I think there just needs to be a realization that we can’t just jump ahead, because people are going to need that development, but it can be accelerated by AI. I think we can also take advantage of doing more job rotations. Part of the advantage is that because we can’t spend as much time on these low-level tasks, you could spend more time rotating across different roles to get a broader sense before you then move up into that more high-level position. So there’s an opportunity to get this early learning by doing in a broader sense, which historically we haven’t been great at, giving a broader perspective and a richer background that otherwise we would not have afforded our entry-level folks. So that’s something that really could be a very exciting opportunity that we could take advantage of.

And then, just like everything else, we need to start thinking about new metrics. In the first year or two years, what does success look like? Maybe it changes from what we have historically looked for in terms of time to independence. What are we looking for? The ability to go from learning from AI to training AI, or internal mobility, right? If we have the rotations, being able to be flexible, because we are also moving into more of a skills market. So it’s not about having the job, but the ability to move around skills, a broader range of skills. So that could also be something we’re tracking: I have a broader range of skills now, and I can keep expanding on that, which is another benefit. “Well, I can use that AI time to develop a broader range of skills than I would otherwise have been afforded.” So those are just some things off the top of my mind.

Ashley Litzenberger:

Yeah, I think that’s great. I love the idea of rejiggering entry-level work to be a little bit more about getting to do internships in different areas and getting that more holistic view early. Because if we are moving into a place where we tend to be more agent managers or agent trainers, learning how different teams need to train different agents in different ways might actually help you do better wherever it is that you end up landing. And that is a very transferable skill. And so there could be a world where we actually get more generalists through different exposures.

And I love the idea of actually taking this opportunity of transformation or what is it? I can’t remember the right word, but a moment where things are fundamentally shifting and breaking a little bit. Let’s use it to actually reform entry-level work in a way that is more valuable for the employee and ultimately more valuable for the business.

Jeff Jolton:

Right. Right. We’re unfreezing, so let’s take advantage of the fact that we’re at a point where maybe things are unfrozen. If we’re in this unfrozen state, we really can take advantage of it and move into something new, but if we don’t, we kind of refreeze even harder into old ways. And so this is a great opportunity to really move into, “Hey, let’s level up.” Because if we’re not leveling up the skills, if we’re not leveling up the way we’re thinking about what we’re measuring in performance, then what’s going to happen, to your point about the marketing job, is we’re going to double down. It’s going to be really mundane, we’re just pushing buttons on the prompt machine, and it’s not going to be a lot of fun.

Ashley Litzenberger:

And that’s already becoming an issue. We already hear a lot of feedback that all of these blog posts are written by AI agents, or there are signs you can tell it’s a bot on LinkedIn instead of a human. To that extent, you still need a human in there to make things look different, feel different, still feel valuable, still feel resonant. So there is actually a net negative when you overly automate something: it just feels like fluff or noise instead of something functional. So there are a lot of reasons to keep us away from just driving toward efficiency, and to focus instead on how AI will deliver enrichment.

Jeff Jolton:

I will say, in my own experience, I love using AI to read something I’ve written and just say what key points it pulls out or what it might change. And it tends to take the really unique data points, the things I think are really unique highlights, and strip them out, because it’s not trained to look for those, right? It’s looking for the norm. That’s where the human part comes in.

Ashley Litzenberger:

Last question for you. We’ve been talking about efficiency versus enrichment, and you’ve mentioned bringing AI into your own processes. Give me a couple of examples of how you’ve brought AI into your work in a way that’s led to professional enrichment as well.

Jeff Jolton:

So a lot of the work I do sits at an intersection of data, talent strategy, and effectiveness. So I work a lot with data across these different things. Sometimes I’m trying to figure something out from a statistics perspective, and I don’t necessarily have a lot of peers who are in that same space. I’m kind of in my own world here. And so AI is sometimes a really great sounding board. I can say, “I am considering this or this, what do you think the pros and cons are?” And so it becomes a really great sparring partner. And I can sometimes push it a little bit, like, “Well, really?” Again, sometimes I catch it lying, and it’s like, “That’s not true. That can’t possibly be. You’re missing this part.”

And it always comes back and says, “Oh yeah, you’re right. This is right.”

So it’s not flawless, but it has been really great to have something that can help me challenge my own thinking. And don’t forget, it can help me write the odd bit of code, the SPSS or AMOS code. Man, that’s just great. It gets me started. It gets me 50% there, and then I can finesse it. That’s a great time saver and a great partnership. That’s something I really love.

And then, I write very quickly. I really enjoy the process of writing, but I am also really self-critical. I never trust anything I write. And so it’s nice to have something to ask, “Well, how would this be perceived? What are the key points coming out of it?” It does a good job of summarizing, and then I can see: is this going to be too wordy for a CEO audience? Is it going to pick up the key points? And as I said, I can see where it washes something out, but it also helps me know when I need to punch something up. So I don’t want it to write for me, but it certainly is a really good editor or reviewer. And I find that really great compared to having to bother someone all the time: “Hey, can you read my paper? Can you read my paper?” So those have been two really good examples of how it saves me time, enhances my work, and makes things a little more effective for me.

Ashley Litzenberger:

I love that. That is great. I use it a lot in the same way. I have ADHD, and executive functioning is a little challenging for me. And so whenever I have to write something, I’ve now been able to shift where I put the majority of my mindset: thinking about how I want to frame this, how I want to talk about it, how I want to introduce it. But I don’t sweat the attention to detail, the typos. When I write, sometimes I miss words; it’s just one of my challenges. But I find now I’m able to think more about how to improve the structure or quality of a piece to be even clearer to the audience. And the time and mental power I used to spend working on my weakness, which is attention to detail, I can now pass off to ChatGPT.

So it’s an exact example of what you were saying, where AI actually helps you with efficiencies. It will catch every typo. It will help create consistent tone. It will help me move things out of passive voice into active voice. And that lets me produce a higher-quality piece of content at the end of the day. So I’m still producing that one piece, but it’s so much better than if I had created the same piece using only my own mental capacity at every step of the way.

Jeff Jolton:

Then you’re spending more time on what you want to say rather than how you’re saying it, which is always great.

Ashley Litzenberger:

Or getting to look at, “Ooh, I want to…” In marketing, I get to create new web pages, and instead of just thinking, “Oh my gosh, I have three hours to create a new web page, let me just focus on copy updates,” I can actually ask, “If I completely re-approached this, what would it look like?” And it gives me that creation time, which I really appreciate. And that, again, like you said, is giving more time back to the human to do what the human does best.

Jeff Jolton:

Exactly. Beautiful. Well said.

Ashley Litzenberger:

Yeah.

Well, I think we are right at time. Thank you so much for joining us today, Jeff. This has been one of my favorite conversations I’ve had on the People Fundamentals podcast. I really appreciate you taking the time to come and talk about all things AI today.

Jeff Jolton:

It’s been my pleasure. This has been a lot of fun for me as well. The time has certainly flown by.

Ashley Litzenberger:

As we wrap up our conversation with Jeff, let’s focus on a few key takeaways. First, AI should enrich work, not just enlarge it. And leaders who focus on creating opportunities for innovation, collaboration, and development will build more engaged teams.

Michelle Gouldsberry:

And second, adoption takes time. Organizations need to create space for experimentation and learning, allowing employees to gain confidence as they build new skills.

Ashley Litzenberger:

And third, people remain at the center of this transformation. Critical thinking, adaptability, and reimagined pathways for entry-level employees are key to making AI adoption sustainable.

Michelle Gouldsberry:

What I loved most about this conversation is how Jeff connected these ideas back to leadership practices. Leaders cannot simply layer AI on top of existing workflows and expect results. They need to think about how jobs are designed, how employees are trained, and how culture supports continuous development. That means investing in skill building, encouraging curiosity, and making sure people have the freedom to use AI tools in ways that spark creativity rather than just speed.

Ashley Litzenberger:

If you put enrichment, adaptability, and human-centered design at the heart of your AI strategy, you’ll not only improve performance, but create workplaces where employees feel valued and motivated, and that is what drives long-term success.

Michelle Gouldsberry:

Be sure to stay tuned for our next episode of the People Fundamentals podcast. Subscribe to us on Apple Podcasts, Spotify, or YouTube Music. And if you like what you hear, share us with your friends and colleagues. We’ll see you again soon.
