
No One to Call featuring Tim McElgunn

Released on FEBRUARY 9, 2024

Great comedy requires keen observation and analytical skills. One of the best at observing and analyzing human behavior was George Carlin. In this bit from 1976, he takes aim at our behaviors with the telephone. Of course, telephone technology has come further than either Alexander Graham Bell or George Carlin could have anticipated. Today, phones are indeed everywhere, all the way from Orlando to Albuquerque.

The kind of observation and analytical skills needed for comedy is much like what's needed to provide insight into an industry like the Contact Center industry. Tim McElgunn has made a career of analyzing technology solutions and strategies. Contact Centers are often the first to take advantage of new technologies, and Generative AI is no exception. But good analysis considers the risks, not just the potential gains.

We discuss:

  • The risks companies face when adopting Generative AI
  • The challenges for employees
  • Data poisoning and security concerns
  • Dangers for customers
  • Opportunities where AI can make positive impacts
  • Change management and accountability

Connect with Tim on LinkedIn

Music courtesy of Big Red Horse

Transcript

Rob Dwyer (00:01.294)
Tim McElgunn, you are next in queue. How are you today, sir?

Tim McElgunn (00:06.697)
I'm doing well and I'm really happy to be here. Thanks for inviting me.

Rob Dwyer (00:10.23)
Well, I am excited to have you on the show. I don't know that I will have an episode that matters more in the first few months of 2024 than the episode that we're about to do. And I think that maybe people are getting...

tired of hearing about AI, but it's not going away anytime soon. And you and I are going to talk about, it's for sure, we're going to talk about some of the risks involved with companies that are rushing out for that shiny new thing and what kinds of things they need to be aware of.

Tim McElgunn (00:44.492)
That's for sure

Tim McElgunn (00:58.189)
Mm-hmm.

Rob Dwyer (01:04.797)
and how they can protect themselves, their employees, and their customers.

Tim McElgunn (01:10.873)
Yeah, a lot to talk about there for sure. And you know, I mean, a lot of this is not new in the sense that every time a transformational technology has come along, there's been some of these same risks. I just think that the power of these solutions is such, and the shininess of the new thing in this case is at such a level that I think...

that we really, really need to be careful to, as always, bring it back to first principles. Like, what are we actually trying to accomplish with all this technology?

Rob Dwyer (01:50.047)
For those that are not familiar with you, tell us a little bit about you.

Tim McElgunn (01:56.293)
Sure, so I've spent my career as an industry analyst, covering a pretty vast variety of topics. Started out my career looking at telecoms, specifically broadband back in the early days when that was the shiny new thing that people were trying to figure out how to make a buck employing it in the enterprise. And vendors were trying to figure out a way to make a buck selling it.

So that's always sort of the question that's there. I've spent some time looking at HR and benefits solutions and strategies. And then most recently, I was a principal analyst at HDI and ICMI looking at all things contact center and IT support and service management related. So that's my most recent gig and certainly the one that applies most directly here.

Rob Dwyer (02:46.966)
Yeah, it certainly has given you a lot of insight into what companies are thinking about, how they're approaching AI. And we have seen a lot of companies really dive headlong into putting AI front and center and engaging with their customers. So what I wanna do is kind of dig into

Tim McElgunn (02:56.131)
Mm-hmm.

Rob Dwyer (03:16.962)
some of the risks that companies are exposing themselves to. We're gonna dig into employees and into customers, but let's talk about the company and potentially reputation and those kinds of things. What are some of the things that you see as pitfalls that companies need to be wary of?

Tim McElgunn (03:24.849)
sure.

Tim McElgunn (03:40.917)
And you know, some of this stuff right now, it kind of comes off as funny, right? So there's the Chevy dealership where the chatbot, you know, ended up offering a guy, you know, a multi-tens-of-thousands-of-dollars vehicle for a single dollar. It seems funny, but it's not funny, because even after the fact, they fixed it, they figured it out. You know, we've seen this sort of thing before where, you know, through some, call it a paperwork error or whatever, somebody gets a deal that's obviously too good to be true, and

with very few exceptions, those don't end up actually taking place. Right. So we can kind of laugh at that. But the fact of the matter is that company, the dealership, had to spend a load of effort and money. They lost time, they lost reputation. And now they'll forever be known sort of as the one-dollar dealership. And that is, you know, in that situation, probably something that you can overcome over time, especially if you're

doing a good job of training your human customer service reps so that they can respond to this sort of event effectively. But yeah, that kind of reputation, reputational damage takes a while to clear up. And as I said, it's expensive. It's not like, you know, you just, they had to take that thing offline. They had to rejigger it. They had to figure out what went wrong with it. And then they had to put it back up and test it and hope for, to some degree, hope for the best.

I think it also points out, and I don't know this for sure because I haven't seen kind of behind the curtain in terms of what happened in that particular case, but I suspect they didn't really have the expertise in-house to build and manage the technology that they deployed. And I think, you know, you take that from a single dealership and now you start to apply that across thousands of contact center seats, multiple product lines that are being supported in those contact centers.

and you start to get a feel for the potential impact of making an error in deploying these things.

Rob Dwyer (05:46.518)
Yeah, it reminds me, and this happened to me literally yesterday, I was dealing with a SaaS company and I had a problem, right? I reached out and, of course, the chatbot is the first thing that they've got. I knew who was providing the chatbot, and I went to their website. And, of course, it is an AI enabled chatbot. By the time that I was done, I was

Tim McElgunn (05:52.525)
Oh, OK.

Rob Dwyer (06:15.874)
done with that chatbot and I wasn't really trying, right? I mean, I had a legitimate problem, but the chatbot had given me a credit for 10% of what I had already spent with them. And I was thinking at the time, like, I wonder how far this will go. Like, what's the limit? What are the guardrails? What I had spent with this...

a company was all of $12, right? So the chatbot had given me $1.20 worth of extra value, which, you know, not a big deal until you start multiplying that across tens of thousands of potential users. And some people will go, oh, well, if I can get 10%, can I get 20? Can I get 30? Can I get 40? And it becomes this slippery slope.

And if you don't have the right guardrails in place, you can actually just give away the farm.

Tim McElgunn (07:19.381)
Right, and that's, I think, really important because

Tim McElgunn (07:25.349)
Contact Center leadership is always going to talk about CSAT and all of those things, as if that's the holy grail. But let's be honest, for most contact centers, that is not the holy grail. And I'll actually use an example from IT support. IT support should never get an excellent rating. It's not what's intended to happen. What's intended to happen is that it does its job. It accomplishes the goal.

So I think we really need to be very careful about what is the goal. And for contact center leadership and leadership in the larger organization, the vendors are out there telling them, hey, we've got AI. We're bringing all this power to the party, and you're going to be able to do all this great stuff with it. And it sounds fantastic, and it sounds like, okay, hey, you know, we're going to be able to not only reduce costs, but we're also going to be able to increase CSAT. Well,

You know, that still remains a difficult needle to thread, let's call it, trying to do both those things at the same time. And I think if you take your eye off the ball of which of those efforts is really at the core of what you're trying to do, now you're implementing either the wrong sorts of AI or you're implementing AI at the wrong, you know, kind of junction points in your processes.

And in a lot of cases, what you're not doing is you're not doing the core work that needs to be accomplished before any of these tools, whether they're AI-powered or, you know, more legacy traditional tools, can do what you're hoping they'll do. You know, you've got to do that grunt work up front.

Rob Dwyer (09:05.614)
Yeah, absolutely. And I love that, to your point, AI itself is not inherently good or bad. It's a tool just like anything else. It can be used effectively. It's all about where we apply it and what we're trying to accomplish when we do that. Let's transition a little bit. And obviously,

brands can potentially experience some big challenges. Let's talk about the employees. You just talked about doing the grunt work and really equipping our people internally. But what are some of the challenges with AI as it relates to employees of the companies that are adopting this?

Tim McElgunn (10:00.053)
Yeah, I think that, you know, it's a pretty vast spectrum of potential issues that come up. I mean, just as vast as the potential benefits this technology can bring. So first of all, you've got just human concern, right? What's happening to my job? You know, am I training an AI that's just going to replace me? You know, what does all this stuff mean for me? Secondarily, you know, especially at this stage, we can't pretend that these are

perfect tools, right? We hear about all these crazy hallucinations. We hear about the errors such as the ones that we just discussed. Well, now you've got an employee, a human employee, who is being told that this is your coworker. This is your tool. This is what you're gonna lean on to improve your performance. But they're not sure they can trust it, which means they go into their days nervous. They go into their days without a sense of confidence. And in many cases,

you know, maybe they're not learning the core business, the core of their business. They're not building the skills that make for a fantastic contact center agent, because some of that stuff has been taken on by the AI. But, you know, so, and I think we've talked about this in, you know, many other venues, but an unconfident employee, an unhappy employee,

whether they're backed up by the most expensive, shiniest tool available or not, is not going to deliver excellent customer service. And the other piece of that is that that's part of what these systems learn from. And if they're learning from agents that are not doing a good job, that's expensive. It can be overcome obviously over time. These things are smart. What they really bring to the party is massive horsepower. They're just really, really fast.

So all this stuff will be addressed eventually, but there will be damage done. And you talked a little bit, you brought up the sort of the institutional knowledge that really powers excellent customer service, customer experiences. It comes from people who have learned from dealing directly with customers and have gotten really good at their job. Contact centers have always had an issue with turnover, massive turnover. So this is not something new.

Tim McElgunn (12:24.845)
But what I think is going to happen is you're going to have people leaving sooner if they feel, again, like they're uncertain, like the tools that they've been told they're going to use are not confidence-building for them. And then the other piece of this, and this is not really from the employee, but more from the employer perspective, the folks who get really, really good at this are going to get very expensive. And when that happens...

the cost saving aspect of this exercise starts to get really shaky.

Rob Dwyer (12:56.662)
Yeah, very interesting. I wonder what your thoughts are on, obviously, a lot of the use cases around AI interacting with customers has to do with deflection. So we're talking about chatbots. We're increasingly talking about what are known as virtual agents. So if you think about the IVR where it says, well, I think of the

Tim McElgunn (13:11.077)
Hmm.

Rob Dwyer (13:25.478)
scene in Seinfeld where Kramer is answering Moviefone. And he goes, why don't you just tell me the movie you want to see, right? But those types of interactions, to some extent, can be handled by AI and sound very much like a human. So we start taking some of the low-hanging fruit, the easier to solve issues, off the table.

and we're automating those, and we're leaving the more complicated challenges for the humans. What impact do you think that has, particularly in the contact center environment?

Tim McElgunn (14:10.161)
I think there's a couple of things. You know, there's a reason that folks are making money positioning themselves as prompt coaches, right? Actually, how you interact with these devices or these systems impacts very directly the quality of the output that you get. Well, you can't expect everybody who's calling into your center or contacting your center online to actually understand the best way

to put the question or express their issue, describe what they're really dealing with. And these things don't think, they basically recognize patterns in words, and then they pull out what they think is the next best response. So I think we're gonna see situations where not unlike the IVR, and I am probably one of the biggest haters of IVR on the planet, I've just had some...

some crazy experiences over the years with IVRs. Those mistakes or those lessons have been learned by the folks who are integrating AI into contact center solutions, obviously, because they were at the brunt of all that stuff. But I think that that's a big part of it, is that these things are going to be dependent to some degree on input, right? Garbage in, garbage out, and it's not like your customers are trying to trip it up. Well,

some of your customers are, but it's not like all of your customers are trying to trip up the system. And the trouble is, if you've got a customer who comes in and does a lousy job of presenting what their problem is, but the response comes back and it seems really confident and it seems to make sense, and I don't know enough to make the call, is this correct or is this not correct? Where's the guardrail on that? The guardrail on that, and we've talked about this in this context as well,

using your customers as quality control is a really bad idea. And I think in this situation, for some time, we are going to be using customers as quality control because it's only when the solution blows up that we're going to hear about it. And that also has implications for the cost and the complexity of implementing these systems. Because when those problems show up, a human being

Tim McElgunn (16:29.665)
or probably a team of human beings, is going to have to go in there and do sort of a post-mortem and figure out what really went wrong and what do we actually fix. And even if you talk to the vendors who are really doing what seems like a really good job in thinking this stuff through, and certainly at least their marketing positions it that way, there's a black box element to this, even for them, that makes it really, really tough to go in and fix a problem and understand

where does that problem really derive from? And I think that that's gonna trip some people up.

Rob Dwyer (17:09.154)
So you talk about the black box aspect of this. And one of the things that I think is both interesting and frightening is the idea of data poisoning, which could come from the employee side. If I get an employee who is really upset, you talked about, am I training this AI to take my job? You may have some employees who are

taking a vindictive approach to that and trying to poison the well. Can you talk a little bit about that?

Tim McElgunn (17:47.681)
Yes, I'm not a deep data security guy. Certainly, anybody who looks at these topics has to be aware of the reality. But from what I do know, the phone call is most often coming from inside the house. Data security issues, data breaches, a lot of that stuff happens either because of human error, lack of attention on the part of an employee,

or in a lot of cases, you know, as the effect of a disgruntled employee. And something I think we need to always remember is that these tools are not restricted to good guys, right? The same power that we're hoping to apply to our businesses and improving the customer experience and increasing our sales and all the stuff that, you know, is the reason to make an investment in any technology.

Rob Dwyer (18:28.727)
Yeah.

Tim McElgunn (18:40.193)
All of that is available to folks who really just want to mess with us as well. So I think, you know, we need to be cognizant of that, that we are dealing with, you know, an arms race to some degree. We've seen it. We've seen it in the past where, you know, organized crime is among the earliest investors in advanced technology. It's not going to be any different here. So, you know, I sometimes think about the example of Uber and I don't know if this is still true, but it certainly was true for a while. The Uber model,

Rob Dwyer (18:44.707)
Yeah.

Tim McElgunn (19:09.913)
basically said, hey, we get rid of all these cab drivers and we replace them with contract workers and some AI. What you end up doing is you end up replacing one cab driver, relatively inexpensive, with a couple of really high priced computer scientists. So that model just doesn't work and we need to be really careful, I think contact centers need to be really careful that we're not replacing a bunch of comparatively inexpensive human beings who are doing a decent job.

with much more expensive computer scientists that are required to do the care and feeding of these systems.

Rob Dwyer (19:50.634)
Yeah, I do think it's important that people recognize that when you are dealing with AI, there is not a lot that you can do just off the shelf. There is a level of training that is required to fit your specific business needs. Yes, we can all go out and we can play with ChatGPT or Bard or the likes of

those solutions and they can be really fun, they can be interesting, they can provide surprising results that make us ooh and ah. But when it comes to being focused on your business and providing reasonable, actionable, and accurate information, you need to work with it for a while to get it to where it needs to be.

And that is in and of itself a job, just to get there.

Tim McElgunn (20:57.733)
It is. Yeah, and this is something I've spoken with vendors about. I understand that you've been doing this for a long time. We'll say contact center solution vendors. I know that you understand the ins and outs of what a contact center does, but do you know exactly what that contact center needs in terms of its business? Do you understand how it interacts with the rest of the business? Do you understand in the

ITSM context, IT service management context, ITIL 4, whatever, the value stream, how it fits into the bigger picture. And if you don't, well, that requires a whole kind of sales consulting staff that needs to be added on top of the work that you've already been doing to sell and maintain these systems with your clients. So I think that, you know, what that really...

speaks to in the real world is that anybody who's looking to deploy this stuff needs to talk to their vendors about how well do you understand my business. Because like any other tool, you can automate anything. You can automate a broken process. And God knows that that's happened over and over and over again in the history of technology. And it's happened in the case of AI as well, certainly many times, more than we even know about. There are a lot of broken AI initiatives out there.

and they're hugely expensive, especially at this point in the game. So I think that aspect of understanding, not just what does a contact center do, because that's kind of not a realistic question. What does this contact center do?

Rob Dwyer (22:40.096)
Hehehe

Yeah, it reminds me a little bit of the BPO approach when you're working with another business that is looking to outsource some of the functions, often within the contact center, although there are a lot of other functions that BPOs can provide. And one of the things that those people who are vetting BPOs often want to know is, what's your experience, as a BPO,

in my business sphere, not necessarily with my specific business, but have you done this type of work before? And that could be, have you done this type of work for a business that does what we do? Or it could also be, have you supported this channel and the unique things that go along with that? So for instance, right, there's a big difference between answering the phones

and dealing with physical US mail. Those are two different contact channels that require totally different infrastructure to deal with and experience to deal with. And as you talk about bringing AI into a business, it's some of the same kinds of questions that you're going to need to ask a vendor.

Tim McElgunn (23:50.395)
Mm-hmm.

Tim McElgunn (24:07.413)
Yeah, and I think it's really important to understand also that if we've gone through the multi-channel, the omni-channel, all these things, and we talk about what does that mean for agents, and the really great agents can handle X number of simultaneous sessions and blah, blah. I would say, in my experience, when I know that somebody is handling multiple sessions, you can tell, right? You can tell, you're on that chat, and that's with a human being.

And you can sort of feel that delay and you know that you are not the only focus for that particular agent. So I think that that's one of the things that needs to be kept in mind, is that people are, I think, more aware, even if it's at some sort of like subliminal level, they're pretty aware of what they're dealing with when they interact with the contact center. And I think now one of the dangers is...

because these things are really, really good at imitating human beings, there's going to be situations where you think you're dealing with a human being, and at some point it becomes revealed that you're not. And I think that is going to be a negative experience for a whole lot of people, and one that really needs to be kept in mind, because then we talk about, oh, this stuff's gonna get handed off to a human when it gets complicated. Well, that's where it gets complicated, and now suddenly complicated is negative. So the folks that you're...

you know, using as your tier, whatever it's going to be called at this point, your next tier up, they're going to be handling not only difficult, complex situations, they're going to be having difficult conversations. And I think that has a real impact on stress, on employee satisfaction and all the things that go into, I think, making for a successful contact center, especially for higher value goods and services.

Rob Dwyer (26:04.61)
Well, you've kind of led us down this path about customers and maybe some of the frustration and negative experience that it could potentially create. Let's talk about some of the potential dangers that lie ahead for the customer. Because ultimately, that's who we're trying to serve with this technology. Where can that go wrong?

Tim McElgunn (26:32.089)
Well, and again, I'm not a computer scientist, and I may be a little bit behind the times on some of this stuff. Nonetheless, what is training data in this context? Now, you talk to the vendors, and they're like, hey, this is all anonymized. It's from decades of doing this work. There's no danger there. Well, I'm sorry. That's just not true. There's always going to be danger in terms of the training data that you're using.

And it's a privacy issue. And I think, you know, you look at GDPR, especially the European regulations around data privacy and data agency, which is not a bad way to look at it, right? How much agency do I have over my data, my, you know, the story of my life, so to speak? So if anything happens and that data is exposed, well, we know what that looks like. You know, I mean, I get probably a couple, three envelopes a year.

saying, hey, your data may have been exposed and we're gonna give you a year of free credit checks. And, you know, well, now we're talking about AI and it gets that much more complex, that much more potentially catastrophic, I think, for people. So I think that is a huge concern and one that, again, contact center leadership needs to discuss with their vendors very, very forthrightly. The other thing that has come to mind for me, especially when looking at things like

intellectual property, copyright, that sort of stuff. We're seeing it with the ChatGPTs and the Bards and all the rest of these large language models that are out there and they're saying, no, we're not violating copyright with this work that we're doing. But wait a second, you're scraping all the human knowledge on the web and you're using that to make your product with. How are you not potentially...

you know, violating existing or in some cases, probably developing regulatory and legal structures. The question is how far down the supply chain does that trickle? In other words, if I use your system and your system's done something that is found to be illegal, et cetera, I've been using it, my customers have been interacting with this system, what is my liability?

Tim McElgunn (28:55.401)
And much smarter legal minds than mine are looking at this stuff every day. But I think that, again, this isn't plug and play. It's not set it and forget it. There are issues around this that are going to be, in some cases, probably business-survival threatening if they are not foreseen and headed off. And right now there's just a little too much sort of, let's stick it in there and see what happens. It's bright and shiny. The CEO says, what are we doing in AI? So we're gonna do something in AI.

And those kinds of mistakes may well crop up.

Rob Dwyer (29:29.162)
Yeah, I do worry that we are very much in a time where we're moving fast and loose and hoping to ask for forgiveness later as it relates to legislation. And we'll see in the long term how that plays out. There have already been some legislative actions when it comes to biometric data, particularly in the state of Illinois. Those people who listen to the show may remember

the episode that I did with John Walter when we explored that. But you talked before, too, about this idea of personal data. And I go back again to the black box nature of some of these systems. And that black box nature doesn't facilitate

the ability to cleanse data in the way that a standard database would. So if I know, and you brought up GDPR, right? If I get a request to remove personal data pursuant to GDPR, I need to know where all of that data is stored. I need to be able to remove all of that data and provide validation that I've done so.

If I have pulled in that data with my LLM-enabled chatbot, I don't reasonably believe that you can delete that data because of the black box nature.

Tim McElgunn (31:00.835)
Well, yeah.

Tim McElgunn (31:20.325)
Well, I think that that's very true. And there's another piece of it. We don't really know what personal data is at this point. Is it everything that a chatbot learns and therefore modifies the way that it behaves? Is that information about my activity, my sentiment, sentiment analysis, all of these things that are very powerful tools that these AI systems are enabling? How much of that is considered, could be

Rob Dwyer (31:27.412)
Mmm.

Tim McElgunn (31:49.233)
considered personal data? There's a lot of law around this that still, in my view, remains very unsettled. Some people say, hey, copyright handles it, GDPR, current data privacy regulations. They just need to be applied. And I'm not 100% convinced that that's true. And I think that especially large corporations are gonna be spending a whole lot of money on lawyers

just to make sure that they understand what all that stuff is, what is consent? And one of the things that we don't think about is that every single interaction now creates more data. So we've already seen the issue with dark data, with siloed data, with data that we don't understand where it is or what we're doing with it. Data that's just kind of lying around, almost like the warehouse at the end of Indiana Jones, the boxes upon boxes disappearing off into the distance.

Rob Dwyer (32:45.331)
Hehehehehehe

Tim McElgunn (32:48.261)
Well, that much data is being created kind of on an hourly basis now. So there's just, you know, the challenge of actually tracking and knowing where that is and knowing whether it's personally identifiable and how personally identifiable and what is, you know, what have I agreed to, what have I not agreed to? All of that stuff remains out there. And I think for contact centers, it is just by the nature of the work, we are constantly asking folks for their information because that's the only way

you can provide a level of service that they're expecting. So it's sort of just baked into the model that you're gonna have some of these issues. And I'm not saying that they're all gonna be problematic. I'm saying you better know whether or not they are as you're building your strategy around these tools.

Rob Dwyer (33:36.534)
Yeah. So we're talking a lot about risks. Let me flip the coin on you. Where do you see really good opportunities to leverage AI within?

Tim McElgunn (33:41.513)
Yes.

Tim McElgunn (33:53.401)
You know, I think the conversational AI stuff, and again, there's all these caveats, and we've just been spending half an hour talking about those caveats, but yeah. The shiny, beautiful future that the vendors are describing is not fantasy. It's not being made up out of whole cloth. These are functionalities that, if implemented well, are going to make a huge difference.

you're going to be able to understand what folks want more quickly, more accurately, and surface, hopefully, the best possible response to their requirement in real time. Real time is where we're not there yet, I think. It's pretty close. It's fast. But if you even look at things like transcription, meeting transcription, right, every meeting system you use now will supply you with a transcription. Well, I've looked at some of those transcriptions. And like many, many...

transcriptions, they require hours of human work to figure out what's really there. So there's work to be done there. But I think that ability to capture, collate, and then take action based on the information that comes out of sentiment analysis, out of voice, out of all the things that these pattern recognition machines, which is really what they are. They're just

really, really fast pattern recognition machines. All of that is gonna be extremely valuable to contact centers. But again, I'll go back to a very early AI project that I'm aware of. A guy that I did some ghostwriting for was sort of a pioneer in AI. And he and his very high-powered PhD teams went off to India and they were working with a company. I believe they did things like...

parse resumes, right, which now of course is almost the standard operating use case for many, many AIs. But this goes back to sort of the dawn of things. And so they spent months and they analyzed everything and they came back, they did the business case and they came back and they said, listen, you're at like 75% accuracy. Your business model doesn't require anything more than that. So basically,

Tim McElgunn (36:18.437)
our recommendation is that you fire us. So they'd spent all this money, they'd spent all this time, and at the end of the day, the business case wasn't there for that particular solution. And I think that is a pretty good cautionary tale for anything around this AI. And it goes back to things that we talked about at the beginning of the discussion. Do you know what you're trying to accomplish with these or any other technology tools?

Rob Dwyer (36:49.726)
Yeah, it's really important that you have a clear picture of what end result you're trying to get to and what specific milestones are indicators of success for you. Because if you don't do that at the beginning, you're really just spending money and hoping something good will come out of it.

Tim McElgunn (37:19.285)
Yeah, and even separate from AI. I mean, we see this all the time with KPIs and with metrics and this, hey, why are you measuring that? Well, because when we installed this system, 20, I won't say 25 years ago, five years ago, these were the metrics that made sense for our business. So yeah, we just keep on measuring them and we keep on rewarding our employees based on these metrics.

And at the end of the day, they have nothing to do with our strategic competitive advantage. So, I think that becomes even more accelerated probably as you start to use some of these really, really fast tools. Excuse me. I did want to go back, you know, briefly you were talking about, you asked me about the sort of the positive side of this stuff. Well, again, when these systems work well, when they're implemented well, when they're focused on the right outcomes.

You know, we talked about the difference between a confident agent and an unconfident, unhappy agent. When these tools work well, they are going to make agents feel very, very confident. They're going to know that they are getting, you know, they're going to at least think that they are getting the best possible response to share with this particular consumer, customer, business partner, whatever it happens to be.

And that is going to make them feel great about the work they're doing. And, you know, we know what that means. A happy, engaged, excited employee makes for a happy, engaged, and excited customer. So I think that that kind of, you know, in-ear coaching, as people get used to it, and as the results are shown to be really positive, that's going to be, you know, a huge, huge benefit to the contact center

industry as a whole.

Rob Dwyer (39:18.254)
I believe you're right. And I also believe that change management process is going to be the hardest part. Because, I remember, it's been probably a year ago, I had Serial on from XFIND, and we were talking about his solution and one of the biggest challenges. And this isn't a whisper in your ear, right? Because this is non-voice support.

Tim McElgunn (39:46.043)
Mm-hmm.

Rob Dwyer (39:49.67)
And the idea was that his solution is pulling in the best knowledge base articles for the particular issue that a customer is having. And he said the biggest challenge was the agent confidence that they were getting the right information, even though that was the whole point of the solution, to get you the best answer. There is this gap of confidence between the way

I as an agent have previously gone about my process to find what I believe is the best answer and this new process where the answer is being served to me. And I think solving that change management process is going to be the biggest hurdle and it's a hurdle that requires great leadership and great people who understand

Tim McElgunn (40:25.759)
Mm-hmm.

Tim McElgunn (40:45.233)
Hmm.

Rob Dwyer (40:48.056)
the psychology behind that.

Tim McElgunn (40:50.829)
Yeah, because it is one thing to sort of have gone through the process and come out the other end and said, OK, I feel confident in delivering this solution, this answer to my customer. There's another where it just pops out, you know, sort of pops out of the toaster. Here's your answer. I don't see the process. I don't understand what the, you know, the inputs are. I don't understand what the, you know, sort of historical background to this thing is.

And if one of those goes sour for any agent, now, first of all, who do you blame? Who's gonna get the blame upfront? Because you look at the metrics and you see somebody who's really dissatisfied and you track that back to agent X and agent X now needs to explain themselves. Well, a couple of those, and again, you lose trust. And I think that trust in this technology is gonna be more of an issue than it has in many others just because of the apparent

sort of humanity and the apparent intelligence in these systems.

Rob Dwyer (41:57.058)
Yeah, I think that leads me to think about the one thing that I don't feel like we have a good handle on either, which is the accountability piece. Who's accountable when we provide the wrong information, the wrong solution, we give a refund in a situation where we shouldn't give a refund, all of these potential things that we

may be looking at automating through the use of AI, who ultimately is accountable? And I don't have a good answer for that. I don't know what your thoughts are on that, but I have a pretty good feeling.

Tim McElgunn (42:40.149)
Well, you know, a bad boss with a powerful tool is just a more powerful bad boss. So there's that aspect of it for sure. And let's face it, contact centers are not necessarily the most warm and fuzzy parts of a business. And there's all the reasons for that. There's the turnover. There's the, again, depending on your business model, it's one thing if you're, you know, Coach handbags,

the level of support that you're going to be expected to deliver, and the training you're going to give those folks, and all the rest of it is going to be so high. But, you know, if it's sort of like a quick and easy, what's wrong with my PC kind of call center, you're not going to have that same level of

Tim McElgunn (43:30.565)
capability, of experience and ability to deliver that level of service.

Tim McElgunn (43:40.161)
So, and in those situations, there's a lot less investment in the people being treated properly. And so if these devices, if these, device is not the right word, if these technologies are used to excuse the company for the decisions and the processes and the, you know, the things that it's put in place, and, you know, we talked about diversion. Well, if you're diverting blame,

Now, you know, that becomes really, really unhappy for the folks that are still working for a company like that. And I think that will be, you know, there's just, by the nature of the contact center industry, there is a lot of, you know, very people-heavy, but very low-cost kind of models. And you know, if that starts to drive turnover even faster before we get to the point where these

technologies can take over that much of the work, that's gonna have a real impact on people's ability to stay in business, I think.

Rob Dwyer (44:46.154)
Yeah. Well, Tim, I am so honored to have you on the show. Thank you so much for joining. If people want to get in touch with you, we'll definitely put your LinkedIn information in the show notes. Is there another way that's also good for you?

Tim McElgunn (45:01.585)
That would be great, yep.

Yep. So tmcelgunn at gmail.com will do the job right now. It will certainly get to me. So yeah, I'd be happy to, you know, continue a discussion along these lines with anybody who's interested in hearing what I have to say, hopefully after this conversation, which I really appreciate you having me on for. For one reason, it's fun. It's always good stuff when we get together and noodle around about this stuff. But yeah, I am, I'm happy to

answer any questions folks have or even entertain their arguments if they think I've got stuff wrong.

Rob Dwyer (45:41.33)
Awesome. Well, Tim, thanks again so much for joining Next In Queue, and we'll talk to you soon.

Tim McElgunn (45:47.469)
Yeah, Rob, it was a pleasure and just great to talk to you always. Take care.