Good Against Remotes is One Thing featuring Kaspars Kirsis

Released on April 26, 2024

Despite Han Solo’s skepticism in the 1977 classic Star Wars, the Marksman-H training remote was used to train Jedi for roughly 900 years. Remotes were quick and unpredictable, just like a living opponent. They provided instant performance feedback, and they could be used at scale, offering unique but simultaneous training experiences to an entire class of Padawans (Jedi trainees, for you non-nerds).

The training remote is the mock call of the contact center world. But for as long as contact centers have existed, most mock calls have been carried out between two people, and getting everyone the practice they needed was incredibly time-consuming. The emergence of conversational AI now allows training classes to execute mock calls, one of the most effective training methods, at scale. Kaspars Kirsis is the Co-Founder and CEO of Ramplit, a company using AI to speed time to proficiency in the contact center by providing automated call simulations.

We discuss:

  • Benefits of Mock Calls vs Other Training Techniques
  • Challenges of Traditional Mock Calls
  • Number of Simulations and Focus Areas
  • Agent Performance Evaluation and Feedback
  • The Impact of AI in Training and Contact Centers
  • The Exciting Opportunities in Agent Assist Solutions

Connect with Kaspars on LinkedIn

Ramplit

Music courtesy of Big Red Horse

Transcript

Rob Dwyer (00:03.337)
Kaspars Kirsis. Thanks for being Next in Queue. How are you today, my friend?

Kaspars (00:09.444)
Thanks Rob, doing great. Yeah, very excited to be here. So thank you so much for your invitation.

Rob Dwyer (00:15.769)
Yeah, well, you know, you've just wrapped up some amazing international travel across the globe. Now you're back home so that we can talk about mock calls. But before we do that, tell us a little bit about you. You have a pretty fascinating background.

Kaspars (00:38.706)
Yeah, absolutely. So my background is actually not that long and extensive. My main experience: I worked for five years at an eCommerce company called Printful, which at some point was one of the fastest-growing companies in the world. We grew the company to 300 million in annual revenue, and I was head of sales there and led the enterprise segment.

And yeah, I quit my role there in May last year, and since then I've been working actively on issues that I've seen in the customer experience space that I'm excited about. But yeah, that is my background at a high level. I've been working mostly in sales, and in customer support as well, but mostly in sales.

Rob Dwyer (01:33.437)
So what made you decide to go from sales at a very successful large enterprise that you had grown to starting your own company? That's a big jump for some people.

Kaspars (01:50.918)
Yeah, no, that's true, Rob. I mean, I always knew that I was going to go back and build again. I'm a startup guy at heart, I think. And I was extremely lucky at Printful; I was leading the enterprise segment, and it was a startup within a startup. So I kind of got that experience, but I always knew that I was going to go back and build. And I think this is just the most exciting time. It's very

turbulent times in the customer experience space, obviously with AI and everything, but I don't think there's ever been a more exciting time to build something useful for the industry. So yeah, I mean, that's really the reason. I thought this is a great time to go and build something. And now I'm pretty much here.

Rob Dwyer (02:36.537)
I love it. Well, I know that you and I both share an optimism that agents, human agents, are still going to be around next year and the year after that. And while AI can help with a lot of things, really focusing on how it can improve agent performance is definitely something that you're doing, something that we're doing.

So let's talk about what you are doing, but first: with Ramplit, you're doing AI-enabled mock calls, but let's start with what a mock call is, for those who have never been in the contact center industry. Tell us what a mock call is.

Kaspars (03:30.006)
Yeah, absolutely. I think a simpler term for a mock call is role play. That's something I think everybody understands; mock call seems like more of an industry term. So yeah, think about it as just role playing. You did role plays as a kid, and you can do role plays in a professional manner as well: role playing how to speak with a customer, for example. And that's really what a mock call is. It's just a

type of role play, I would say. And obviously, in the customer experience space, that is something people have been doing for a long time. Some people have been doing it more, some less, but I personally think it's just so valuable to do, in order to teach people how to actually do the job and give them some practice time,

how to actually speak with customers. Those role plays, slash mock calls, are incredibly helpful. So yeah.

Rob Dwyer (04:33.785)
Yeah, I mean, let's dig into that, because you're absolutely right that that practice time is incredibly beneficial. Can we talk more about what exactly I get to accomplish in the world of training when I'm using those mock calls or role plays versus the other techniques that I might use in training, which are just, you know, listening to calls or,

you know, practicing in a system without talking? What do I get out of a mock call that I can't get with other techniques?

Kaspars (05:13.854)
Yeah, absolutely. I mean, it's just common sense that you learn best by actually practicing, right? And what we actually see in the customer experience space, and that's how it's mostly always been, is that learning is quite passive. You just read some materials, some guidebooks, maybe some presentations and so on.

And that's pretty much it. Obviously, sometimes some companies do some mock calls during the nesting phase, for example, but typically there's very, very little actual practice time. And I do believe it's just the best way for people to actually learn something.

And what mock calls do, essentially, is give people, and more specifically agents in the customer experience space, this ability to really see how things are going to play out in real life. And that can have different implications, right? For example, you can really start developing soft skills during these mock calls; you can learn how to navigate different situations and get prepared.

You can really learn much better about the product itself. It's a much faster learning curve if, instead of just reading about a product, you also practice how it would be to actually answer some questions about that product, right? And not just with a manager asking you some questions, but actually simulating a real customer conversation. That, I think, is a very sophisticated learning process which has

never really been possible. I mean, we've been doing mock calls human to human, which is great, though there are some disadvantages to that process as well that I'm happy to touch on later. But yeah, I think it's fascinating what we can now do in terms of helping agents get better. And as you said previously, Rob, I 100% think that humans will still obviously play a massive role in customer experience. They're not going anywhere. And I'm

Kaspars (07:30.71)
really, really excited about helping agents to get better using the latest technologies. And that's actually one of the best applications that I see right now.

Rob Dwyer (07:42.177)
Well, you just hit on something that I want to dig into, and that is the disadvantages, or the drawbacks, of the traditional method of doing a mock call, which is human to human in the training room, if you will, though there are fewer and fewer actual training rooms and more and more virtual ones. But let's talk about some of the challenges presented by the old school method.

If you were training me, you were going to be the trainer and I was going to practice, what are some of the things that we run into?

Kaspars (08:23.346)
Yeah, for sure. So first of all, it's interesting, for some reason I thought it would be different, but not all customer experience organizations are actually doing mock calls. Interestingly enough, for some there are more operational reasons; some just might not see a big value in it. But most of them actually do some type of role plays, and they feel like they are incredibly useful.

But I think the problems that we've seen, or that I had personally in my previous roles, are some operational challenges, like scaling that process up. If you think about it, in some companies or organizations you can have a hundred agents per one trainer.

And obviously this one trainer can't do proper role playing or mock calls with each of those agents, right? It's just impossible. And then you try to get agents to do role plays with each other, and I have big respect for companies trying to make that effort, but the reality is it just doesn't scale. It doesn't scale in terms of the process, and it doesn't scale in terms of the costs. So there are some operational challenges, but also

Rob Dwyer (09:17.469)
Hehehe

Kaspars (09:42.806)
what we found, and something that I didn't anticipate before we actually started working on this and seeing how agents use the solution, is that for some reason they love role playing with AI more than with a human. And that is mostly related to the fact that agents sometimes feel like role playing with their manager is a bit awkward. They don't feel completely safe doing that.

So there are different kinds of elements like that as well, which I think are very different. Whereas when you're role playing with an AI, you feel completely safe. You have zero, absolutely zero fear that you're going to make some mistakes, and you feel very comfortable. And also, sometimes human trainers, when they're doing mock calls,

can be a bit biased, right? And they're not prepared to carry out very different kinds of simulations, whereas we can train AI to be very good at carrying out different mock calls at a very high level. So, yeah.

Rob Dwyer (10:55.429)
You hit on a number of things, and I can tell you from my own experience that I've seen all of this. So number one, when you're doing human to human, whether that's with the trainer or the manager, or agent to agent mock calls, and I've done both:

we all as humans sometimes just feel like we're being judged, particularly when we're learning something new. And so there's this inherent fear of that other person judging me, right? They're listening to me, listening to my responses, and maybe I'm gonna mess up. And that's expected, right? The learning process involves mistakes. But as humans, we feel

sometimes a little bit of shame when we're making those mistakes in front of another person. So there's that aspect, and you talked about how the AI is very safe and I can make those mistakes without being judged. That feels a lot better. The other thing that you touched on is just the scalability, and I have seen this firsthand.

Kaspars (12:03.882)
Yeah, that's kind of the point. Yeah.

Rob Dwyer (12:13.381)
In a lot of ways, I've seen some really creative ways to try and make this scalable, but it always presents some incredible challenges. So if I, as a trainer, am going to do just one-on-one role plays with a class, that class could be 10 people, it could be 20 people, it could be, to your point, a hundred people. I have a limited amount of time

with each individual to do those role plays. And in the meantime, the other students are hopefully observing and learning, but they're not doing, right? So if I've got 10 people, that means that during the role play time, 90% of my students are not actively doing. And that is really not a great use of time.

It is good that they're observing and hopefully learning through observing, but to your point, it's the doing that really accelerates the learning process. And that's if I have 10 people; if I've got a class of 20 or a hundred, obviously this creates some immense challenges. Now, you touched on something that we can do, right? You can pair these students up, pair agents up, to do some role play,

or even maybe put them in groups of three where you've got someone facilitating. But agents, when they're brand new, don't know what they don't know, right? So crafting a situation and having something realistic for them to do can be a challenge. And I've seen some trainers do incredible things, where they'll have scenarios already prepared, and so they're giving a scenario to

one agent: this is who you are, this is your issue, etc. And that works, but it takes an incredible amount of prep time, and it's incumbent on the trainer to make sure that they've got everything set up beforehand. There's just a lot of work that goes into it, and it makes it really hard to scale. So it's definitely a challenge.

Rob Dwyer (14:38.429)
Those in the training world understand the value, but they've also, for a long time, really struggled to make it work, and work effectively, because there's only so much time in the day. That's all there is to it.

Kaspars (14:54.854)
Yeah, that's exactly why I was saying before, Rob, that I have huge respect for managers and trainers who are actually putting in that effort, trying to facilitate that process. But it's not easy to really make that process work well at scale. And then, yeah, to your point, you can

listen to and watch examples of how your peers are doing mock calls, and there's some value to that. But I would say what you should really be doing is this: you can listen to some very good examples, recorded calls or something that shows what a really good conversation is, and then that's it. The next thing you should be doing is going out there on your own in a simulator and practicing. And another great thing that we've been able to achieve:

it's not only just practice, right? For example, when you do mock calls, you have to do the mock call, but then you also have to give feedback on how the agent is performing, which is another thing a trainer needs to do, to listen to everything and give feedback. So we've also managed to include that as part of our platform, where an agent can receive feedback on how they're performing immediately after the call.

And I think that's like a very, very important part of that entire process as well to make it more scalable.

Rob Dwyer (16:23.077)
Yeah, absolutely. So let's talk about scenarios. Like how many different kinds of scenarios can you build out to have agents practice?

Kaspars (16:36.45)
So this is actually funny. Typically, when we start engaging with our partners and they start asking questions about the solution, they ask, okay, but then how do we upload 10,000 simulations for every single situation?

And then obviously we start talking, and I explain: listen, the point of the simulation is not to play out every single scenario, even though that's theoretically possible. That's not the point. The point is to teach your agents, for example, soft skills, and you don't need 10,000 simulations for that; you can have a much smaller number. Or you can teach about some product group or solution, more on a high level, for them to navigate situations within those domains.

So you don't need that big a number of simulations. But anyway, you can have hundreds or thousands of simulations; that's not the problem. And creating those simulations is also rather simple. It obviously depends on the situation: if there's a very complex conversation, it might be difficult to

construct that simulation and recreate that conversation. But otherwise, it's rather simple, and you can have a pretty much unlimited number of simulations on the platform.

Rob Dwyer (18:08.657)
You hit on something that I do think is important for trainers and organizations to realize when they're talking about creating training and what they focus on. I can't tell you how many times I've seen this, and it happens more with the students, right? They throw up their hands, whether that's virtual or in person, and they're like, well,

what if this happens? And they throw out some wild edge case. The reality is that the chances of them actually having to deal with that particular edge case are slim to none. It's rarely going to come up. And when it does come up, you can just reach out for some help with it. I don't need to know how to handle every single edge case, because the vast majority of

my interactions with customers are going to fall into a pretty slim set of scenarios. Depending on the complexity of the types of calls that you take in your business, and maybe how much cross-training I have or how many different skills I'm in, there's really going to be a pretty limited

Rob Dwyer (19:34.029)
number of scenarios. Executing within those, executing at a high level, and executing skills that are transferable across different skills or queues or call types, right? Being nice and being empathetic and, you know, expressing willingness, those kinds of soft skills transfer across all call types.

It's not scenario-based. It's something that I just need to do.

Kaspars (20:01.174)
Exactly.

Rob Dwyer (20:06.569)
Can you talk a little bit about whether you've integrated with any other systems? So, for instance, specific systems that agents have to navigate, to kind of extend that role play into not only am I talking to someone, the AI someone, but also navigating a system. Because that's a skill that I feel like most agents struggle with early on.

Kaspars (20:36.802)
Yeah, no, that's a great point. And it's actually extremely, extremely important to create that kind of immersive experience, because it can't just be a voice conversation when the agent needs to actually take some actions. So that, I think, is very, very important. It's a bit challenging for us to do, because these systems can be very different

from partner to partner. We do have some basic ones that we can provide, but most of our partners really try to recreate that experience using their own systems, and sort of have their own little sandboxes where the agents play around when they carry out these simulations. It is a bit of a burden, I can acknowledge, on our partners to facilitate that part.

We have some ideas on how to solve that in the future, but right now that's really how it works. So yeah, if you are a customer experience organization, you would need to build a small sandbox for your agents to play around in within these simulations. But it is very, very important; otherwise it's not realistic enough.

You have to take some actions, right, to carry out a full conversation. So, 100%.

Rob Dwyer (22:03.1)
Yeah.

Rob Dwyer (22:11.429)
So I understand we can obviously have potentially tens of thousands of different types of interactions, but that's probably not realistic, and I imagine most of your partners really focus on a core number of different simulations. Is there variation within each specific simulation, and how does that play out for the agent?

Kaspars (22:39.882)
You mean whether the simulations are scripted or can be a bit more flexible in terms of where the conversation can lead? Yeah. So the way it works is you build a framework and you add some context in the system so that the AI understands what this conversation is about. So for example, you add information like, okay, the customer is going to call about an issue they have with their

Rob Dwyer (22:47.311)
Mm-hmm.

Kaspars (23:09.366)
purchase order, which they placed in an e-commerce store or something. And then you add a few other details: the customer is angry, some verification information, and a bit more context. And then what happens is each conversation is unique. Our simulations are not scripted, because we use large language models that are based on neural networks.

So each conversation can be rather unique, and it can go in different directions. But the great thing is that our models are capable of using, I can say, common sense, and understanding that if this simulation is about a customer calling about an issue they have with their order, and an agent starts asking questions about something completely random, like how is your day going,

then the AI understands that's not what this conversation is about. So there is a limit, obviously, to where this conversation goes. But the point here is that each conversation is rather unique, and that's great for agents, so they can practice different experiences.
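
(To make the mechanics Kaspars describes concrete: the "framework plus context" setup maps naturally onto a system prompt driving a chat loop. Below is a minimal sketch of how such an LLM-backed simulation could be wired up, assuming OpenAI's chat completions API. The scenario details, model choice, and code are illustrative guesses, not Ramplit's actual implementation.)

```python
# Minimal sketch of an LLM-driven mock-call simulation (illustrative only;
# not Ramplit's implementation). A scenario "framework" is injected as the
# system prompt, and the model improvises the customer's side within it.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical scenario context: who the customer is and what the call is about.
SCENARIO = """You are role-playing a customer on a support call.
Context:
- You placed an order in an e-commerce store and it arrived damaged.
- You are angry and impatient, but never abusive.
- Verification details: name Jane Doe, email jane.doe@example.com.
Rules:
- Stay in character; never reveal that you are an AI.
- If the agent asks something irrelevant (e.g. "how is your day going?"),
  steer the conversation back to your order issue.
"""

def customer_reply(history: list[dict]) -> str:
    """Generate the simulated customer's next turn given the call so far."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": SCENARIO}] + history,
        temperature=0.9,  # higher temperature -> each run plays out differently
    )
    return response.choices[0].message.content

# The trainee's utterances go in as "user" turns; the model answers as the customer.
history = [{"role": "user",
            "content": "Thanks for calling, this is Alex. How can I help you today?"}]
print(customer_reply(history))
```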

Rob Dwyer (24:28.361)
I have news for you, Kaspars. Customers are never angry. I don't know if you knew that, but I...

Hahaha

Kaspars (24:37.941)
Yeah, I mean, no, Rob, it's true, but that's the thing: I'm not sure whether agents should be thinking that way. And for example, here's something else I think AI mock calls do very well, and it's something you mentioned previously, right? We don't need to teach agents every edge case. And that's also not what our

simulation does. But what it does is prepare you. It prepares you to have that difficult conversation, right? And what we've also seen is that in a lot of organizations, agents don't feel prepared for that. And that can have very practical implications, like the attrition rate, which is rather high in our industry in the first 90 days.

Rob Dwyer (25:09.13)
Mm-hmm. Yeah.

Rob Dwyer (25:29.801)
Mm-hmm.

Kaspars (25:29.846)
In a lot of cases, obviously, it's maybe because the agent just decides, you know, maybe this is not the right thing for me. But in a lot of cases, it's also because the agent is just discouraged. They don't feel like they've been prepared. They feel like they're doing a terrible job, when in fact they're actually doing a great job; they just didn't get enough practice and preparation for those types of conversations. And again, these types of simulations work very well for that.

Rob Dwyer (25:59.433)
Yeah, I want to get into some challenges. I'm curious how you've approached them, and whether you've been able to solve for them. The first one being: how do I keep agents from taking things off the rails, so to speak? I mean, obviously we know that large language models can, with the right prompting,

go awry sometimes. So how do you approach making sure that agents don't try to, you know, actually get the AI to behave in a manner that is not what we're looking for?

Kaspars (26:49.686)
Hmm, interesting. That's a good question. To be very honest with you, I haven't personally seen that type of behavior from agents so far. Maybe that will start happening. Yeah, 100%, maybe that could happen. But I mean, it's like...

Rob Dwyer (27:03.482)
Okay, just wait for it.

Kaspars (27:15.146)
it's not like you have a lot of time to look for those loopholes, right, if you're doing the training? And you're also getting scored against rather specific things, immediately after the mock call. So it's not like you have a lot of room to try to mess the system up. And also, maybe we're just lucky to be working with great organizations, but we really haven't seen agents trying to

somehow game the system. It's maybe a bit more common in the sales world. We also work with, for example, large public companies that are large B2B sales organizations, and those folks might try. What they have to do, for example, one of their KPIs, is book a meeting: if they're practicing an outbound conversation, their KPI within the simulation is to book a meeting.

Rob Dwyer (28:10.967)
Mm-hmm.

Kaspars (28:13.234)
And then, then they're trying to, uh, there's very specific goal, right? This to get AI to agree on a meeting and then they might be trying and, you know, different types of methods, which might, and the issue with that is that it's just not realistic. And that's also obviously a limitation with our solution, right? Like if it was like, okay, I'm trying different creative methods and that can also work in real life, that's awesome. That's perfect. Right? That's, that's what this is all about. But.

Rob Dwyer (28:30.301)
Right.

Kaspars (28:42.262)
But obviously, we're not there yet. So sometimes AI can behave in ways that are not realistic, and agents can try to exploit that. But fortunately, we haven't seen that much, to be honest with you.

Rob Dwyer (28:45.152)
Mm-hmm.

Rob Dwyer (28:57.481)
Yeah, I imagine the scoring probably helps with that, and that actually takes me to what I wanted to talk to you about next, and that's, as a trainer, how do I see agent performance? What kind of dashboard do I have? And do I have some expectation of, hey, once we hit a certain threshold, that

Rob Dwyer (29:26.281)
tells me we've reached a level of proficiency and I can move on. What does that trainer experience look like?

Kaspars (29:35.134)
Yeah, sure. So I think the way we score our mock calls is rather similar to how calls are actually scored in production in a lot of cases. And that's actually a very well explored area, right? There's a ton of companies that are doing

Rob Dwyer (29:56.093)
Mm-hmm.

Kaspars (29:58.742)
auto QA, right, on production calls. And we both know there are a lot of limitations to that. Even though the technology right now is significantly better than it was, there are a ton of limitations. And it's the same for mock calls. What we can do very well is score mock calls, first of all, on things that are very clear, whether they were done or not done.

For example: did the agent introduce themselves properly? And then we can give context on what a proper introduction means. That's very clear; AI can easily score that. Where it gets more tricky is, for example: did the agent use positive language? It's not like you can define perfectly what positive language is, right? There are some nuances to that. It's maybe specific keywords or something else.

So there can be some situations where our system gives feedback to an agent, and then a training manager looks at that and feels like, okay, I understand why the AI did that, but in reality I wouldn't give that type of feedback or score to the agent, right? So there are things like that, but otherwise, you know,

Rob Dwyer (31:11.997)
Yeah.

Kaspars (31:24.826)
I'm very excited about this space as well, about how AI is capable of giving agents this immediate feedback. I think it's incredible. I truly believe it can significantly raise the quality standards within our space and make this process, this feedback loop so to speak, much faster.
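
(Again, to make this concrete: the scoring approach Kaspars outlines, crisp pass/fail criteria carrying their own definitions, plus fuzzier ones like "positive language", can be sketched as an LLM grading a transcript against a rubric. The rubric items, JSON shape, and code below are illustrative assumptions, not Ramplit's actual scoring pipeline.)

```python
# Minimal sketch of post-call scoring against a rubric (illustrative only).
# Clear-cut criteria include an explicit definition of "proper"; fuzzier
# criteria, like positive language, are where a trainer may still disagree.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = [
    # Clear-cut: easy for AI to score reliably once "proper" is defined.
    "Did the agent introduce themselves properly? A proper introduction "
    "states the agent's name and the company name in the first turn.",
    # Fuzzier: scores here may warrant human review, as noted above.
    "Did the agent use positive language, e.g. focusing on what can be "
    "done rather than what can't?",
]

def score_call(transcript: str) -> list[dict]:
    """Grade a finished mock call and return per-criterion feedback."""
    prompt = (
        "Score the mock-call transcript against each criterion. Respond in "
        'JSON as {"results": [{"criterion": ..., "pass": true or false, '
        '"feedback": "one sentence for the agent"}]}.\n\n'
        "Criteria:\n- " + "\n- ".join(RUBRIC) + "\n\nTranscript:\n" + transcript
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # force valid JSON output
        temperature=0,  # deterministic grading
    )
    return json.loads(response.choices[0].message.content)["results"]
```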

Rob Dwyer (31:51.433)
Yeah, yeah, it's all very exciting. And obviously, I think all of us in the space dealing with AI recognize that there are still some limitations. But we're seeing those limitations overcome, sometimes by leaps and bounds, much faster than any of us would have expected just a short time ago. So it's

pretty obvious to me that the technology is only going to improve. And the more it improves, the more comfortable agents will be interacting with it, and the more confident organizations can be that we're really delivering on something that can reduce the time to proficiency for agents, and for contact centers in particular.

That's where a big part of the cost comes from: time to proficiency. And you mentioned earlier the turnover. If I'm losing people because they don't feel prepared when they actually hit production, I've sunk a pretty big investment into them, and now I have to go do that all over again.

But also that time to proficiency, how quickly they can feel confident to just do things on their own, makes a huge impact organizationally. And so if companies aren't looking at this type of technology today, I would say you need to open your eyes a little bit and pay attention. It may not be exactly right for your organization, but you should be evaluating that

Rob Dwyer (33:41.905)
for sure because it can make a huge difference. What have we not touched on that you'd like to talk about today?

Kaspars (33:56.6)
I know, great question. This is maybe unrelated to what we do, but obviously agent assist solutions, I think there's an incredible opportunity there as well. And you know, it's not like it's completely new, but it's something that more and more organizations are trying

and really adopting. And to your point about time to proficiency, that can significantly cut it down as well. It's not what we do right now per se, but it's something very exciting in the space, I think. So yeah, I just feel like there's never been a more exciting time in our space. So yeah, 100%.

Rob Dwyer (34:46.713)
Yeah, I am with you there. Well, Kaspars, thank you so much for joining Next in Queue. If someone wants to get in touch with you and find out more about what you're doing, find out more about Ramplit, is LinkedIn the best way to do that?

Kaspars (35:05.642)
Yeah, LinkedIn is fine. You can also just go to ramplit.com; there are some contact details there. But LinkedIn is fine, if people can remember and spell my name. But yeah. Okay, all right.

Rob Dwyer (35:18.609)
They don't have to, because we're going to put a link down in the show notes so that they'll see exactly how to spell it, and they'll be able to click on the link and follow you and connect with you. And we'll also put a link to the website so they can go there and check that out. So thank you so much for joining today. It's been a pleasure.

Kaspars (35:41.706)
No, thanks to you, Rob. Yeah, big fan of the work you do, so I really appreciate all your contributions to the industry. Thank you so much.