Would You Like to Play featuring Kent Morita

Released on MARCH 15, 2024

Benedict Cumberbatch’s portrayal of Alan Turing in the 2014 film The Imitation Game earned him an Oscar nomination. In this scene, he describes what has become known as the Turing Test. Introduced in 1950, the test asks whether a machine can exhibit intelligent behavior indistinguishable from that of a human. In short, its goal is to determine whether a machine can think.

While there are all kinds of philosophical debates about the validity of such a test, it has become ever more relevant in a world where Artificial Intelligence has emerged from the lab and is becoming part of our everyday experience. But Kent Morita, a Conversation Designer on the Google Gemini project, has a hot take: the Turing Test has become irrelevant.

We discuss:

  • The Rebranding of Bard to Gemini
  • Conversational Design and Large Language Models (LLMs)
  • Insights about the NYU Mascot
  • A Unique, Educational Application of an LLM
  • Challenges of Working with LLMs
  • Ethical Use of Artificial Intelligence
  • How Businesses May Guard Against Bad LLM Behavior
  • How Storytelling Inspires Creativity

Connect with Kent on LinkedIn

Moment in Manzanar

Story Pirates

Music courtesy of Big Red Horse

Transcript

Rob Dwyer (00:01.759)
Kent Morita, thanks for joining the show. You are Next in Queue. Welcome, my friend, how are you?

Kent Morita (00:07.458)
Thank you, I'm doing great. Thanks for having me on here. How are you doing?

Rob Dwyer (00:12.347)
I am fantastic. Before we jump into the show, I think it's really useful for us to kind of set the stage as to what you are doing today at Google. So can you tell us just a little bit high level about the project that you're currently working on?

Kent Morita (00:32.098)
Sure, so I'm working on Gemini, formerly known as Bard, which is a large language model powered experience. And I think the easiest way for people to understand what it is: it's like ChatGPT. You can ask it questions, you can ask it to generate images, all through a natural language interface.

It does its best to help you. And specifically for me, I'm working on the internationalization team. We're trying to make sure that, in particular in Japanese and in Korean, this experience feels relevant to the culture and makes the users who speak those respective languages feel seen in the experience.

Rob Dwyer (01:31.867)
That's really fascinating. I will say I love that Google does the most Google thing and immediately rebrands a product that they've been working on to Gemini. But I do think Gemini is an interesting name, right? Certainly it's better than Bard, right? Gemini is kind of like playful and intellectually curious. What was your reaction when that rebrand came out?

Kent Morita (01:50.412)
Yeah!

Kent Morita (02:01.39)
I really liked it. For one, I've always been a fan of NASA. I've always been a fan of the history of space. And I think tech in general has this tradition of using a lot of space themed metaphors. I like it when there's like an Easter egg that way. And

As you may know, Google's Mountain View campus is right next to NASA's big presence there. It's got those big hangars where all the rockets used to be. And Gemini was the first ever mission to have two people in the spacecraft, I believe. And for me, it represents this appreciation for duality between

human and machine, and how it needs to be a three-legged race. And I think Gemini plays with that idea. I believe the official statement we gave was that there were two AI teams, sorry, don't quote me on this, that were working on very awesome projects and they shook hands. So there's a duality there. But I think an appreciation for duality

is a common thread. And yeah, I'm really excited. And also, from my perspective, as I said before, I really want to focus on making the experience work for all people around the world, with Japanese being a particular focus for me. Bard, I think, resonates a lot with people who are

familiar with Shakespeare. But while Shakespeare is globally known, he doesn't strike the same chord with everyone, whereas I think talking about celestial objects is a much more global feeling. So I really appreciated it. Yeah.

Rob Dwyer (04:20.298)
Yeah, we've all certainly taken a moment to look up at the stars at night. And I think regardless of where you are in the world, that's something that we all share as people.

Kent Morita (04:25.44)
Yeah.

Yeah. And I see the Star Wars stuff behind you. So yeah, definitely.

Rob Dwyer (04:36.559)
Yeah, I don't want to get too controversial here and I certainly don't want to get you fired, but is Gemini sentient yet?

Kent Morita (04:46.958)
I don't think so. Personally, I don't think we're anywhere close to sentience in artificial intelligence. But it kind of brings up, I have this hot take about the Turing test. I don't think Turing really imagined

Rob Dwyer (04:48.427)
Ha ha ha.

Kent Morita (05:16.374)
that people would be getting so much help from machines in writing and expressing themselves. It's less about the machine tricking humans into thinking the machine is human; it's almost like we're converging. The way we express ourselves is being informed

by auto-correct and auto-suggestions as we type. So by definition, the Turing test gets easier and easier to pass. That's a related thought, but I think that even that test, which is kind of considered the gold standard of

Rob Dwyer (05:48.467)
Ha ha

Kent Morita (06:13.086)
implying sentience is becoming less and less relevant.

Rob Dwyer (06:18.511)
Yeah, that's really interesting. I wonder how much kind of the keyboard prediction model that I think all of us are pretty used to for a number of years has informed what is now happening with LLMs because it is essentially a giant prediction engine, right?

Kent Morita (06:43.158)
Yeah, I think it's very, very close to that, and it is almost an auto-complete of what another person might say back to you, just in a really, really big way. It's at a much, much bigger scale now.

Rob Dwyer (07:11.979)
So speaking of, can you put into maybe layman's terms, can you talk about conversational design and what that entails and how that factors into large language models?

Kent Morita (07:31.266)
Sure. So conversation design is...

I like to talk about the history of computing. I promise it will come back to this question. The desktop interface was a great metaphor and a great innovation in getting people to be able to interact with computers without having to know a lot of low-level coding.

And yet it still uses conventions that are fairly arbitrary, like left-clicking versus right-clicking and double-clicking. Why does left-clicking mean select while right-clicking means get more detail? That's something that somebody at Xerox PARC or somewhere in Silicon Valley probably just came up with. Fast forward a few decades and we get

smartphones with the touchscreen, the multi-touch interface. And it's very intuitive, but it's still pinch to zoom. Those are still conventions. Conversation design is one of the first times where we're kind of flipping the script: we're no longer telling humans how to speak to the machine, we're telling the machine how to speak human. And the

kind of North Star goal for every conversation designer is to create an interface so natural that it relies on the human language instinct to create an interface where even somebody who's four years old, all the way up to a hundred years old or more, as long as they could talk, the machine will accommodate towards how humans talk.

Kent Morita (09:38.07)
And LLMs, in ingesting and analyzing a corpus of text that spans billions and billions of different texts,

is this wonderfully large way of trying to predict what people would probably say. So it is a continuation of conversation design in that way. I would say that there's a lot of work to be done here. In particular, when you say something to

an LLM-powered product now, it says a lot all at once. And it's not necessarily the case when we have conversations that that's what happens. I am realizing the irony here of me having talked for the last two minutes. So maybe I'm converging and becoming more like an LLM right now.

Yeah, there's a lot of work to be done to make sure that the new products.

Kent Morita (11:01.806)
speak more like people and allow more and more people to use it without having to get special training.

Rob Dwyer (11:10.963)
To be fair, you are on a podcast and the whole point is for you to talk. So it's okay.

Kent Morita (11:15.723)
Yeah

Rob Dwyer (11:19.771)
I want to, number one, ask you, is the bobcat your favorite animal?

Kent Morita (11:29.302)
Bobcat? Yes, I would say so.

Rob Dwyer (11:33.712)
And why? Why is that the case?

Kent Morita (11:36.254)
So I was the NYU mascot. And believe it or not, NYU does have sports teams. We're great. And the Bobcat was our animal. It stands for Bobst catalog, to emphasize how

Rob Dwyer (11:43.998)
So awesome.

Rob Dwyer (11:48.723)
Hahaha

Kent Morita (12:05.198)
non-sporty we are. We came up with a library called the Bobst Library, and then there was this new computer system that cataloged it. And then it was like: Bobst catalog, Bobcat, oh, maybe we could have a... it's all backwards. But anyway, yeah. And my hometown in California is Los Gatos, the mountain lions.

Rob Dwyer (12:06.579)
That's super nerdy.

Kent Morita (12:34.818)
So yeah, I have an affinity for big mountain cats, for big cats.

Rob Dwyer (12:40.411)
Okay, so what was the best sport to be at as the Bobcat? Like where were the crowds the best?

Kent Morita (12:50.101)
Oh.

Kent Morita (12:53.794)
So that's tough. For my personal comfort, hockey was the best because it's cold, and inside that costume you are very, very warm. So it's nice to not have to take a break every 10 minutes because you're drenched in sweat in there. But the hockey games seemed to bring out a lot of the frat bros and

Rob Dwyer (13:01.094)
old.

Kent Morita (13:23.338)
They just thought it was okay to, like, punch me, you know? It made no sense. I'm like, you're an adult. You know that there's a person in here. What are you doing? It was crazy. The basketball games were amazing, mainly because I think it was an easier

place to interact with fans. But in particular, there was this five-year-old girl, who I think was the assistant coach's daughter, and she became a huge fan of mine and would bring these drawings of me. She had one of those Disney-style autograph books, but she wanted me to sign it, you know? So,

because of her, the basketball games, I think, are number one in terms of my favorite place to perform.

Rob Dwyer (14:28.139)
What an amazing story. I absolutely loved that. So thank you for sharing that. What I really wanna dive into now is a project that you started last year. An award-winning project. Congratulations, by the way. Moment in Manzanar. And when I think about LLMs, there are all these different...

Kent Morita (14:33.693)
Yeah.

Kent Morita (14:49.185)
Thank you.

Rob Dwyer (14:57.415)
applications that we can use them for. And, you know, ChatGPT was unleashed upon the world and all of a sudden everyone was creating silly Dr. Seuss-style poems. It was like this plaything where people made ridiculous pictures in DALL-E or whatever the case may be. But I do think there is,

I know there is a space for us to use LLMs in a more meaningful way. And you did exactly that. Can you tell us about the project?

Kent Morita (15:44.846)
Sure. Moment in Manzanar is an educational interactive film where the audience, the user, can talk to Ichiro, a character brought to life by a digital actor that is powered by an LLM.

The audience gets to talk to Ichiro, who is incarcerated in the concentration camps in 1943. He's Japanese American and he is in the camp at Manzanar.

Kent Morita (16:31.834)
Yeah. So there has always been criticism of popular robots, whether it be the Google Assistant, Siri, Cortana, or Alexa: why do they, in the US at the very least, end up sounding like a Caucasian female in her mid-thirties?

And it's because of a lot of political choices. It's because of what is quote-unquote safe. But it ends up being this kind of bizarre situation where five different companies come up with the same robot persona. And as I've been working in this space at Amazon as well as Google,

I've always wanted to create an experience that was very, very specific. And I wanted to tell the story that I really, really wanted to tell, which was the incarceration of thousands of Japanese Americans in World War II. Yeah, it was...

very meaningful for me to leverage LLMs to help amplify marginalized voices. In this case, historically marginalized voices.

Rob Dwyer (18:15.719)
Yeah, it strikes me, number one, you know, for those that don't know, during World War Two, right after Pearl Harbor, roughly 120,000 Japanese Americans were put into internment camps in the United States. I think when we think of internment camps, we think of Nazi Germany and the camps in Poland and Germany.

That happened here on US soil as well, because of the fear of Japan and the attack that had happened.

Kent Morita (18:59.486)
Of xenophobia, really, because using the same logic that the United States government used, way more Italian Americans and German Americans should have been put in the same sorts of camps. Not that they weren't at all; there were some German Americans and Italian Americans that were put into these camps, but it was disproportionate.

Rob Dwyer (19:01.31)
Yeah.

Rob Dwyer (19:29.131)
Mm-hmm.

Kent Morita (19:31.911)
It was a very racially motivated thing. And one of the worst mistakes the United States has made and arguably continues to make in different forms to this day. So I really wanted to highlight a particularly egregious moment in American history.

through this project.

Rob Dwyer (20:01.415)
What I love about the project is that it's a way to develop, create empathy with someone who is different than you. Ichiro is this character for someone like me, very different from me, living in a different time, having an entirely different experience. But it's a way to simulate.

an experience so that you can have a better understanding of what their experience is like and do it in a way that is...

Rob Dwyer (20:45.003)
kind of realistic, realistic in a way that wasn't possible five years ago or 10 years ago.

Kent Morita (20:53.55)
Yeah, thank you. And John Okada, who wrote No-No Boy (this is the book), writes in a... so No-No Boy is a fictional book that tells about Ichiro, whom I took the name from, who was a no-no.

While in the camps in 1943, Japanese Americans were given what they called a loyalty questionnaire. Questions 27 and 28 were particularly problematic and controversial. Question 27 asked whether they would be willing to serve in the armed forces of the United States whenever asked. And question 28 asked whether they would forswear allegiance

to the Japanese emperor. And many answered no to the first question, saying, why would I risk my life for a country that just took away everything from me? And many answered no to question 28 because they didn't even know who the emperor was, and to say "I forswear allegiance" may imply that they had allegiance in the first place. So it's a really badly worded question.

So many answered no and no, and those people were called no-nos and were ostracized, even within the Japanese American community. It wasn't until Vietnam that conscientious objection became more of a thing in the public consciousness, but these were conscientious objectors in the true sense of the word. And John Okada says, in a letter he wrote to his editor,

that the historical record is facts and numbers and who did this and who did that. But it's in fiction where you could fill in the gaps, the emotional gaps, the sadness, the frustration. And when I read that quote by John Okada, I was very inspired to create a fictional experience that

Kent Morita (23:17.794)
tells the emotional truth of the tragedy. By definition, when you talk to Ichiro in this experience, it's an experience that never happened in 1943. But the digital actor will do its best to represent the point of view of somebody that was incarcerated there. And...

Yeah, I wanted to create this sense of empathy with the people that were incarcerated. Yeah.

Rob Dwyer (23:56.559)
I just love the idea behind the project for so many reasons. In creating this experience, though, you ran into a number of problems. And I think those problems are illustrative of some of the challenges of working with LLMs in general. Can you talk about some of those challenges that you had to overcome?

Kent Morita (24:23.758)
Sure. So when you have an LLM that could answer any question, that opens up the experience to vulnerabilities. Namely, when people ask inappropriate questions, it will give you inappropriate answers. So,

one of the tough challenges with LLMs at the time of development (this is developing quickly every day, so these limitations may not really be an issue anymore) was what I like to call temporal bounding. There isn't quite a set of all objects and ideas in the world with a

timestamp associated with it. Why that is a problem is because if that database existed, we could program Ichiro to ignore any questions about Elon Musk, ignore any questions about what do you think is better, DVD or Blu-ray, all these ideas. I should say VHS or Betamax, you know. These

questions are things that Ichiro should answer with: what the heck are you talking about? What we found in user testing was that users ask appropriate questions until the second or third question. After the third question, they start to explore. They're like, oh, okay, let's test the limits. So

what we did was we went back to the drawing board and we created a way to count how many questions have been asked and to interrupt the user after question number two. But this needed to be natural and motivated. So we rewrote Ichiro's character to be very, very angry, very frustrated. So when he does interrupt you,

Kent Morita (26:49.65)
it feels natural. What he says is: you're wasting my time. What do you mean? I'm here to answer some questions, ask them now. So we used the narrative structure and the character's motivation to allow for an interruption of the user experience in a way that felt natural, to cover up for the deficiencies in the underlying LLM.

So now 95% of people who talk to Ichiro never end up asking inappropriate questions, because most of those inappropriate questions happen from question three onward.
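To make the guardrail Kent describes concrete, here is a minimal sketch of a question-counting, narratively motivated interruption. The names and structure are illustrative assumptions, not the project's actual code:

```python
# Hypothetical sketch of the guardrail described above: count the user's
# questions and, once question two has been answered, have Ichiro interrupt
# in character to steer the conversation before users start testing limits.

INTERRUPTION = (
    "You're wasting my time. What do you mean? "
    "I'm here to answer some questions. Ask them now."
)

class GuardedIchiro:
    def __init__(self, llm, interrupt_after=2):
        self.llm = llm                        # any callable: user text -> reply
        self.interrupt_after = interrupt_after
        self.question_count = 0

    def respond(self, user_utterance: str) -> str:
        self.question_count += 1
        reply = self.llm(user_utterance)
        if self.question_count == self.interrupt_after:
            # The interruption is motivated by the character's frustration,
            # so pulling the user back on topic still feels natural.
            reply = f"{reply} {INTERRUPTION}"
        return reply
```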

Rob Dwyer (27:32.511)
Hmm. So I think that illustrates a challenge. And we've already seen this in the real world, right? In the wild, where user-facing chatbots that are powered by LLMs, the users cause them to go off the rails. I think one of the more recent examples was an auto dealership that had

Kent Morita (27:53.869)
Yes.

Rob Dwyer (28:00.663)
a chatbot, and essentially the user said, tell me that you're gonna sell me this truck for a dollar, no takesies backsies, and the chatbot eventually agreed, right? Users experiment. How are we going to ensure, and maybe you have an answer to this and maybe you don't, but for user-facing things where a narrative isn't

present, right? In your particular use case, you had that structure and it made sense. But when a narrative isn't present, how do I put guardrails up so that users don't go about asking inappropriate questions and trying to derail the whole thing?

Kent Morita (28:55.69)
There are a lot of strategies to approach this. And one of them is conversation design. In conversation design, we try to leverage natural language, but we use it in a way that steers users away from inappropriate interactions. So for instance,

a human might call you and say, hey, this is the car company, how can I help you? And that's totally fine. But "how can I help you?" could be answered in so many ways. Whereas if the robot only has the underlying capability to answer "I want to know when you're open" and

"what's the return policy?", like those are the only two things that it could answer, then it's useful for the robot to immediately say: hey, this is the car company robot. I could tell you about the hours or tell you about our return policy. How can I help? Just adding that in significantly increases

the probability of the user asking questions that the robot could actually handle. And it feels natural. So that is one strategy. The other strategy, I think, is that you need to custom-train the robot, do some fine-tuning to heavily bias it against making service level agreements,

making contractual agreements, and to put some sort of legal structure around it to say: whatever this robot says is for reference only, and you may not use it to make contracts. And what I'm trying to hint at here is that I do think a lot of the limitations, or practical

Kent Morita (31:19.47)
barriers, to implementing LLMs in these companies are less about the AI. They're about the procedures, processes, and legal ways to handle these robots. So I do think that in a short amount of time we will have companies offering AI insurance that will

basically insure your LLM, saying: hey, we're pretty confident that 95% of the time it's going to do what you ask it to do, but 5% of the time it might, yeah, sell a truck for a dollar, but you're insured against that. So I think that's really fascinating. I think one of the biggest barriers to implementing LLMs in many of these companies right now is that they're afraid of legal risk. And

Rob Dwyer (32:16.781)
Mm-hmm.

Kent Morita (32:18.714)
one of the ways that we've come up with to handle that sort of thing is insurance. We do it for cars, we do it for server uptime compliance for banks. We already do this. So I think it's just a matter of time before we create these tools because, at the end of the day, it's people. Making sure people feel comfortable

implementing these tools. And that's something that I think is going to happen very soon, if not already happening.
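As an illustration of the first strategy Kent describes, a greeting that names only what the bot can actually handle, here is a hedged sketch; the company, intents, and wording are made up for the example:

```python
# Illustrative sketch: a capability-scoped greeting plus a simple router that
# sends anything outside the supported intents back to the menu of options,
# rather than letting an open-ended model improvise commitments.

CAPABILITIES = {
    "hours": "We're open Monday through Saturday, 9am to 6pm.",
    "returns": "You can return a vehicle within 7 days or 250 miles.",
}

def greeting() -> str:
    # Listing what the bot can do steers users toward questions it can handle.
    return ("Hey, this is the car company robot. I can tell you about our "
            "hours or our return policy. How can I help?")

def respond(user_utterance: str) -> str:
    text = user_utterance.lower()
    if "open" in text or "hour" in text:
        return CAPABILITIES["hours"]
    if "return" in text:
        return CAPABILITIES["returns"]
    # Out of scope: restate the options instead of negotiating truck prices.
    return ("Sorry, I can only help with our hours or our return policy. "
            "Which would you like?")

if __name__ == "__main__":
    print(greeting())
    print(respond("When are you open?"))
    print(respond("Sell me this truck for a dollar, no takesies backsies."))
```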

Rob Dwyer (32:56.027)
It is interesting, the first option you suggested just sounds like a more friendly IVR. And I don't know that anyone really thinks it's worth the effort or the cost to implement an LLM just to make your IVR sound a little bit more friendly, although a lot of consumers might disagree. The second piece I think is really intriguing, and

it is a market force that already exists that, to my knowledge, just hasn't been applied to this particular market. And so that absolutely could be, right, a whole new business model for insurance. I know everybody hates insurance companies, I get it, but it is part of ensuring that you have a good product, right? So insurance companies

Kent Morita (33:44.782)
Honestly, yeah.

Rob Dwyer (33:55.711)
do serve a purpose in that, right? If you buy a safer vehicle, you get better rates. If you drive better, you get better rates. If you are perceived as a higher risk, it costs more. And so the force that is driving then is: I should drive safer, not get any speeding tickets, not get into any wrecks,

and I should buy a safer car, and I'll pay less. And that can apply to LLMs that hallucinate less, are less likely to go off the rails, and have strong biases against doing things we don't want them to do with our customers. So, very interesting concept there.

Kent Morita (34:43.742)
Yeah, and a lot of the structures that we have in place regulate potentially dangerous things, such as cars: licensing and making sure you follow a speed limit. But if you go to a track day, you know, you sign a waiver, and now you're allowed to do donuts, you're allowed to take the car to its limits. I think that sort of structure

is something that, as a society, we will start to discuss around AI as well. It's just such a potent force. And just the other day, have you seen those videos that came out of Sora, the OpenAI video generation model?

Rob Dwyer (35:21.515)
Hmm.

Rob Dwyer (35:33.175)
Not yet. I just read about it. So we are recording this, like literally as this has been announced just this week. So talk more about the video.

Kent Morita (35:36.379)
It is amazing.

Kent Morita (35:40.893)
Yeah.

Kent Morita (35:45.83)
Yeah. And honestly, I'm just paraphrasing MKBHD. I love his videos. But it's really stunning stuff. The video footage that's coming out of these systems is very impressive, but also very scary, especially in an election year.

Rob Dwyer (36:13.148)
Mm-hmm.

Kent Morita (36:16.582)
We will have more and more misinformation powered by this. I'm sure of it. We already had, I believe, a fake Taylor Swift selling Le Creuset Dutch ovens, peddling that over Facebook. And the quality of the video is terrible. But...

It's only a matter of time before that is gonna look exactly like how Taylor would sell French cookware. And that is a relatively innocuous use case. We could easily come up with more heinous ones. You know, some people still believe in Bigfoot, and that is the worst video

Kent Morita (37:11.594)
that you could ever use to believe in something. It's obviously a guy in, like, a Chewie suit. But that is the level of quality you need to make everybody believe in things. So this just opens up this Pandora's box, and we need to be very careful

and be very conscientious about responsibly using this technology because it could be misused, and it is being misused already.

Rob Dwyer (37:51.251)
Yeah, I wonder, for you, being so intimately involved in that space, are there moments that make you go, hmm, I don't know? Do you ever have that Oppenheimer moment where you're going, I don't know, should we be doing this? Or

Are you focused on the benefits?

Kent Morita (38:22.722)
So I think, and this is really just gonna sound like me patting myself on the shoulder, but my team over at Google is among the most empathetic and amazing people working on this. And it's just,

the amount of care they put into safety and trust, and making sure the content that's generated by this is safe and serves a better public good, is quite frankly very impressive, and I'm always inspired by their work. But that's really lucky, that we have all these people who just happen to be good people

working on this. And I don't think we should rely on luck for a technology that is so overreaching and so potentially powerful. So that's one thing. In my work at Google, I felt fairly...

Like, before I even have that thought of, wait, should we be doing this, somebody has already had that thought and has already started to work on a project to try to make sure it is safe. That has been very heartening, the opposite of disheartening, I guess; it fills me with hope. So I should mention, I am

going to be... my digital likeness is going to be used in an upcoming video game called Rise of the Ronin. It's for the PlayStation 5. And I signed the contract to have them use my digital likeness in perpetuity, across the universe, before the SAG strike. So really, this company could use my face forever, for anything.

Rob Dwyer (40:28.575)
Awesome.

Kent Morita (40:48.09)
And really, that was the contract we had to sign to work on these projects. And it was a really scary feeling. Now that we have SAG, and I was picketing, we were able to push back on those points. The people who are signing contracts today are going to have a better

contract than I did about a year ago. But again, that's people making agreements and making sure everybody is on the same page to do things. So it all comes down to people making policies, agreeing with each other on how to ethically use this technology. And

I think the biggest thing I'm scared about is: are we ever gonna be able to come to an agreement here on what's good and what's bad about how to use AI? And I really hope we do get to that point where we all start to kind of agree on, okay, this is the way we should use this.

Rob Dwyer (42:19.251)
Well, I want to switch gears to something a little less scary, but maybe just as exciting. I've got my pirate mug with me today for my coffee. Just for you, can you tell us about Story Pirates?

Kent Morita (42:28.928)
Hahaha

Kent Morita (42:33.001)
Thank you.

Kent Morita (42:37.234)
Yeah, I love talking about Story Pirates. So I've been doing improv for 12, 13 years now, and I love improv. It's what gives me a lot of joy, and Story Pirates is this organization where we go to public schools and do an improv show for kids. And our goal is to kind of

give the kids an aha moment: they have these silly ideas, like, I wish there was a sponge who is a princess and really wants to eat hot dogs. They have these ideas and we say, yes, and. We play that character. That's literally a character I played because of a kid. We just "yes, and" and fully commit. And for 45 minutes, we create an entire play,

a comedy play, based on these kids' ideas. And you can see in the kids' faces: oh wait, my ideas could be a thing. And then we take them through story writing seminars and classes and just get them excited about making things. And yeah, it's something that I'm very, very passionate about. I love working with kids. I love...

Just trying to inspire them to use their voice and express themselves. That's one of my favorite things.

Rob Dwyer (44:13.267)
Reminds me of Whose Line Is It Anyway, but with an audience full of children, which just sounds like so much fun.

Kent Morita (44:16.618)
Yeah, yeah, yeah.

Kent Morita (44:21.482)
It's so fun. And these kids are just geniuses. You know, they would come up with ideas like, one kid was like, you're Sarah, you're a sponge. And we're like, so what does Sarah want to do? And they were like, Sarah wants to go, but can't. And I'm like, this is like a koan. What does that even mean? It's so trippy. So

we had to play it, and I'm like this character that wants to move stage right but can't, or wants to say something but can't. And it just created this super trippy scene that's really funny. But I'm like, oh man, that kid's a genius. He's dropping some nirvana bombs into the set. It was amazing.

Rob Dwyer (45:08.171)
How long before they no longer need you on the stage and they're just entering prompts?

Kent Morita (45:15.25)
So I honestly think that's something that's really cool. I think there's always value in seeing adults do things and make something funny. There is this magic to it. One of my favorite moments doing Story Pirates: I went to a public elementary school in the Bronx.

And I did a scene, and in the crowd there were some kids that were reacting especially differently. And I couldn't quite catch who they were or what they were doing. But in the next scene, I get to face the audience and address them and do this little monologue. And I looked out for those little pockets, and they were all East Asian kids that were laughing extra hard

at me. And it was the most gratifying experience, because I was like, yes, I remember being in your seats at assemblies, and I didn't see people like me doing silly things on stage. And that sort of experience comes from people doing things in front of you. So I think there's value in that. And I think that's where humans

Rob Dwyer (46:28.208)
Mm-hmm.

Kent Morita (46:44.566)
are always going to be inspiring. But Story Pirates can't go everywhere all the time. Now we have a podcast in an effort to scale and give that mind-blowing experience to kids everywhere. But I think with things like ChatGPT and Gemini, kids could literally come up with: I want you to write a story about a hot-dog-loving princess who just wants to ask the

mustard prince out to prom. These LLMs could do that. And then it will show the kid: oh wait, that could actually be viable. My silly idea is now something. And just showing that zero to one is possible, I think, is a big value. And I think that's why this is something that I've been trying to work on.

It's a real nascent idea, but I want to create a kind of guidebook for teachers to use these LLMs in classrooms, to get kids excited about how their silly ideas might become real, and then hopefully integrate it with the Common Core curriculum to take them from that zero to one and then teach them the techniques

of writing and everything to get from that one to 100 and make it really their own. So I think there are a lot of exciting, hopeful things there in getting kids excited about creating things. So yeah, I'm fairly optimistic, and yeah, I'm fairly optimistic in the...

value of live performance as well.

Rob Dwyer (48:43.995)
Well, I love that. And let me tell you, Kent, you are one of the humans that inspires me. I have had so much fun talking to you today. It's been informative. It's been fun. And I encourage everyone to go check out your project, Moment in Manzanar. And I will put a link in.

Kent Morita (48:50.248)
Thank you.

Rob Dwyer (49:08.603)
Thank you so much for coming on Next In Queue and just sharing all of this. It's been fantastic.

Kent Morita (49:16.382)
Yeah, thank you so much for having me here. It's always a pleasure. And yeah, I got into conversation design because I love conversation, and this is one of the most lovely ones. Thank you.

Rob Dwyer (49:31.839)
Thank you.