Understanding Cultural and Social Nuances in AI Adoption and Human-AI Interaction
by StratMinds
Full Transcript
Soojin Jeong
I'm part of AI UX, leading the user research team. Let me quickly check the audience composition. Who here is a UX professional? Oh, wow. So a lot of UX professionals. Who's a builder or at a startup? Wow. That's amazing. Who's an investor? OK. That's so cool. I've never presented in front of investors, so I wasn't sure what the right points to cover would be. But today, I'm going to introduce what we've learned over the past year to year and a half when it comes to generative AI.
First, I'm going to start with a quick question. If AI were an animal, what would that be?
Honey badger. That was fast. OK, so we're going with honey badger. Why? Because honey badger don't care. Yeah. That's amazing. Someone else?
Chameleon. A chameleon. Why? It's constantly adapting. Constantly adapting.
One more? Octopus. Yes. Why? Because they are intelligent in ways that we don't understand yet. Like, it's smarter than us. It has eight arms to do multiple things. It kind of slips through holes.
Grey parrot? A grey parrot. Why? Because sometimes parrots sound incredibly human, but they're just not. Oh, I see. That's amazing.
OK. That's so interesting. This is the way we ask users about the future of AI. AI is something that, oftentimes, users don't know. They cannot articulate what their needs are, or even what AI is. So we use creative and interesting questions to go deeper into their minds.
Let me go to the next part. AI UX is part of R&D, alongside Google Research and Google DeepMind, and we help lead the technology roadmap for Google. As you can guess, a lot of it is AI. Roadmap work has to happen many, many years in advance, because it requires big investments and leadership has to make decisions. But as you can guess, AI technology is very nascent. So when we make a big decision, it's not just about what the technology can do. We really try to envision what it's going to be like when it lands in people's lives. UX plays a role there: understanding and envisioning technology possibilities and how they're going to be used in people's lives.
I'm going to skip this. OK, so I'm going to start with my personal experience. Before I came to Google, I was at hardware companies, Intel and Samsung. Then I moved to software companies, Facebook and Google. Hardware companies really think through future innovation. They're very serious about it. When they talk about long term, it's 10 to 15 years from now. Why? Because they have fabs. Intel has fabs, and it takes multiple years just to build one, let alone envision what runs on it.
So I was part of an Intel team thinking through the future of computing and what we should be preparing for 10 or 15 years out. That's a very rigorous process. I was very shocked when I came to Facebook and Google: their "long term" is often one or two years, right? It's a very different scale.
I've been a user researcher my whole career. And I remember, way back, even 10 years before the iPhone came out, we studied smartphones a lot. Leadership wanted to understand: should we invest in this? It's going to come, so how should we make it? We did study after study. And again, users cannot articulate their needs for a future thing. We used a lot of stimuli, a lot of different UX examples. And every study came back very pessimistic. Users said, no, I don't want to use this. I have my PC, with a much bigger screen, much easier. Why should I use this small screen? And the quality, it's like, oh, I can't even do that. I can't envision it. Maybe business people, who are always in a hurry, will use it.
So as a user researcher, I had to go to the leadership team and say: you know what? Users don't want it, and I don't know why. Something's missing. There is a technology possibility there, but something is missing, and users don't see it. Then, many years later, the iPhone came. And I felt like, oh my god, now I know what was missing. In all our BlackBerry smartphone studies, our paradigm was all about computing. We tried to put the whole computing UX into the smartphone, into the device. Folders were there, everything we do on a computer; we were just forcing it to fit into the BlackBerry. And it was too much. I mean, why would you use that?
The iPhone made the paradigm shift. They didn't limit themselves to the existing mental model, the existing way of looking at a smartphone as computing. They looked at the whole screen real estate and leveraged all of it. Instead of folders, they came up with apps. These are the moments where I feel that when big changes are coming, we have to think very differently. It takes a paradigm shift. Without it, people stick to their existing mental model, and it's not successful.
With AI, I feel the same. We have amazing technology in AI. But do we really have the right UX, the right UI, for AI that lets people envision the technology's possibilities? Maybe not. That's something I wanted to remind you of.
Here's another question. When we talk about AI, how do we communicate what AI is to people? Even within the same company, there are multiple teams, each preferring a different answer. So we did a user study, not just in the US but across five different countries, because in our qualitative studies we couldn't settle it; people always saw pros and cons on every side. Can you guess, in your mind, what should AI be like? Any specific answer?
Agent. An agent. Why? I'm seeing a lot of conversation around people's expectations that AI will be an assistant, an agent, something that can help you do something. OK, that's good. Something to help you, to act on your behalf. That's right. That's awesome.
Who else? I want to say coworker, but I think it depends on who you're talking to. I could imagine that for a lot of advanced users that's more appropriate: at this point in time, they might leverage AI just to drive inspiration, so a human can then go do the actual job. But it's different if you're asking an average user how much energy they have. Cool. Coworker. OK, good reasoning.
I think those all still fall into the trap of the current mental model. When we started working in VR, most of the industry was trying to simulate, and we leaned into stimulating instead. So what I would like to see AI be is an expansion of the self. Oh, I see. And also a collective consciousness. Something we can't do naturally and organically now. I see. Yeah. Yeah, that's great.
One more? One more? Yeah, I just wanted to say it ultimately depends on the user and the usage. But if I were to add one more, I would add: a tool. Tool. OK, interesting. We'll talk about tools later.
OK. Sorry, I just want to add: these are all very professional, business-oriented cases. But the fastest-growing consumer products we've seen on the investment side are all about companionship. And I think "a friend" is the one that's missing. OK. That's great. Great.
We hear all of these answers from consumers today. So if I give you the answer, after doing qual and quant studies in five different countries: the answer is none of them, and all of them. It really depends on the person. We didn't see company differences. We didn't see country differences at all. We didn't see gender differences at all. It comes from their existing mental model, from their preference for what AI should be for them. It's a relationship for them, and everybody has a different relationship. That's one thing we learned. Go ahead.
Good question. Did you ask people who had previous experience using AI? Yeah, both. Both. Some people are more familiar with AI; they're using LLMs right now. So yes, there were some differences between those who understand AI and those who don't.
But one thing we learned is that, while we talk a lot about what AI is and how we should talk about AI, users are not only interested in what AI is. They're interested in their role. What's my role with AI? What should I do? Am I sitting in the driver's seat? Am I a companion? As an industry, we haven't really talked about that much. We always say, AI can do this, AI is this. But we never talk about how the user and their role are transforming, or what the best way of working with AI is.
Something we emphasize a lot within Google, through AI UX, is that we can keep humans at the center of training, decision making, and action. For the past year and a half, especially right after ChatGPT, there has been so much disruption happening on the technology side. But I have to mention that it's not just the technology side. I see just as much disruption on the consumer side: their attitudes, their perceptions, and their understanding.
So we've got to look equally at not only how AI technology is evolving and moving fast, but also at how consumers' perceptions and needs are evolving. Their mental models are evolving. That's really important. Right now we're very unbalanced: we only talk about the technology. So yes, there's an opportunity to capitalize, but there are also areas we have to get really right.
So today, let's think about AI. Most of the time today, AI sits inside the system, meaning users have to come up with what they want on their own, and they're often not really able to. You go and ask the AI, and the AI comes back to you with an answer.
There is a distance between users and AI. What if we bring AI over to the human side, so that it understands their context, their preferences, their goals much better? I think that's a much better equation. Think about when you Google something: how do you search? So many people... go ahead. Recommender systems are already there, classifying and recommending? That's right, the recommender system is there, recommending things. But we don't have enough room for users to be involved, to refine things in real time, that type of thing.
Yeah, so recommenders do come over to the user side, but there's a difference between a system that analyzes you and then comes back, versus the user and AI working together in real time. We just talked about how people Google. How many words do you use? We see that people use about three to four words. The majority type things like "fly to Seattle" or "best restaurant here," and one word that always shows up is a superlative: "most," "best." Then we have to infer everything from maybe two or three words, right? How can we really fulfill users from that? And oftentimes users don't even know what they don't know; they don't know what their real question is.
So what we're talking about, collaborative AI, is this: how do we bring people to the side of AI, sharing their goals, their intent, their context, sharing more? And the AI as well, not just working one way, taking input, analyzing it, and bringing it back. How can AI also come to the user's side, co-shaping their intent and their questions? How do we do that? That's the concept of collaborative AI.
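To make the contrast concrete, here is a minimal sketch of the difference between one-way answering and a collaborative loop in which the AI spends a turn co-shaping the user's intent. This is purely illustrative, not a design from the talk: `askModel` and `estimateAmbiguity` are hypothetical placeholders.

```typescript
// Illustrative sketch only: one-way answering vs. a collaborative loop.
// `askModel` and `estimateAmbiguity` are hypothetical, not real APIs.

interface Turn {
  kind: "answer" | "clarifyingQuestion";
  text: string;
}

// A stand-in for any LLM call.
declare function askModel(prompt: string): Promise<string>;

// Crude heuristic: short, superlative-heavy queries ("best restaurant here")
// leave the system guessing, so treat them as ambiguous.
function estimateAmbiguity(query: string): number {
  const words = query.trim().split(/\s+/);
  const hasSuperlative = /\b(best|most|top)\b/i.test(query);
  return (words.length <= 4 ? 0.6 : 0.2) + (hasSuperlative ? 0.3 : 0);
}

// One-way AI: take the query as given and answer.
async function oneWayAI(query: string): Promise<Turn> {
  return { kind: "answer", text: await askModel(query) };
}

// Collaborative AI: when intent is unclear, come to the user's side
// and co-shape the question instead of guessing.
async function collaborativeAI(query: string, context: string): Promise<Turn> {
  if (estimateAmbiguity(query) > 0.7) {
    const question = await askModel(
      `The user asked: "${query}". Known context: ${context}. ` +
        `Ask one short clarifying question about their underlying goal.`
    );
    return { kind: "clarifyingQuestion", text: question };
  }
  return { kind: "answer", text: await askModel(`${context}\n${query}`) };
}
```

The design point is the branch: when intent looks ambiguous, the system invests a turn in refining the question with the user rather than inferring everything from two or three words.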
And as we discussed earlier with the BlackBerry-to-iPhone shift, when it comes to AI, the UX for AI has not been invented yet. We really feel that this is an opportunity. But we also have to keep reminding ourselves not to approach it with the previous computing mindset. I'm always asking my UXR team members: are you testing the new AI product with the existing methodology, the existing computing mindset? Because this is very different.
The chat interface is important. I think that was the first time people started to realize the benefit of AI. It's interesting, because I've been in AI for a while. A long time ago, maybe 10 years ago, when we asked people "what is AI?", they talked about robots or Frankenstein, something very different. Then maybe five to six years ago, people started to talk about assistants, like "Hey Google," Alexa, Siri, as AI. Now, when we ask people what AI is, they all talk about LLMs: ChatGPT, Bard, things like that. However, the chat interface is just the beginning of AI UX. There must be a better one, a multimodal one, and we've got to really envision that.
I was very happy to hear the CEO mention that the indicator of a good AI product is going to be UX. I totally understand, because there is so much UX can help with. So what is the future of AI going to be? We feel that post-generative AI is collaborative AI. Think about it: we went from conversational AI to virtual assistants to generative AI. That took seven years; it's not a new concept. But when we study people using generative AI, we also hear a lot of: "You know what, I feel like it goes too far. I give it one ask, it generates something, and then I have to reprompt, I have to learn prompting, I have to reshape it. I don't feel like it really gets me." That's not the ideal form of AI, right?
So true AI, ideal AI, is AI and humans working together, interacting with each other, defining goals together, and accomplishing them together. Something we need to think about is the problem we are solving. The problem we are solving is that users don't know how to collaborate with AI. And that's a big issue, because AI has so much possibility. Someone mentioned the octopus: we don't even know today what it is capable of. Even the best AI engineers within Google will tell you: AI is not an invention, it's almost a discovery. We learn every day what is possible.
So there are certain things we don't know. Users should see that possibility at the same level as the scientists, as those of us making it, but they don't, because they see AI as another improved version of computing, which is not true. Someone mentioned the tool; many users see it as a tool. But a tool doesn't evolve, right? It's not something you nurture and grow over time. AI becomes so much more powerful when it gets to understand you, when it works with you. AI can do so much more when you let it know about you and your goals, and when you let it act on your behalf. But that requires trust and time.
Misalignment between user expectations and AI's actual capability is another big thing. When we think about product quality, perceived quality is roughly reality minus expectation. If your expectation is really high and the reality falls short, the perceived quality isn't there. Technology often does a bad job here, and AI is one example. We talk about AI so futuristically, as if AI can do everything. By the time people use AI in some concrete form, they feel: no, that's not true, it's not ready. And that gives them a poor perception.
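Stated as a back-of-the-envelope formula (the notation is ours; the talk states it only in words):

```latex
% Perceived quality as the gap between delivered reality and prior expectation.
Q_{\text{perceived}} = R_{\text{delivered}} - E_{\text{expected}}
% Q < 0 (reality falls short of the hype) reads as poor quality even when R is
% objectively high. For AI products, R also grows over time while E stays
% anchored to the first disappointing encounter -- the point of the next section.
```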
With a typical product, the mental model and the system's capability both move pretty slowly, right? They don't shift quickly. But AI products are constantly evolving. You use your AI today and you're disappointed; maybe one month later, if you used it again, it would be totally different. But people don't come back, because they feel: I already tried it, it wasn't good, it's not for me. So how do we teach users about this evolving character of AI? And how do we set the right expectations from the beginning, so people don't feel it's a failed experience, and are willing to provide guidance and grow it together?
So the overestimation gap is where "I'm disappointed" usually happens. And with the underestimation gap, people say: why would I even try it again? I already did. These are different communication challenges, and you should think about both. So defining what collaborative AI means is really important. Every company should have its own definition, internally, so that you can build toward it. What you mean by "mental model" is also something the user researchers and the UX team should get together on and share an understanding of.
Something we have to think about is: what is the current expectation of the AI, and what is the AI's capability? If there is a gap, what is that gap, and how do we fill it? I think that's important. On the right side of the slide, there are a lot of different ways of studying a product or studying user needs. In many cases, we look only at the bottom level: what are the pain points, why is it working or not working, small usability issues. But with this big paradigm shift, we've got to look from the mental model side, because the industry is shifting in a big way, yet nobody is really shaping users' mental models, together and correctly. We just leave it as the users' job, but it isn't. It's totally different; think for the users.
So I want to mention that, from my user research perspective, talking to users and watching how things move: solving the AI technology doesn't solve the current AI issues among users. The AI issue is that people don't adopt AI, and there are multiple reasons. I've been through this; I was at Samsung, at Intel, and at Meta. I saw a lot of product cycles, even very advanced ones. Usually there are early adopters, then a chasm before the early majority, and then you move to the next level. AI is one of the rare cases where the chasm is happening at a very early stage. Even tech-savvy people find it hard to get in, and we've got to really think about what that gap is and how we fill it.
People don't know how to collaborate with AI. Is it their fault or is it ours? We're bringing them a totally new, paradigm-shifting product, and we don't really explain their role. So again: what is their role? How do we guide them? Trust building is actually everything.
Because AI's potential is only realized when people give it permission: "I'll let you know my goals, I'll let you know my intentions. This is what I'm doing, or can you help me define it?" And it's also about letting it act on my behalf. Think about it: if AI doesn't have the authority to act on your behalf, its possibilities are limited too. The biggest possibilities come when users grant permission to know and permission to act on their behalf.
That's not happening because people haven't built trust, and oftentimes our products just expect them to hand everything over right away, at the very beginning. So we have to design the user journey around how we build this trust over time, step by step.
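One way to picture that step-by-step journey is as permissions that unlock as trust accrues through small wins. This is purely our sketch; the names and thresholds are hypothetical, and the talk doesn't prescribe an implementation:

```typescript
// Illustrative sketch of a staged trust journey: small wins accrue trust,
// and higher trust unlocks broader permissions. Names and thresholds are
// hypothetical, not from the talk or any Google product.

type Permission = "suggest" | "knowContext" | "draftForReview" | "actOnBehalf";

// Each stage requires more accumulated trust before it unlocks.
const STAGE_THRESHOLDS: Array<[Permission, number]> = [
  ["suggest", 0],        // day one: only suggestions
  ["knowContext", 3],    // after a few good interactions: share goals/context
  ["draftForReview", 8], // later: produce work the user reviews
  ["actOnBehalf", 20],   // eventually: delegated, autonomous action
];

class TrustJourney {
  private trust = 0;

  // A "small win": the user confirms the AI got it right.
  recordWin(): void {
    this.trust += 1;
  }

  // A miss erodes trust faster than a win builds it.
  recordMiss(): void {
    this.trust = Math.max(0, this.trust - 2);
  }

  grantedPermissions(): Permission[] {
    return STAGE_THRESHOLDS.filter(([, t]) => this.trust >= t).map(([p]) => p);
  }

  mayActOnBehalf(): boolean {
    return this.grantedPermissions().includes("actOnBehalf");
  }
}

// Usage: delegation is earned, never assumed at first contact.
const journey = new TrustJourney();
console.log(journey.mayActOnBehalf()); // false on day one
for (let i = 0; i < 20; i++) journey.recordWin();
console.log(journey.mayActOnBehalf()); // true only after sustained wins
```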
The paradigm shift we talked about is an opportunity to redefine things. It's a very confusing stage, but also a very, very exciting one. I have a daughter, and when she was little and we played games, whenever she was at a disadvantage she changed the rules. But the industry does that too, think about it. All these disruptive products and industries change the rules. Mobile was like that, and AI is like that too.
GenAI right now is really popular, and it did a great job of letting people actually experience AI, but it's not there yet; it's not the ideal case. How would you change the rules of the AI game? How do you make that happen? That's something we should think about, and we have to move really fast.
One thing I remember from the Samsung case: when I was at Samsung, we would go to the executives and say, hey, these are the trends, and we see that we have to do this. Often they looked only at the risk of acting: OK, what if it goes wrong? And we had to remind them every time: the risk cuts both ways. It's risky to do this, but it's an even bigger risk not to. You have to weigh both consequences together. There are multiple examples in the industry of companies that didn't move fast enough at the right time and failed. So we've got to think about how we help leadership see that.
Trust is another interesting thing. I didn't bring the full list, but this is one example of how we work with our engineers, UX designers, and the broader team on how to build trust. There are examples of what harms trust and what builds it. Building trust sometimes comes from simple things: source verification, the ability to cultivate it over time, revisability, transparency, things like that.
One interesting thing we learned is that trust actually builds when you have friction. People expect friction, especially at an early stage. Think about it: if you've known your best friend for more than 10 years and you've never fought with them, is that really a meaningful friendship? No, you bond after you've had some friction. In the same way, users at the very early stage of the AI journey test it; they want some sort of friction as the two sides try to understand each other: the AI understanding me, me understanding it.
So as designers and product people, think about how you design friction in a very thoughtful way from the beginning: to earn trust, to build a relationship. A lot of small wins help build that relationship. In the end, if there's enough trust, the user can delegate. But going step by step is something you have to think about.
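As a minimal sketch of what "thoughtful friction" could look like in code, assuming a hypothetical confirmation UI (`confirmWithUser`) and a model-supplied confidence estimate; none of these names come from the talk:

```typescript
// Illustrative "thoughtful friction" sketch: before acting on the user's
// behalf, the AI discloses its confidence and limits and asks for an
// explicit confirmation. All names here are hypothetical.

interface ProposedAction {
  description: string;   // e.g. "Book the 9am flight to Seattle"
  confidence: number;    // the model's own estimate, 0..1
  knownLimits: string[]; // what the AI knows it could not verify
}

// Stand-ins for whatever confirmation UI and executor the product uses.
declare function confirmWithUser(prompt: string): Promise<boolean>;
declare function execute(action: ProposedAction): Promise<void>;

async function actWithThoughtfulFriction(action: ProposedAction): Promise<void> {
  // Low confidence: admit it instead of over-promising.
  if (action.confidence < 0.5) {
    await confirmWithUser(
      `I'm not sure I can do this well: ${action.description}. ` +
        `Want to refine the request together first?`
    );
    return;
  }
  // Otherwise surface limits and confirm; the friction itself builds trust.
  const ok = await confirmWithUser(
    `I plan to: ${action.description}.\n` +
      `Things I could not verify: ${action.knownLimits.join("; ") || "none"}.\n` +
      `Shall I go ahead?`
  );
  if (ok) await execute(action);
}
```

The friction here is deliberate: a confirmation step that also teaches the user what the system can and cannot do, rather than silently guessing.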
Okay, so I'm almost done. The key to unlocking human-AI collaboration is trust. What we do a lot is look at the human side: to really understand human-AI interaction, we have to understand how people interact with each other, and build from that. Then we apply it to AI. A lot of AI right now sits in the bottom-left part of that space, but collaborative AI is not there yet.
So when we think about it, collaboration is more than conversation, more than back and forth; it's not just about answering. I always bring up Oprah Winfrey: the difference is that it's not "you ask something and I immediately answer back." It's envisioning a narrative, building better communication together. So we talk about those things.
I'm going to end with this: past computing was all about an operator-tool relationship, because the user had to know exactly what they wanted and then use the tool. The future with AI is not an operator-tool relationship; it's a partner-to-partner relationship. And it's not only about shifting our perception of AI from tool to partner; we've also got to help users shift their perception of themselves from operator to partner. How do we do that? Okay, I'm going to end here. Thanks. I think we can take some questions.
Yes, yes. So let's take two questions, and then we're going to have a break so you can ask more. Yeah, so let's do, oh gosh. I just have one quick clarification on your slide. You said "too human." Can you expand on that, please?
Yeah, that's something we discuss a lot. Users don't want AI to be too human. They want AI to be neutral, or to be its own AI thing. AI itself is very unique; don't try to imitate humans, it feels unnatural. Yeah.
I think I can add something to this, actually. In robotics, there is this concept of the uncanny valley: when something looks almost like a person, people start to get scared. We talked about that in simulated environments too. So yeah, anyway. Thank you, great talk.
The trust point you brought up is a really fascinating topic, and it's part of a broader question: how do we come to trust at all? Anyway, I'm super curious about that shift, from operator to partner-to-partner. I wonder what that would look like, and it relates to feedback: how do we build a system that gives the user the opportunity to understand, "Oh, at this point I'm teaching you too. I'm giving you my feedback, making you better, and then you're becoming my partner"? I wonder what your thoughts and experiences are.
Yeah, it's a great question, and that's the competitive edge of each company; there's a lot of room for UX to play a role. For example, within Google we go really deep on this. AI asking questions isn't about always asking questions: we have to understand when the right timing is, how to ask, and what to memorize; it's not that everything should be memorized, right? There are a lot of tactics, and we have to look at it stage by stage. It's not a one-size-fits-all thing. Building a relationship and the first encounter are very different sets of situations we have to handle, and a lot of UX designers and UX engineers work together, testing and iterating. So when you assess an AI company or an AI product, look at the UX; that's the indicator of how thoughtful they are. Because in the end, it's about users adopting it, not just about technology. I've seen multiple examples of amazing technology failing because people didn't see its possibilities. Yeah.
So we're going to take the last question. Yeah, I'll speak loudly. Great talk, really appreciated everything. Picking up on the trust point, it's super interesting that you talk about building trust requiring friction. My background taught me that trust is about delivering on expectations recurrently; through establishing that behavior, I know I can go to this service or this product and get the thing I'm expecting, and that's trust. So I'm really interested in digging into the friction piece, because often friction is frustrating; it throws us into dealing with a problem when we want to be somewhere else. It almost takes us from System 1, the low-level automatic mode, into "oh, I've got to deal with this." So how do you balance how much friction you introduce to build trust without it destabilizing the relationship?
Exactly. That's why it's not just friction, it's thoughtful friction. It brings people's attention to the moment, but thoughtfully educates them about what the system or the AI can and cannot do, and it confirms things. It's a very human thing to say "I don't know" or "I cannot do that." Oftentimes AI over-promises and gives wrong answers. So a lot of friction is about how the two sides confirm each other, and sometimes about surfacing blind spots for users. Users really appreciate that too: "It wasn't exactly what I asked, but by seeing the blind spot, I realize I can think of it this way too." So thoughtful friction is not an easy thing, and that's why it's a real competitive edge when UXers, engineers, and product people work together on how to do it.
And to that point, it's about having the human in the mix, the human researcher, the human designer, in amongst the data we heard about in the previous talk, because you need that human layer on top to be thoughtful about how you interpret the data, right?
Yes, exactly.
Okay, so there was one final thing you said you wanted to ask the audience.
Yeah. I wanted to ask you guys, have you ever said good morning or thank you to ChatGPT or Perplexity?
Yes, all the time.
So, maybe it's already happening. When I actually say "I'm grateful," "thank you," "this is great," it brings nothing to the conversation, aside from the fact that I'm acknowledging those goals in my partner. But I also do it so that they remember us. Yes, I do. I just wanted to add a little.
But why are you doing that? They don't care, right?
I'm doing it because I don't see it as just a tool. I think partnership is actually each partner having their own set of tools, bringing tools out and then kind of using the same tools together. So, yay, great. But, of course.
So, one thing I envisioned for all these amazing people getting together is this spot.
At StratMinds, we stand by the conviction that the winners of the AI race will be determined by great UX.
As we push the boundaries of what's possible with AI, we're laser-focused on thoughtfully designing solutions that blend right into the real world and people's daily lives - solutions that genuinely benefit humans in meaningful ways.
Builders
Builders, founders, and product leaders actively creating new AI products and solutions, with a deep focus on user empathy.
Leaders
UX leaders and experts - designers, researchers, engineers - working on AI projects and shaping exceptional AI experiences.
Investors
Investors and VC firms at the forefront of AI.
AI × UX Summit by StratMinds
Who is Speaking?
We've brought together a unique group of speakers, including AI builders, UX and product leaders, and forward-thinking investors.
Portal AI
Ride Home AI fund
Google Gemini
Metalab
Slang AI
Tripp
& Redcoat AI
Stanford University
Google DeepMind
Grammy Award winner
Google Empathy Lab Founder
Blossom
Lazarev.
Chroma
Resilient Moment
Metalab Ventures
of STRATMINDS