The Culture Kit with Jenny & Sameer

How to Cultivate the Human-AI Sweet Spot for Innovation

Episode Summary

How can leaders put AI to work without stifling human creativity and innovation? Berkeley Haas organizational culture experts Jenny Chatman and Sameer Srivastava are back for season 3 of The Culture Kit! The season kicks off with Hila Lifshitz, a Professor of Management at Warwick Business School and head of The Artificial Intelligence Innovation Network. She’s also a visiting faculty member at Harvard University’s Lab for Innovation Science (LISH). Jenny, Sameer, and Hila dive into her pioneering research on open innovation at NASA, revealing how the agency transitioned to an open innovation model and the significant cultural shift it required. They also discuss new research with fashion company H&M that revealed a common pitfall when implementing AI, and how to avoid it.

Episode Notes


3 main takeaways from Jenny & Sameer’s interview with Hila Lifshitz

  1. Think like a scientist and use an experimental mindset rather than an optimization mindset. Managers should understand that we’re still in the early days of AI and stay flexible about how these tools might fit into their organizations.
  2. Keep pushing on the expertise of your people: Ask them what they are good at, what they want to be good at, and how the organization can set them up for success.
  3. Allocate resources for this expertise: How can the organization lean on these areas of expertise to push the boundaries of innovation even further—while using AI for lower-level tasks?

Show Links

View Transcript for "How to Cultivate the Human-AI Sweet Spot"

Episode Transcription

(Transcripts may contain a few typographical errors due to audio quality during the podcast recording.)

[00:00:00] Jennifer Chatman: Welcome to The Culture Kit with Jenny and Sameer, where we give you the tools to build a healthy and effective workplace culture. I’m Jenny Chatman.

[00:00:14] Sameer Srivastava: And I’m Sameer Srivastava. We’re professors at UC Berkeley’s Haas School of Business and co-founders of the Berkeley Center for Workplace Culture and Innovation.

[00:00:26] Jennifer Chatman: On today’s episode, we’ll hear from Hila Lifshitz, a professor at Warwick Business School and Harvard's Lab for Innovation Science who is studying how to best employ AI to enhance innovation—without dampening human creativity and ingenuity.

[00:00:43] Hila Lifshitz: We want people to be engaged. We've been striving and doing so much to get engagement and to have people feel this desire to champion their ideas. How can we have an innovation culture if people don't want to champion, like, their own ideas, if they don't feel it's theirs? And that's one of the biggest risks that people don't realize they're taking right now by strongly pushing the use of AI.

[00:01:06] Sameer Srivastava: Hey, Jenny!

[00:01:07] Jennifer Chatman: Hey, Sameer!

[00:01:09] Sameer Srivastava: Well, welcome back to the third season of The Culture Kit. I guess this makes us veteran podcasters now.

[00:01:14] Jennifer Chatman: I guess so. It seems like everyone who's cool has a podcast now. I sure hope we're cool, too. Are we?

[00:01:21] Sameer Srivastava: I think I'm not going to touch that one. But what we do want to do in this podcast is to help people, help leaders and organizations be more effective by paying attention to the culture of their organization.

[00:01:32] Jennifer Chatman: Very true. But what I really love about doing this podcast with you, Sameer, is getting to have these incredibly interesting conversations. Our last season closed out with a great episode on the relationship between art and innovation. And it also touched on using AI to unleash creativity. It was absolutely fascinating.

[00:01:54] Sameer Srivastava: I completely agree. And listeners who are interested in that should check out season two, episode five, if they missed it. But there are lots of fears also floating around these days about AI and how it might stifle human creativity. So, today, we're going to turn to a question that's quite related to the conversation from last time, which is about how we can build an innovation culture in the age of AI. How do you really leverage AI to enhance the creativity of your organization?

[00:02:19] Jennifer Chatman: Right. And to help us answer this pressing question, we're joined by Professor Hila Lifshitz of Warwick Business School, whose groundbreaking research has provided some crucial insights into how organizations can adapt to new paradigms in problem solving. She has this longitudinal study, super impressive, of NASA's journey toward open innovation, and it offers some valuable lessons for organizations that are navigating today's AI revolution. This is the focus of her current research.

Welcome to The Culture Kit, Hila.

[00:02:55] Hila Lifshitz: Thank you. So happy to be here.

[00:02:58] Sameer Srivastava: Great to see you, Hila. And I should mention that Hila and I overlapped when we were in our respective Ph.D. programs. So, we've known each other for quite some time. And so, Hila, I wanted to start off by talking about your award-winning research that Jenny mentioned on open innovation at NASA, because it really sets the stage for what we're about to talk about next.

And I should say that the term “open innovation” was really coined by Berkeley Haas' very own Henry Chesbrough, particularly in his 2003 book on open innovation. At the time, Chesbrough was teaching here when he developed and popularized this concept. But Hila, how do you think about and define the term “open innovation”?

[00:03:36] Hila Lifshitz: Oh, the term, I think the problem is that when Henry did it, it was very clear, because innovation was closed in many organizations, meaning it was mostly done within the boundaries of the organization, maybe with a few external collaborators at some ecosystem level. Moving from that to something more open is a very clear construct, very clear. But over time, I would say, today people actually use it, unfortunately, for anything that is more than talking to one person: then we're open. So, I often have to remind people that open innovation truly means to open up the boundaries of your process to also unknown individuals or entities, meaning distributing the innovation process beyond the boundaries of those you're used to collaborating with. So, I tend to distinguish between collaborative and ecosystem innovation, which is what most companies do, and open and distributed innovation.

[00:04:33] Jennifer Chatman: Oh, that's so interesting, Hila. So, your research at NASA examined their transition from a closed to an open innovation model. And this must have involved a really massive cultural shift. Can you walk us through what prompted the shift and what people's reactions were within the organization?

[00:04:52] Hila Lifshitz: Definitely. And NASA is a classic example of what we just discussed. They are an open organization in the sense that they were always collaborative. They never worked only within the boundaries of their organization. But as I said, there is a big difference between having your innovation process be collaborative with people that you know in advance, whose identities you know, and whom you have contracts with, versus what they did here, where they took their strategic challenges and said, for one year, we're going to keep on doing it in our collaborative ecosystem model, but in the same year, we're going to also try to open it to whoever, through online open innovation platforms. So, this is much more the open source model in science and technology. They don't know who's going to come and try to solve those problems, but the same strategic R&D challenges that they were working on internally, they put externally.

And I'm emphasizing this because this is brave. What most companies do, unfortunately, is they don't take their strategic challenges. They take a nice-to-have challenge. And then even if they have an effect, it's not a dramatic one. But when you take the strategic challenge that your company and your people and your experts are working on, and then you try to open it, then it's impossible, almost, to ignore the results when they are successful.

[00:06:07] Sameer Srivastava: But Hila, you've talked about how opening up these very strategic core challenges to the outside world also posed a threat to the professional identity of many of the NASA scientists and engineers. So, as you think about what really distinguished the teams that successfully embraced open innovation from those that didn't, what were some of the distinguishing factors?

[00:06:28] Hila Lifshitz: That's a great question. So, to connect it to what I just said, performance-wise, this was a success. But as you said, on the level of how people experienced it, it was very hard. It was very challenging. Many people saw it as a threat, and in my study we had 100 people, and there was a spectrum of reactions. On one end, there were people that were really talking about it as a slap in the face. They were insulted by the fact that now their managers wanted them to open their innovation channel, because this is what, you know, made them want to go to NASA. This is why they're here. They want to be the ones that innovate. They want to be the problem solvers. Every time that they have a problem, to put it outside for someone else to solve, “This is like cheating,” they would tell me. And on the other end of the spectrum, we had scientists and technologists saying, “Wait a minute. Maybe the way we've been thinking about how we work is wrong, and we need to transition from the lab being our world to the whole world, all of a sudden, being our lab. We can work with anyone from anywhere. And we should stop thinking about ourselves as problem solvers, but instead think about ourselves as solution seekers. And we're seeking solutions that can come from our lab, from a collaborator, from the ecosystem, or from someone we've never met and never heard of through an open innovation platform.”

And that was the pivot. That was the transition point that distinguished those that were able to successfully embrace open innovation from those that struggled. The nice thing is that it wasn't top down, it was bottom up. It emerged in the arguments between the scientists and the technologists. They literally talked about it. It's not my term or the managers' term. They said, “We need to change, and this is how we need to rethink ourselves.”

[00:08:06] Jennifer Chatman: Yeah, so solution seekers. Let's now think about how this applies to organizations adapting to AI tools today. I mean, are there parallels?

[00:08:17] Hila Lifshitz: Definitely. So, the only difference, I would say, there are two differences. The first difference is, okay, instead of opening it up to an open innovation platform where other individuals can come and solve your problem, now you're using tools where it's not necessarily individuals but machines and technologies that are giving you potential responses.

What does it mean? It means a couple of things. First of all, some people, initially, two years ago, thought you can, kind of, ignore it more easily, like, maybe no one knows, because it's not another individual that you're paying or, like, making promises to. But actually, we're seeing the opposite, because managers are mandating the adoption, almost, of these tools for R&D and innovation organizations.

So, they don't have the same choice. With open innovation, people have more choice to decide whether to embrace that potential model in many companies. But here, I see a very strong push from the management to adopt those tools in your R&D process. And there are good things about it, but also bad things. But in the sense of what it does for the professional, it's very similar, because it threatens them, it challenges them to rethink who they are. And they need to go through this refocusing, as I call it, this process of moving from “how I do my work” to “why I do my work” and then rethinking my role. That's what led NASA folks to move from being problem solvers to solution seekers, because they cared about finding a solution. At NASA, they have a very clear mission. They want to get to Mars. So, in their arguments, that's what people said. They said, who cares? At the end of the day, science is about finding the truth. We want to get to Mars. Who cares where the solution comes from? Let's rethink our role.

The problem with companies these days is that they don't necessarily have that higher mission so clear for everyone. So, they are over-attached to the “how.” And once you bring in, you know, LLMs and generative AI to threaten this “how,” then it's very hard to do this refocusing work and to rethink your role. And here, we do need managers. And we need the innovation culture. And we need engagement, both top down and bottom up, to really rethink the mission and the roles.

[00:10:20] Sameer Srivastava: So, an interesting theme that I think runs through both your NASA work and your more recent work is that people orient toward new technology in different ways and think of themselves as different types of problem solvers. I want to pick up on that theme in your more recent work. How should organizations really think about how to combine human and AI capabilities in ways that enhance their innovation capacity? And are there principles for how to break down problems that are best suited to each type of problem solver?

[00:10:49] Hila Lifshitz: That's a wonderful question. So, I have a recent field experiment and field study with Boston Consulting Group, the Henderson Institute, their internal research group, and with a wonderful group of co-authors, such as Karim Lakhani from Harvard, Ethan Mollick from Wharton, Kate Kellogg from MIT, Fabrizio Dell'Acqua, Steven Randazzo, and the BCG folks.

And what we see there is that there were three types of behaviors, which I like to name as if they were three types of people, but you can actually be the same person and, throughout your day, exhibit all three types, which we call “centaurs,” “cyborgs,” and “self-automators.”

So, let me give you a quick example, and this will make it clearer. The task they had was to analyze data and to make a recommendation to a CEO on whether to invest in a fashion brand in the women's, men's, or children's segment. And based on our data, there was a right and a wrong answer, because we designed it that way. This was a simulated task, similar to their day-to-day job.

And what most people do today is they copy-paste. And the self-automators, I call them self-automators purposefully, to provoke us, because most of us, when we use these tools, kind of copy a specific task that we have, put it into the tool, get the answer from it, do a little bit of polishing toward our own style, and then paste it. And the problem with that is that, over time, we're basically automating ourselves out of those capabilities. And we saw it in the data quantitatively: these people made more mistakes. As I said, there was a right or wrong answer.

The other two types, which are interesting and definitely were more accurate and did a better job, were the centaurs and the cyborgs. Centaurs are people who are experts, have a sense of expertise, and want to keep that expertise, and they are willing to use these tools, but only to empower them, to augment them, to make them better and faster at what they do. They're not letting go of what they're good at. So, for instance, they would not let GPT do their analysis, because, like, “I know how to do my analysis. I just need you to give me the benchmark, help me learn faster, make the presentation more persuasive, etc.” But the core of their task, they did themselves.

And the cyborgs are these experimentalists who want to work with the technology and test all its boundaries. So, they're actually giving up a little bit on their current skills, but they're learning new skills of how to work with it, how to do prompt engineering. So, it's really interesting to see, kind of, how there is more freedom today in how we use this technology than we had with prior technologies.

[00:13:21] Jennifer Chatman: What an amazing collaboration. I'm wondering if you could explain what the patterns that you surfaced from the study suggest about how organizations should think about fostering a culture of innovation in the AI era. Like, how should leaders think about training and supporting these different styles of human-AI collaboration?

[00:13:40] Hila Lifshitz: That’s a great question. So, I'll answer on two levels: first, what you asked about the training, and then on what they should do with the culture. On the training, I have to share, because this is, kind of, a fun fact about our study, let's put it this way. Ethan Mollick, if you're hearing me, don't get angry that I'm sharing this. Ethan did a seven-minute brief training on best practices of prompt engineering, which we hoped would help people do a better job in their prompt engineering, but it, kind of, backfired. And that's one lesson we've learned: a brief training is not helpful when it comes to using AI. You actually need to do deep training. There is no shortcut to doing a good, thorough job of training on AI. Because what happens with the brief training is that people get an inflated sense of confidence, and it actually led them to make more mistakes. They overtrusted the technology. They did this copy-pasting that I talked about. We see that behavior more, actually, with the people that made mistakes. So, only 60% were right when they had GPT plus the training, versus 70%, you know, with GPT alone, versus 84.5% without GPT at all.

So, like, we do see that this tool is dangerous when you use it to make critical decisions. The second thing is the culture. And that's really important, because we want people to keep their sense of expertise, their sense of ownership of their ideas, and also, kind of, the capabilities themselves, because these capabilities, as I said, we can lose them. So, it's both subjective and objective. The centaurs, for instance, are the ones making sure that they're not giving up on their capabilities. And they have this innate sense of expertise. So, I think that sense that cultures are cultivating is very important to stress. And what is your expertise? For instance, when I do workshops now with organizations, I tell them, “Let's make a list. What are the capabilities that you love having, that you think you're an expert on and a master of? Which skills are you actually happy to get rid of, ones you've been doing because it is what it is? And which skills that you don't have might you want to acquire? Like, what are your new dreams, your new goals? Let's be more ambitious, because some of the simple things we can automate now.”

And that helps move the culture. So, if you are coming with this kind of empowerment culture, as Sameer asked me at the beginning, how can we leverage it? And then you realize that you don't want to use it for the things that people are experts on right now, at least in this stage. You want to use it to automate the simple things, but you want to bring new goals and new skills in order to push us higher and to do more than what we've done before.

[00:16:18] Sameer Srivastava: So, this BCG study has really been a model, a template, almost, for a whole body of research that has emerged since then, trying to understand the conditions under which generative AI tools can enhance versus get in the way of productivity. Many people are now working in the space, including some of our own PhD students, like Nick Otis, who is on the job market this year.

But Hila, I wanted to actually switch gears and talk about another of your recent studies. This is with the retailer H&M on leveraging generative AI for innovation. And here I know you've really looked at the question of how to do so without impeding employees’ sense of ownership and their own desire to champion ideas. Tell us a little bit more about what you did and what you found.

[00:17:00] Hila Lifshitz: Yes, this is a wonderful, kind of, I would say, continuation of your question about culture, Jenny, right? So, H&M, I didn't know this, but it has an amazing culture internally, I have to say. Their people and their employees truly feel a strong sense of belonging to their organization. And they believe in their vision on sustainability.

And I got to them because of the basic fact that I started learning about sustainability and realized that half of the garbage in the world basically comes from the fashion industry, and a lot of it is fast fashion. But H&M, apparently, has been one of the leaders. But what can a leader do? And how do you sustain a healthy culture when, on one hand, you're telling, you know, your people and your employees that we want to be number one on sustainability, while on the other hand, your vision is to sell, first of all, more units, which we know creates more pollution, and to sell them cheap?

So, there is a strong contradiction within H&M's culture. And what we collaborated on was, how can we leverage AI to help with this challenge? So, you can actually take challenges that are both strategic and culture-related, because this contradiction impedes the culture. If people feel their vision on sustainability is not real, then they're disengaged or they become cynical. If they feel the company is serious about it, then they are willing to put much more of themselves in and have this kind of sense of ownership about, you know, what they need to do, and to champion some new ideas.

So, H&M, in one of their local countries, was brave enough to talk to us about their core challenge, and not only H&M's, but all of the retail industry's, which is: what is the future of sustainable retail? One option is we just close the stores and everything goes online, because, you know, digital is taking over. But it's a shame. We have all these people and this real estate. And we can reimagine the future of retail. So, that's what we were trying to say: let's reimagine the future of retail when it's not about selling units anymore. Imagine a sustainable future and give us some ideas. The managers wanted the employees to know how much H&M is committed to this culture and to this vision. We wanted the employees to bring ideas, and we got the commitment from the management to also apply one of these ideas in one of their future branches.

So, this was, kind of, an ideation challenge that is still ongoing now. They have chosen three winning ideas, and they're going to implement them. And behind the scenes, we also trained them, but we told them we're going to play with this and do an experiment on how and when you use AI, because it's no longer a question of whether to use it today. It's more when and how.

So, can I ask you, can we play a little bit? What would you guess? Can I tell you the conditions and you'll guess?

[00:19:40] Sameer Srivastava: Sure. Go ahead.

[00:19:41] Hila Lifshitz: Okay. So, what we thought was, let's make the control condition that everyone has AI. So, we gave them the task to ideate about the future retail branch, and we said you can use AI whenever you want. That's the control, because that's what I believe is now the case in most companies.

And then, for one group, we manipulated it so that for the first five minutes they think alone, with no tools. Then they can use it. The second group starts with GPT, but then at the end, in the time when they need to refine their idea and submit it, they have five minutes alone. And what we wanted to test was the impact on the sense of ownership of the idea and their desire to champion this idea internally, to get resources, to own it, and to go to their managers. Which one, you think, had the strongest sense of ownership? The ones that had five minutes on their own first, the ones that started with GPT and had time alone at the end, the ones that constantly worked with AI, or maybe there was no difference?

[00:20:32] Sameer Srivastava: I would have guessed people who had time to think about their ideas on their own after getting input from GPT.

[00:20:38] Jennifer Chatman: Oh, I was going to say the other condition.

[00:20:42] Hila Lifshitz: Tell me.

[00:20:42] Jennifer Chatman: When they could develop the ideas on their own first, and then consulted with ChatGPT.

[00:20:47] Hila Lifshitz: So, Sameer, you're thinking first to work with GPT and then in the end to have some time for yourself?

[00:20:53] Sameer Srivastava: Yeah.

[00:20:53] Hila Lifshitz: And Jenny, you're thinking the opposite. And that's what I loved about this study, because we really truly, you know, had competing hypotheses from the literature. So, what we found was Jenny is right, like, big time.

[00:21:03] Sameer Srivastava: Oh, geez!

[00:21:04] Hila Lifshitz: Statistically significantly right. And I tried it in the lab with my students, and now with H&M employees. So, this result holds. It's unbelievable how strong this result is. And it's there even if you just give people five minutes, so it's not about the amount of time. What we found now that we're going deeper is that the mechanism is the seed of the idea. If you had enough time to start developing the seed of the idea, then even if afterwards you made it more comprehensive or deeper, or you changed it a little bit while working with AI, you still feel it's yours. And truly, to a certain extent, you know, there's no objectivity here, but subjectively you feel it's yours. And then you're willing to put your heart into it and to work on it. If you don't, and if you feel it was GPT's idea, then it's not yours.

So, back to the question about organizational culture: we want people to be engaged. We've been striving and doing so, you know, so much to get engagement and to have people feel this desire to champion their ideas. How can we have an innovation culture if people don't want to champion, like, their own ideas, if they don't feel it's theirs? And that's one of the biggest risks that people don't realize they're taking right now by strongly pushing the use of AI. Because now people are excited, and each individual feels a bit of a boost in their performance, but very quickly, and I've seen it, people tell me, “It's not really my voice. It's not really my idea. It's not really me. You know, and everyone can get the same ideas. So, like, what am I here for, right? So, maybe I should do something else.” And so, this is really important for the innovation culture: to give people first the ability to develop their own ideas, to ignite their process, to work together.

[00:22:44] Jennifer Chatman: Well, I think we should broadcast that finding across the university to all of our students. That's really profound. Well, this is just really interesting, Hila. The time went by so quickly. I felt I learned so much. We are at the end of our time. And we always like to end with some clear takeaways for our listeners. So, I would ask, what advice would you give to leaders who want to build a culture that embraces both human creativity and AI augmentation? Maybe, you know, two or three takeaways.

[00:23:14] Hila Lifshitz: So, the first takeaway would be to embrace an experimental mindset instead of an optimization mindset, because this technology is new, and anyone who tells you we have answers for how exactly you should, kind of, use AI for an innovation culture is trying to sell you something. And many people are. So, I would say managers should be smart enough not to just buy into thinking that this or that will work, and to understand we're in early days, and these are days of experimentation. So, how do you strategically experiment? You have to think a bit more like a scientist, and not with an optimization kind of mindset of, “Oh, where is the best place to put all my resources to get the best results?” We don't know. So, you need to go with two potential solutions or three potential models, even when you do your future scenarios. Now, when I'm doing it with managers, you need to have two or three. You cannot just have one when you're planning anymore. So, that's a big mindset shift.

And the second thing, as I stressed, would be to keep pushing on the expertise of your people. What are those special areas of expertise? What are they good at? What do they want to be good at? So, that's the bottom up: bring it out with them, and let them not use AI for this expertise but use it for other things. And the third thing is to have these new sources of expertise, new tasks, new missions. How can we inspire higher? How can we not use AI just for efficiency, just to go back in history to Taylorism, but instead push the boundaries of what we think is possible with our people and in our organizations?

[00:24:45] Sameer Srivastava: Terrific, Hila. That was a really great summary of some exciting new research you're involved in. Thanks so much for sharing it with us today, and thanks so much for joining us.

[00:24:54] Hila Lifshitz: My pleasure. Thank you so much for inviting me.

[00:24:58] Jennifer Chatman: Thanks for listening to The Culture Kit with Jenny and Sameer. Do you have a question about work that you want us to answer? Go to haas.org/culture-kit to submit your fix-it ticket today.

[00:25:10] Sameer Srivastava: The Culture Kit Podcast is a production of the Berkeley Center for Workplace Culture and Innovation at the Haas School of Business, and it's produced by University FM. If you enjoyed the show, be sure to hit that Subscribe button, leave us a review, and share this episode online, so others who have workplace culture questions can find us, too.

[00:25:31] Jennifer Chatman: I'm Jenny.

[00:25:31] Sameer Srivastava: And I'm Sameer.

[00:25:33] Jennifer Chatman: We'll be back soon with more tools to help fix your work culture challenges.