Lan Guan, Chief AI Officer and Head, Center for Advanced AI, Accenture
Jaime Teevan, Chief Scientist and Technical Fellow, Microsoft
Moderator: Alan Murray, FORTUNE
Transcript
00:00Thank you. Thank you, Jaime. I want to start with the big question so we can get that out of the way and move on.
00:12It is remarkable what these technologies can do, and it conjures up the image of fully automated robots who can do pretty
00:21much anything you want them to do. Do either of you think there's any realistic possibility in our lifetimes that A.I. will
00:30reduce the aggregate number of jobs out there? Are we really talking about eliminating people's jobs?
00:38You want to go first? Yeah, sure. That's a great question. The short answer is that I think in the short term there will be some
00:46job impact. But it's my belief that in the long run this is a positive-sum game. So I really want to pivot the conversation to, you
00:56know, how do we understand how A.I. will substitute or complement, right, what humans are doing every day, so that we can
01:04get our people ready for that. So it's the wrong conversation. It's misleading. Do you agree with that, Jaime?
01:11Yeah. You know... I don't like the hesitation.
01:18Our jobs are going to change, for sure. Of course. That's not the big disruption. And there are pretty much two ways that companies deal
01:25with disruptions. One is to drive efficiencies, get really good at what they're doing. And the other is innovation, figuring out how to do
01:33new things. And I would say we're essentially seeing both of those, and the efficiency thing is easier. When I think about
01:40how I use A.I., I'm like, oh, I can write my email. That's an efficiency, and you start with efficiencies. That's sort of where, when we're
01:46not very creative, our mind goes. But innovation is a real opportunity. And we've seen this with every
01:53wave of technology that we've had before. 100 years ago people thought, what are all these people who
01:59work on farms going to do? That's not an issue anymore. So I'm just curious with the group. Can we get a quick show of hands? If you
02:08think that 20, 30, 40 years out we're actually going to have a problem creating jobs, would you raise your hand?
02:17OK. There are a few people back there. I'm going to come back to you later on, but it's not many. And I would tend to
02:24agree with that. So if we start with that as the premise, then the question is, how are jobs going to change? What
02:31activities are you no longer going to be doing, and what activities do you need to learn to do? You want to start? Yeah, sure. So I
02:39think that goes back to my point earlier: how do we get our workforce ready, right? What kind of skills do they need to grow
02:44more? And how do we get people ready, you know, for this kind of transformation? It is my personal belief that, you know, the
02:52skills that humans are really good at, right, things like communication skills, being empathetic, you know, doing analytical
03:00thinking, structured thinking, these are the cognitive skills that AI is not doing quite well yet, right? So I think we need to
03:08focus on having humans apply these kinds of skills in our daily jobs. So let me give you one, you know, frame of
03:18reference. I'm a big fan of, you know, the Nobel Prize winner and Princeton scholar Daniel Kahneman, who just passed away a
03:27couple of weeks ago, right? His famous theory of System 1 and System 2 thinking, I think, is a
03:34perfect, you know, theoretical ground for what I was describing, right? System 1 thinking basically is fast, instinctive
03:42thinking, and oftentimes introduces human bias. System 2 thinking is slow, analytical thinking. And think about, you know, at a
03:49future stage, how AI and humans are working together: AI focusing on the fast, you know, kind of action taking, and then
03:57humans focusing on things that we're really good at, being empathetic, you know, doing the structured thinking, being analytical. I
04:05think, you know, for example, in the contact center case, we are seeing that happening, right? AI is helping our
04:11contact center agents with, you know, this kind of System 1 thinking: it can actually listen to the call transcript, right? It
04:17can actually summarize all the notes for the agents, so the agent can actually focus on having meaningful conversations with
04:26the customers. So there are many, many examples like that. Such a dramatic change from where we were just a few years ago. I
04:32mean, in any kind of conversation about advice to young people and what they should do in their careers... in fact, it was a cover of
04:38Businessweek magazine: learn to code. You all need to go out and learn to code. Well, in fact, the machines are doing a pretty good job of
04:45taking care of the coding. And now what you're saying is we need to be better people. We need to learn to empathize. We need better
04:52judgment. We need to understand how to motivate people. It's a completely different set of skills. It's a completely different set of
04:59skills. Jaime, I know you do a lot of research on this. This is your job. It's early days, but what are you learning
05:06about the skills people need and the skills that are being taken care of? Yeah, I mean, actually one of the interesting emergent
05:12properties of this current generation of language models is that they're pushing on some of these metacognitive skills as
05:20well. So in many ways we were like, oh, AI was going to do all the drudge work so we can have the deep conversations. But we're starting
05:27to see, like, one of the things that's really cool is that AI is actually quite good at planning. It's quite good at figuring out how to
05:33break tasks down and allocate them. It's quite good at actually connecting in an empathetic way with people and understanding how
05:40your message is going to land. And so that actually increases the metacognitive burden on people. And I
05:49think it's important for us to think about how to take advantage of it to actually support our metacognitive abilities. How do we
05:57have good conversations? How do we think about things critically? Yeah. Well, let's dive into that. I mean, so, as
06:05organizations (both of you work with companies), how do you prepare people for that? You know, you made the point
06:11at the beginning that many companies are taking the easy route early on and saying, let's just use technology to replace things we're
06:20already doing, as opposed to using it to rethink what we do. How do companies get themselves ready to do the right thing here?
06:29Yeah. I mean, that's a huge question if you think about it. All of a sudden we're giving people this empty prompt box and saying,
06:36like, say whatever you want. And that's overwhelming. We've only been talking to our computers for a few months, you know, maybe a year if
06:44you're an early adopter. We've been talking to people for a long time, but we haven't actually been communicating with the machine. Yeah.
06:52You wrote a great piece for the Harvard Business Review. I'd love for you to share some of what you said there, because all of us are
06:58experimenting with this stuff. We're writing prompts. Actually, there's an art to writing prompts. So how do we become better prompt writers?
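The prompt-writing craft discussed here can be sketched in code: being very explicit about role, task, and output format, and polling the model in fresh contexts to aggregate a "wisdom of the crowd" answer. This is a minimal, hypothetical illustration, not any particular product's API; `ask_model` is a stand-in for a real chat-completion call, stubbed deterministically so the sketch runs.

```python
from collections import Counter

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a chat-completion API call.
    # Stubbed deterministically so this sketch is runnable.
    return "blue" if "designer" in prompt else "navy"

def build_prompt(role: str, task: str, output_format: str) -> str:
    # With a model, be far more explicit than with a person:
    # state the role, the task, and the expected output format.
    return (f"You are playing the role of a {role}. {task} "
            f"Respond with {output_format} only.")

def crowd_answer(task: str, roles: list[str]) -> str:
    # Each call starts a fresh context with no shared history,
    # like polling independent people; then aggregate the answers.
    answers = [ask_model(build_prompt(r, task, "a single word"))
               for r in roles]
    return Counter(answers).most_common(1)[0][0]

print(crowd_answer("Pick a brand color.", ["designer", "engineer", "marketer"]))
```

With a real model you would also vary temperature or phrasing across calls; the aggregation step stays the same.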
07:06What makes a good prompt? I mean, it's a good question, and, you know, it is one of the top academic questions as well. So
07:14when you look at the academic literature, a lot of the papers are around prompt strategies and how you prompt a language model, and that's because a
07:22good prompt strategy will actually give you better outputs than a bigger, more powerful model. So one of the things that's important is
07:31to recognize that a language model isn't a person, and when you're prompting it you need to prompt it a little differently. So when we were
07:40chatting backstage, we did a fair amount of grounding. This is that process of, like, getting to know each other. But because we're humans, I
07:46don't have to do the same level of grounding that I have to do with a language model. With a language model you have to be super explicit and
07:53say, like, you are playing this role, and I want you to produce an output in this format for this audience. So that's very explicit. And
08:01that's because it's not a human. But there are other things that you can actually use to your advantage with the language model not being
08:09human. You know, I can't be like, oh my gosh, I said something really dumb, let's just start over and try again, when I'm chatting with these
08:18folks. But with a language model you can actually just erase all your history and erase all the context and start the conversation again. And so you
08:26can actually start doing things like... I don't know if you know the concept of the wisdom of the crowd, where you, like, ask a bunch of people how
08:33many jelly beans are in a jar, and if I asked this whole room, we'd in aggregate get the right number of jelly beans. You can start doing that
08:41with a language model. You can ask it to play different roles or produce answers in different contexts, and then look at the sum total of what
08:50kinds of answers you might get. Yeah, it's fun. Yeah. Please do. Yeah. I think that's a great question. That's a question I discuss with
08:56C-suite clients all day long, right? I think from our perspective there are really two things. One: this is the time to get your talent strategy
09:04ready, right? So I have a bold prediction. I don't know if, you know, Jaime agrees with it or not. I think, you know, in the future, basically, you look
09:11at all your roles and, you know, they fall into one of two camps, right? You're either a builder of AI, right, or you are a consumer of
09:19AI, focusing on the consumption side, right? So again, it's time to rethink your talent strategy and bridge the gaps. And I think the
09:28second thing, which is kind of obvious, right, is that it's time to, you know, really drive this concept of continuous learning, right?
09:35So we learn new skills, and, you know, the skill definitions could be changing, but what we can do is get ourselves ready. And so how do you do
09:44that? I know Accenture takes this very seriously. I mean, someone told me once you spend something like a billion dollars a year on training
09:51exercises. How do you prepare the consumers of the technology? What's the nature of the training that you give people? Yeah.
10:01Very good question. This could be a long conversation, but I'm going to give you the short version.
10:05Right. I think, Alan, the number one important thing is to have top-down support. You need to have, you know, top, CEO-level support
10:17that is, you know, basically articulating the importance of this talent transformation, because we have 700,000 employees, right? Probably a lot of them are
10:26going to be, you know, consumers of AI. So we actually have a program called TQ, right? TQ stands for technology quotient. I'm one of the
10:33faculty members for TQ, right? And of course a lot of our latest episodes are about generative AI. So this is a program that we ask
10:42every single employee to take. But that's not possible without strong support from the leadership team. I think that's the key building
10:49block I want to call out. Jaime, a somewhat separate issue. If you look at the last two or three decades of technology development, it seems
11:00pretty clear to me that it has increased inequality within societies, that technology has enabled some people to move much faster and
11:08further and accumulate more wealth and income than others. I've seen some early studies. I know it's very early days, but I've seen some early
11:16studies that suggest that generative AI may be different, in that the biggest productivity gains often seem to go to the least skilled and
11:26experienced workers. Do you think that's true? It's a really important question that you're asking. And it's really interesting, like,
11:36generative AI feels different to me as a technology, in that it's distributing at a global scale so fast. So it
11:46is different to be bringing this technology to every country and every location at the same time. It brings up all sorts of interesting
11:54challenges about how you do that. How do you manage all the different regulations and reactions that are happening?
12:04It's actually very interesting, because one of the emergent properties of the language models has been their ability to understand multiple
12:09different languages. So we shipped Copilot, for example, in all of our tier-one languages out of the gate. In terms of the research that you were
12:17referencing, it's been a very consistent finding that across these sorts of tasks, the ones we were talking about, the
12:27efficiency kinds of tasks, people are doing them a lot faster and a lot more efficiently, and the people who are less
12:35skilled at those tasks see the biggest gains. So, like, if I'm not a great coder, I can actually generate code that's quite good, whereas if I'm an amazing coder
12:44it'll provide me some efficiency and benefit, but it won't be as significant. I do think this innovation question comes into play, though,
12:52because it starts becoming, how are we figuring out the new ways to use the model? And that's going to be a place where I think it's
13:00actually our responsibility to make sure we distribute the technology, figure that out, and distribute our learnings in a way that builds
13:08equity and, like, benefits the whole world, because that's what's going to benefit all of us the most. How we do it
13:17matters, exactly. Let me open it up and see if there are any questions. There's a question. Olivier, and then a question right over here.
13:23Right there. Yes. And then right here. I apologize to people in the back. The lights are very bright and it's hard to see beyond the first
13:34couple of tables. But Olivier, go ahead. Olivier, neuroscientist and entrepreneur. In '96, Garry Kasparov wins against AI; in '97, he loses, and he creates
13:46freestyle chess, where humans can use technology in order to compete. And in 2005, amateurs beat grandmasters because they had a better process.
13:58Don't you think that the one thing we need to teach people, if possible, is process?
14:07I mean, I think that, to me, connects with this notion of metacognition and, like, how are you thinking about using things. And that feels like
14:15the real opportunity. And actually, the thing I get really excited about with the model, when we were sort of talking about this innovation
14:22frontier: you know, our uncreative selves are sort of thinking about, like, how do we get AI to obey us. And I think our creative selves are like,
14:30how do we get AI to challenge us and get us to think about things in new ways. So that's what I get excited about. Right there, please. Yes.
14:40Polly, from the University of Birmingham here in the UK. And I wanted to go back to the point that, OK, jobs are going to change in ways that are
14:51somewhat unpredictable. And my job is as an educator as well as a researcher. Oh yeah. You're a goner.
14:59That was kind of my question.
15:01I didn't want to project my existential crisis in public.
15:07But as you can tell, you know, we have been disrupted as a higher education institution. I'm actually a professor of computer science, which is
15:13particularly impacted. So my question is really, what is the role, how is the role of higher education changing? It's not just the pedagogical
15:22approach, which is changing because of AI. But what are your expectations, basically, downstream from higher education, in regard to
15:30how we can best prepare the next generation to basically be ready to go forward? And you understand that Microsoft is already making the tool
15:37that's going to replace you. Absolutely. Yes. We have a separate...
15:44I want to... So, as a computer scientist, there's a metaphor that I use, actually. And partly, my role, Satya created my role about five years
15:52ago, to help the company deal with disruption and think about how things are going to change. And I would say that we're all right now
16:01responsible for building something new or figuring it out, and that disruption is not going to slow down. And I actually think the
16:08whole world needs to start thinking like scientists. So teach your science, you know, and that means, like, experimenting and
16:16developing hypotheses and trying things out. But that's not the only part of the scientific process. It means learning and building on the state
16:25of the art, you know, reading broadly and, like, writing that related-work section and figuring out what you're doing. It means disseminating
16:33knowledge so that others can build on it, but even more importantly for validation, because that's the way that people will challenge you and
16:39look at things differently. You know, it means thinking about the externalities and sort of the long-term effects. So I actually think that
16:46scientific framework is amazing. You're going to give him superpowers. Yeah. You're bringing superpowers to the technology land. Final
16:55word. So it's definitely an exciting time, right, Alan and Jaime. I think, to me, A.I. is a team sport. A.I. is not, as
17:05we've been talking about, just being done by technical experts, or, you know, limited to a C-suite conversation or a boardroom conversation. I
17:13think this is a time when everybody has a role to play, right? The democratization of A.I.: everybody needs to play their role, right? So
17:22this is a time when nobody wants to get left behind, and I'm happy to work with all of you on this. Lan, Jaime, thank you so much. I think that was
17:30encouraging. I feel a little better than I did. Thanks so much. Thank you.