  11/29/2023
H.E. Omar Al Olama, Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications; Director General, Prime Minister's Office, UAE
Gary Cohn, Vice Chairman, IBM
In conversation with: Geoff Colvin, FORTUNE
Transcript
00:00 Okay, well, we've already talked a little bit about AI
00:03 in this discussion, but we're gonna do much more
00:06 because we know the race to implement AI
00:09 in the business world is on.
00:11 And so is the pressure to create appropriate AI governance.
00:16 So as nations grapple with the ethical, legal,
00:19 and social implications of AI,
00:22 from ensuring data privacy to protecting jobs,
00:25 to protecting against false information,
00:27 the need for comprehensive guidelines and guardrails
00:31 is paramount.
00:33 Just a few weeks ago, US President Joe Biden
00:36 signed a sweeping executive AI order
00:38 in an effort, rather, to establish these safeguards
00:43 for the technology while ensuring
00:46 that it benefits humanity and the economy.
00:50 We wanna talk now about the urgency of striking that balance
00:55 between innovation and accountability
00:58 and how proper regulation could help shape a world
01:01 where humans and machines coexist.
01:05 Joining us for this conversation
01:07 are His Excellency Omar Al Olama,
01:10 the United Arab Emirates Minister of State
01:13 for Artificial Intelligence.
01:14 I've heard that's a first.
01:15 Maybe we'll hear more about that in a second.
01:18 He's also the Minister of State for Digital Economy
01:22 and Remote Work Applications,
01:24 and the Director General of the Prime Minister's Office
01:27 at the Ministry of Cabinet Affairs.
01:30 And also joining us is Gary Cohn,
01:32 who we know, of course, from his previous engagement,
01:34 but who's now Vice Chairman of IBM.
01:36 And they join our Senior Editor-at-Large, Geoff Colvin,
01:41 for this exciting conversation.
01:43 Welcome to you.
01:44 (upbeat music)
01:47 (audience applauding)
01:50 - All right, we have lots to talk about.
02:00 Your Excellency, I'd like to start with you.
02:05 The Middle East is becoming more significant
02:09 in the whole world of AI.
02:15 How could this region help to influence
02:19 the governance of AI?
02:24 - That's a fantastic question.
02:27 Thank you for having me, first and foremost.
02:29 Second is, I think when it comes to artificial intelligence
02:33 and the governance of artificial intelligence,
02:35 most conversations are non-starters for a few reasons.
02:40 The first is, I feel like when you talk about
02:42 governing artificial intelligence,
02:44 the phrase itself doesn't really make sense,
02:47 because it's like talking about governing electricity.
02:50 You don't govern electricity,
02:52 you govern the verticals of electricity,
02:53 where electricity is deployed and how it's being used.
02:56 Second is, I don't think having a diversity in opinions
03:00 is a bad thing.
03:01 Having the opinions of the people of the Middle East,
03:04 when you're talking about global dialogues
03:06 on artificial intelligence and the governance
03:08 of specific uses of artificial intelligence
03:10 is a healthy thing.
03:12 And then finally, if you look at the UAE, for example,
03:14 as a specific example here,
03:16 the UAE has 200 nationalities that live in this country
03:21 and actually build from this country.
03:23 So having the UAE's opinion is definitely going to be
03:25 an excellent addition to these conversations.
03:27 - Yeah, I would think so.
03:29 Gary, I think that the both of you are involved
03:33 in the World Economic Forum's AI Governance Alliance.
03:38 - Yes.
03:41 - Can you tell us a little more about it?
03:43 Because when we talk about AI governance,
03:46 it sounds like a nice idea,
03:47 but how do you get a handle on it?
03:49 What institutions should be doing this?
03:52 Tell us about this.
03:53 - So look, I agree with where the minister started
03:55 this conversation.
03:56 And I think the WEF group is trying to get its hands
04:01 around this very elusive topic.
04:05 So it has quickly turned into a relatively big group.
04:11 So there, you're smiling.
04:13 There's a lot of opinions in the room on this whole topic
04:16 of how to govern AI.
04:18 And I think the group is trying to figure out
04:22 which way to gravitate to provide some guidance.
04:25 I think at the end of the day,
04:26 the group needs to come out with some high level guardrails
04:31 for AI governance.
04:33 Because to me, that's where this whole industry has to go.
04:36 We don't want to stifle creativity.
04:40 So the last thing anyone wants to do is stifle creativity.
04:43 We also don't want to use regulation as a blocking function.
04:48 So we don't want to regulate certain industries
04:51 and give them a competitive advantage
04:52 or put anyone else at a competitive disadvantage.
04:55 We want regulation to make sure
04:58 that the industry is safe and sound.
05:00 So we want to put standards out there
05:02 that we all use like electricity.
05:03 We want to put standards out there for the industry.
05:07 We want people in the industry to abide by those standards,
05:10 but we want there to be a lot of creativity.
05:12 We want there to be capital thrown at this
05:14 and we want to build on what we've got now
05:17 as a really strong foundation.
05:19 - Right.
05:20 And what is IBM's contribution to this so far?
05:25 - So look, we've been very actively involved
05:28 in the governance.
05:29 In our AI platform, which we call WatsonX,
05:33 we have a whole governance section in WatsonX.
05:36 We've also signed the voluntary agreements
05:39 at the White House.
05:40 We've even gone farther than I think just about anyone else.
05:42 We have gone out and indemnified our models.
05:45 We basically have said, we have tested
05:48 and robustly tested our AI models against outcomes,
05:53 against bias, and we are putting them
05:55 in the open source community.
05:57 So therefore, the open source community,
05:58 you can see our models, you can see what we have.
06:01 And so therefore, you can look at our models
06:03 and if you can find a way to improve upon our models,
06:06 go ahead and improve upon our models.
06:08 So we've been open book on this
06:10 and we're willing to indemnify outcomes.
06:13 - Yep.
06:14 Your Excellency, I'd like to ask you about a topic
06:17 that Gary raised, which is, should this be regulation?
06:22 In other words, there are standards
06:24 that companies in the industry can agree to.
06:29 But inevitably, people are gonna talk
06:31 about government regulation of this as well.
06:36 Is that going to be inevitable
06:38 or is it going to be not inevitable
06:41 and maybe not such a good thing?
06:43 - When we talk about regulation,
06:47 I think people think that the regulation
06:48 is a one size fits all kind of effort
06:51 and that's very far from the truth.
06:53 If you look at the challenges that will arise
06:56 from artificial intelligence in the US,
06:59 they're very different in their impacts.
07:01 And for example, in their outputs to the UAE, right?
07:05 We're a smaller country,
07:06 we have different demographics of job classes,
07:09 different people working these jobs.
07:11 And at the same time, we stand to benefit a lot more.
07:13 Now to talk about regulation,
07:14 you need to think about why you're doing the regulation.
07:16 What is the purpose?
07:17 The first thing we should have done,
07:19 which I'm not sure has been done,
07:20 is actually setting the principles.
07:22 What are the principles that we want to build AI on?
07:25 Certain companies have done that voluntarily.
07:28 So certain companies have said,
07:29 we're going to build AI for the good of humanity
07:31 or they have their own motto for it.
07:33 But what is humanity's view on this?
07:35 So you have the Charter for Human Rights
07:38 that everyone thinks to a certain degree
07:40 represents all of humanity.
07:42 Where is the Charter for Artificial Intelligence?
07:45 And where is the dialogue
07:47 that's going to start this kind of effort?
07:50 The second thing is on the regulation specifically,
07:52 I think regulations are going to be more local than global.
07:57 You are going to have global regulations on specific things.
08:00 So if you look at nuclear proliferation
08:02 and the governance of nuclear weapons,
08:05 there is a specific global guardrail for it.
08:08 But not on other things, for example,
08:10 that have nothing to do with nuclear,
08:12 because nuclear is a form of energy.
08:13 So if you said we're going to govern energy,
08:16 you're not going to govern everything under energy,
08:18 you're just going to govern nuclear, for example,
08:19 and its impact.
08:20 What we need to be having is a dialogue
08:23 about how do we upskill professionals within governments
08:26 to be able to regulate in the most effective manner,
08:28 to ensure that their populace are not left behind,
08:30 to ensure that their countries don't get affected by it.
08:32 There is an example, it's a very nuanced example
08:34 that I use all the time.
08:36 And that example is a page from history.
08:38 So the Middle East, and this is something
08:40 that I've been tackling with at a very young age,
08:41 the Middle East was far ahead of the rest of the world
08:44 during the golden ages of the Middle East,
08:46 so from the year 813 to the year 1515.
08:50 The reason for that advancement was technology, right?
08:54 And some historians thought that the Middle East
08:55 was at least 500 years ahead of the rest of the world
08:58 in terms of the rest of civilizations.
09:00 Today, the fact is very different.
09:01 We are backwards.
09:03 And if you ask yourself, why is that?
09:05 It's because we over-regulated the technology,
09:07 which was the printing press.
09:09 It was adopted everywhere on Earth.
09:11 The Middle East banned it for 200 years.
09:12 To be specific, 189 years.
09:15 Now, there is a conversation that's quite interesting here.
09:18 Why was it banned?
09:19 The calligraphers came to the Sultan,
09:24 and they said to him, "We're going to lose our jobs.
09:25 "Do something to protect us."
09:27 So job loss protection, very similar to AI.
09:30 The religious scholars said people are going to print
09:33 fake versions of the Quran and corrupt society.
09:36 Misinformation, second reason,
09:38 very similar to artificial intelligence.
09:40 And the third, the top advisors of the Sultan said,
09:42 "We actually do not know what this technology
09:44 "is going to do.
09:45 "Let us ban it, see what happens to other societies,
09:49 "and then reconsider."
09:51 Very similar to artificial intelligence.
09:53 Fear of the unknown, fear of the ambiguity.
09:56 I think we need to be a lot more nuanced.
09:58 We need to be a lot more practical.
10:00 And if anything, as a government official,
10:02 everyone needs to be sitting right dead in the center.
10:05 Not too optimistic, not too pessimistic,
10:07 and work with everyone to ensure
10:09 that the future is a prosperous one.
10:10 - Yep.
10:12 Gary, His Excellency mentioned one of the inevitable
10:16 rationales for regulating it, or attempts to regulate it,
10:21 loss of jobs.
10:23 It seems to me that every new technology eliminates jobs,
10:28 but it also creates more and better jobs.
10:31 On the whole, it's a good thing.
10:33 And yet, there's already pressure to try to regulate AI
10:38 so it doesn't eliminate jobs.
10:42 I suspect I know what you think about it,
10:45 but how will we make sure that doesn't happen?
10:48 - Well, you sort of answered the question in the question.
10:51 So if we go through all the seismic technology shifts
10:54 in the world, from the cotton gin
10:57 to the internal combustion engine,
10:59 to the personal computer, to the internet,
11:02 they all were supposed to eliminate
11:03 all these amazing amount of jobs
11:05 because they created enormous amount of efficiency.
11:08 Guess what?
11:09 They created enormous amount of efficiency
11:10 and it allowed economies to grow.
11:12 We'll get back to what I actually know a lot about.
11:15 Well, let's talk about economies.
11:18 The way to drive economic growth
11:20 is through productivity gains.
11:22 And there's a part of the world,
11:23 we're sitting in a part of the world here
11:25 where we need to drive productivity gains.
11:28 So if you can make workers more productive
11:32 by providing tools for them, you're growing the economy.
11:36 So every worker now can provide more units of work.
11:40 You can grow the economy,
11:41 which the history of going through the cotton gin
11:44 to the internet tells you
11:45 that companies don't lay off people.
11:47 Companies reposition those people,
11:49 they retrain them and they create a bigger company.
11:52 There's no company I really know today
11:54 that's smaller because the internet came around.
11:57 Every company I know
11:59 from when the day the internet started to today
12:01 is probably dramatically bigger.
12:03 Even though all those men and women
12:05 that were printing memos
12:06 and delivering them to people's mailboxes
12:08 seem to disappear because email took over for them.
12:12 So this is an economy of scale,
12:14 it's a productivity enhancer.
12:17 And people are gonna be able to move
12:18 from highly dissatisfied jobs with high turnover rates,
12:23 with low satisfaction, with high error rates
12:26 into jobs with much higher satisfaction
12:29 where they're gonna be able to do something that they enjoy.
12:32 They enjoy coming to work
12:33 and the company has to become more productive,
12:36 companies become more profitable,
12:37 and therefore they're gonna be able
12:38 to maintain the labor rates where they have
12:40 and probably grow labor.
12:42 If the history has anything to do with the future,
12:45 we're gonna see workforces get bigger, not smaller.
12:47 - Absolutely, that's the way it has always happened.
12:49 Now there is an argument, this time is different.
12:53 Maybe that's inevitable,
12:57 but do you think it's possible that this time is different?
13:01 - I don't.
13:03 I think that we're gonna see a lot of tasks
13:06 that people really don't like doing
13:08 that have really high turnover rates,
13:11 really low satisfaction, disappear.
13:15 And people are gonna be a lot happier for it.
13:17 And look, I can give you examples within IBM right now.
13:21 We got rid of 700 people in our HR department
13:24 because we created an HR bot.
13:26 So instead of someone having to answer the telephone
13:29 when you need a reference to buy an apartment
13:31 or get a mortgage or get an automobile,
13:34 you need a reference letter saying,
13:35 how long have you worked here?
13:37 What's your annual income?
13:39 Someone doesn't have to answer that phone,
13:40 go look it up in the file,
13:42 figure out how to write the letter,
13:43 and remember to send it out,
13:45 which probably wastes 30 minutes of their day.
13:47 No job satisfaction.
13:49 You go into a chat bot, the chat bot does it instantly,
13:52 sends it out instantly.
13:53 And if it gets sent to the wrong place,
13:55 it's because you put the wrong address in,
13:57 not because the person took the wrong address down.
13:59 We took those 700 people, we didn't fire them.
14:02 We took them up and now we have re-skilled them
14:07 into managing our promotion process
14:09 and managing our hiring process
14:11 and becoming much better at things
14:13 that we needed to become better at.
14:15 - Yep, yep.
14:16 Your Excellency, do you think this same historic progression
14:21 will work the same way here?
14:24 - You know, I worry about contradicting my fellow speaker.
14:30 - You can.
14:31 - I think we agree on a lot more than we disagree on.
14:35 However, I actually want to say there is a possibility.
14:38 - Yes.
14:39 - So there is a possibility that it's different this time.
14:41 Whether the possibility is 1% or 90% is debatable.
14:45 Some people say it's more likely,
14:46 some people say it's less likely.
14:48 What should the government do?
14:49 Now, for a private sector entity,
14:50 you need to actually maximize the returns.
14:53 You need to ensure that you're on top of that
14:55 and that you're at the cutting edge, right?
14:57 That is the only option that you have
14:59 or you're going to be out of business.
15:00 As a government, you need to constantly start to look at
15:04 what the impacts are going to be,
15:06 constantly address the issues and set the right programs
15:10 to ensure that people are able to grasp the opportunities
15:13 that are going to arise because of AI.
15:14 There's going to be a lot of opportunities.
15:15 What we've done in the UAE,
15:17 so we've done a program called the 3R program,
15:21 reskill, retool, retire.
15:23 - Right.
15:24 - And that's a program that we put forward
15:26 and hopefully is going to go through.
15:28 What does this program do?
15:30 If your job is going to be replaced
15:34 because of artificial intelligence,
15:35 and there's a specific job that is going to increase
15:39 because artificial intelligence is becoming more mainstream,
15:42 whether it's laborers,
15:43 whether it's prompt engineers or whatever it is,
15:45 we will reskill you to ensure that you're not disrupted.
15:48 If your job is going to be increasing in productivity
15:52 because of AI,
15:53 we will retool you to ensure that you're able to leverage
15:56 these tools to become more productive.
15:57 - Yeah.
15:58 - And if you are one year away from retirement
16:01 and you feel like,
16:02 "Why would I want to be reskilled or retooled?
16:04 I've already done this for 30 years
16:05 and I just want to live my life."
16:07 You can opt for an earlier retirement.
16:09 - Right.
16:10 - And what happens here is people feel like
16:11 they have a choice.
16:12 - Right.
16:13 - And people feel like this is not something
16:14 that is dictated on them.
16:15 This is something that they can actually play a part in
16:18 or see from the sidelines in an effective way.
16:22 Now, each government has a different way of approaching this.
16:25 There are economics involved,
16:26 there are societal reasons that will dictate
16:29 how governments approach this.
16:31 But it's better for us to be proactive rather than reactive.
16:34 I think this is exactly the point that was raised.
16:36 - Yeah.
16:38 The topic of enforcing governance
16:43 is one that always comes up.
16:46 Because the fact is there are people in this world
16:49 who don't give a darn about governance and they--
16:54 - Good choice of words.
16:55 - Yeah, careful.
16:56 And they will do things that are bad
17:02 that may give them an advantage of some kind.
17:07 For example, next year there will be several
17:10 very consequential elections around the world.
17:13 We've already seen examples of people using AI
17:19 in bad ways for misinformation and so forth.
17:23 Will it be necessary to have government
17:26 try to take care of that?
17:30 - Absolutely.
17:31 And what surprises me is why governments are waiting
17:33 for someone to do something about it.
17:35 We need to already be working on this.
17:36 Let me give you a specific example on elections.
17:39 So we had our parliamentary elections
17:41 happening earlier this year.
17:43 And we asked ourselves, what is the first thing
17:45 that people are going to utilize?
17:46 It's probably going to be deep fakes,
17:48 it's probably going to be AI bots and agents
17:50 distributing content.
17:52 There's going to be a lot of different things
17:53 that need to be addressed.
17:55 So we said, instead of waiting for something to happen,
17:57 we launched a policy that requires people
17:59 who are going to use artificial intelligence agents
18:02 to register.
18:04 So they come and say, look, I will use this
18:05 within my election campaign,
18:07 and these are going to be the tools that I use,
18:09 and this is the content that I'm going to push out.
18:11 And you actually have to get approved by the government.
18:14 If you are caught to have used these tools
18:16 without registering them, then you are disqualified,
18:19 because this is you cheating and actually leveraging
18:21 something that could actually spread misinformation
18:24 about your competitors, could affect different people.
18:27 Now this has worked for us in the UAE.
18:30 Would it work for the US, for example?
18:31 No, but I think the conversation needs to start now.
18:34 We shouldn't wait for the election to be in full throttle
18:36 for us to talk about how the impact of artificial
18:38 intelligence is going to change elections.
18:40 Because then you're going to go into, unfortunately,
18:43 an endless cycle of, was the election rigged?
18:47 Did people use these tools to affect the outcomes?
18:49 What do we do about it right now?
18:52 So we need to anticipate it earlier on, right?
18:55 The second thing is, when you talk about governance,
18:58 there is the other aspect of,
18:59 can you actually enforce governance?
19:02 In some countries and some jurisdictions,
19:04 governance, unfortunately, is,
19:06 let's penalize the big players.
19:07 So we'll use the stick approach to hit the big players
19:10 so the others actually feel like we're a force
19:13 to be reckoned with.
19:14 It doesn't work.
19:15 If it's not universal, if you can't implement
19:18 across the small players and the big players,
19:20 if you can't do it in the right way,
19:21 then unfortunately this is you trying
19:23 to bully the big players.
19:25 What I think needs to happen,
19:27 governance as a whole needs to be reinvented.
19:29 We are governing a new technology,
19:31 the same way we used to govern the technologies
19:34 of the last 100 years.
19:36 And maybe it's time we deploy technology
19:38 to govern technology.
19:39 And maybe it's time we think of a new governance mechanism.
19:42 - Yeah.
19:43 Gary, leaving aside the specific example of elections,
19:47 just in general, will we have to have real governance
19:53 enforcement efforts somehow?
19:56 - Look, we agree on this.
19:59 So we're gonna need some global governance standards.
20:03 Remember, AI is cloud-based.
20:06 It can be anywhere in the world.
20:08 So unfortunately the bad people migrate
20:11 to places where they can hide.
20:13 And so to the extent that we get more global harmonization
20:18 and more global enforcement,
20:20 we will have better rules,
20:21 we will have better engagement.
20:23 And people who are trying to do bad things,
20:25 whether it be deepfakes or other things,
20:28 will have less places to hide,
20:30 less places to emanate from,
20:31 and it'll be easier to patrol.
20:33 So I fully believe there need to be some global guardrails
20:38 that everyone agrees to.
20:40 I'm more concerned that we end up with a sort of
20:45 quilted effect where different countries
20:48 have different rules and different regulation.
20:50 We have less harmony.
20:51 That to me is a scenario where there's gonna be
20:55 a lot of different cracks where unfortunately
20:58 the people that are up to ill will will find places to hide.
21:03 And so one of the things I think we both agree upon
21:05 on this committee we sit on is we've gotta have harmony,
21:08 especially in regulating the bad uses of AI.
21:12 - Yep.
21:14 One more thing to talk about,
21:15 which is the speed with which AI advances.
21:20 I mean, we thought Moore's Law was something,
21:23 doubling in capabilities every two years.
21:26 These AI engines are doubling in capabilities in months,
21:30 sometimes in a month.
21:32 Can governance ever hope to keep up with this?
21:36 - Look, I think governance will have something
21:40 to do with it.
21:41 But I also think we're hitting another reality frontier.
21:46 We're hitting the reality frontier of cost.
21:49 Just running an AI machine is not free.
21:54 There's enormous amount of computing power.
21:58 There's enormous amount of cost in doing that.
22:02 And what I think we're starting to see,
22:04 I'm beyond thinking, what I know we're starting to see
22:09 is we're starting to see some of the big AI players
22:12 trying to figure out how to build, retain walls
22:17 against their open AI and make it less open.
22:20 Because every time you ask for something
22:22 and I ask for something,
22:24 the AI machine goes out to the world and gathers the data.
22:28 There's a cost for gathering that data.
22:30 They can't charge you and I enough for that data.
22:33 We're just not gonna pay for it.
22:35 So if you ask for a chart of average temperatures
22:38 in Abu Dhabi for the last 30 years,
22:39 and I ask for a chart of average temperatures
22:41 in Abu Dhabi for the last,
22:42 the machine's gonna regenerate it twice.
22:45 If the machine would know to hold it
22:47 and maybe update it once a week,
22:49 so every time someone asks and not create it,
22:52 the data costs would go down dramatically.
22:55 So I think you're gonna see some very quick refinement
22:59 on this whole idea of open forever AI
23:02 because the cost infrastructure
23:05 and the amount of computing power that we're gonna need,
23:07 it's gonna self-govern itself.
23:09 - Yeah, because of the incredible expense among--
23:11 - Incredibly expensive.
23:13 - Yep, what do you think?
23:16 - I completely agree.
23:17 I think we agree on a lot more than we disagree on.
23:20 Just one thing here,
23:21 I think we have diminishing economies in terms of returns.
23:26 So you even building these LLM models,
23:30 like OpenAI paid hundreds of millions of dollars.
23:33 Today you can build something like GPT-3
23:35 for a fraction of that cost.
23:37 So is it worth that advantage?
23:39 Like is a question that companies are gonna ask themselves.
23:41 The answer is governance needs to evolve.
23:44 I think there needs to be a dialogue
23:46 that includes the private sector and governments,
23:48 as well as civil society and international organizations.
23:51 And the World Economic Forum,
23:52 as well as the UN, for example, are doing that.
23:55 The more conversations we have, the better.
23:57 The only thing I will say is,
23:58 these conversations need to be building on each other
24:00 rather than each conversation starting--
24:02 - Over and over again.
24:03 Yeah, I can't believe we're out of time,
24:05 but we're out of time.
24:06 Your Excellency, Gary, this was terrific.
24:09 Thank you very much. - Thank you.
24:10 - Thank you.