Tara McGuinness, Founder and Executive Director, New Practice Lab
Sir Craig Oliver, Global Co-head, Strategy and Reputation, FGS Global
Julius van de Laar, Political Strategist and Communications Consultant, Van De Laar Campaigning
Moderator: Alan Murray, FORTUNE
Transcript
00:00 Thank you. I don't want to be too dark about this stuff, but it's been a pretty dismal decade. I mean, Tara, you saw what
00:08 happened with the Affordable Care Act in the U.S. and the kind of disinformation around that. And that was nothing compared to
00:16 the Brexit campaign, where virtually every single expert said don't do this, and the British public was getting its information
00:24 from someplace else and said, we're going to do it. And of course it's only headed south from there. And we can get to how
00:32 generative A.I. may magnify all of that in a minute. But Tara and Craig, let me start with the two of you. What do
00:39you see from your experience as the real problems here. You know I think jumping in on the context of A.I. and elections you
00:48have to pull back and look at the context of public conversations we've been having. You know we've been on a 20 year
00:57massive upheaval of how we receive especially United States our information. We are information used to come. You know when I
01:05started my career in campaigns the nightly news for you know you were. What was that.
01:12Anchored things and we would in the evening gather around and see how we did. And that covered the vast majority of Americans.
01:19That is just 20 years ago. You could count on having a common conversation about a war or or an instance of national
01:28regard. That is not the case right now. There is no anchor place to understand where everyone is coming together. We're not even
01:36talking necessarily in the United States about the same things. And so that's a hugely discombobulating factor. And so entering in
01:47 these new technologies onto an already kind of broken media landscape is how I start to think about public trust and the core
01:55 elements of this. You know, it isn't just, we don't trust government, we don't trust the media. We have to think
02:00 about re-engaging on public trust. Yeah, which is a big challenge. Craig, the Brexit vote really was, in my mind, kind of a historical
02:11 break, where you said, wow, something different is going on here. People no longer accept authority, no longer accept, sort of... Yeah, I
02:20 think 2016 was a very big year, because Trump was elected. I think a lot of people in politics hadn't realized the extent to
02:29 which what I was describing there was going on. And that's only magnified and amplified over time. And just thinking about the
02:36 presentation that you were doing earlier: like 95 percent of our conversation around A.I. and its impact on politics is around deep
02:44 fakes. And that's definitely a problem. Last week, I think, one of the A.I. companies said that they weren't going to release their voice
02:50 simulator because it was so accurate. And we had an example a few months ago of Keir Starmer, the leader of the opposition,
02:56 apparently calling out one of his aides, over his iPad. You know, the first time I heard that, I thought, wow, that's amazing.
03:04 Minutes later somebody said, look, by the way, that's a deep fake. It's a problem. But actually the real problem is this sense
03:11 that we're moving further and further into different tribes, into our own filter bubbles. And A.I. can come along and magnify and amplify
03:18 that and drive people into an environment where they have their own alternative facts and they are split. So let me ask both of you,
03:25 before we go to Julius: how much worse can it get? As you say, this has been going on for over a decade, for a long time. It had
03:32 gone pretty far. Now you take this tool that seems to have the power to super-magnify all the misinformation, and to target it and
03:43 make it much more readily available. How can we expect a good outcome in these coming elections? I think in the short run you
03:51 can't. I think the power of A.I. is a massive, exponential amplifier. And if you have no quality control of what's going in, which I'd argue we don't
04:04 have at the moment, then you have the capacity to amplify misinformation at a faster speed. Do you agree? I do. And I
04:13 think that the problem is that this space at the moment is almost completely unregulated. You're in a situation where there are no
04:19 rules. And the truth about politics is, if there isn't a rule, someone will say, well, my opponent will use it if I don't. So they
04:26 are attracted to this kind of behavior. It's a race to the bottom. It's a race to the bottom. And you see a lot of what is going on
04:32 in what they call the air war of politics, which is what's on traditional media and what politicians say there. But much more
04:38 significant is the targeted campaigning that we don't see. Obviously that is very specifically designed to appeal to people's
04:45 psychology. Yeah. And also, frankly, to give them misinformation. What happens is that they take something where there's a grain
04:54 of truth, magnify and amplify it all out of proportion, and present it as the truth to people. You know, it's a downward spiral.
05:03 Yeah. So, Julius, I think the question is, we know we see this wave coming. That's right. It's heading straight for us this
05:11 year, and nothing is going to stop it. I think the big question is, does it really work? Will it affect election outcomes? Does it have
05:18 real effects in the world? Well, arguably, in some elections we've already seen that it has impacted the outcome. There was an
05:27 election taking place in Slovakia just last year, and in the last two days before the vote a deep fake came out, and you could argue
05:36 that it had a significant impact on the outcome of the campaign. But does it actually work? There was a study by
05:42 Swiss scientists that just came out, where they tried to test whether a chatbot can actually influence public opinion.
05:52 We all have that one crazy uncle. I'm sure the folks from the United States have at least one crazy uncle who will tell you
05:59 over Thanksgiving that the big lie actually is true and that Trump's election was stolen. Now, they did a series of tests and figured out
06:08 that if you actually provide the chatbot with personal information about the person you're trying to convince, that chatbot is 81
06:17 percent more likely to change that person's opinion than if you or I actually went out there and tried to convince someone. So
06:25 that could actually be part of the solution, as we try to come up with ways to convince people who have sort of gone down the rabbit
06:32 hole. But then again, it can also be turned around, weaponized, and used to drive people even further down. I want to get to
06:39 solutions in just a minute. But I want to stick with what you were saying for a second, because you started out showing us some of
06:46 the deep fakes that we all know about, that everybody sees, and somebody can, you know, blow the whistle on them and say, that's fake. But
06:53 what you're talking about now that's particularly powerful is the combination of using data to target, so you're sending very
07:01 targeted messages to persuadable people that can make a difference, and nobody else may ever know about it. Yeah. I mean, you guys
07:09 remember Cambridge Analytica, right? Right here from where we're sitting right now. And so they played obviously a big role in the 2016
07:17 election. Now, back then they claimed that they had all the data and used targeted ads to get them in front of the eyeballs of the exact
07:24 right people, with the right message, on the right device, with the right cadence, to convince people. Now imagine today, where you
07:33 don't have to have, you know, five thousand different people who come up with the graphics and the copywriting, but you just plug it into a
07:40 chatbot, and then go out there and have campaigns basically take all the information that you can collect over the Internet, that you can use
07:47 from the voter file and public records, and then actually have A.I. do the targeting, A.I. do the messaging, and A.I. do the delivery of the message. I
07:55 think that is powerful, and it's quite dystopian. And it supercharges it. It is supercharged gasoline.
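To make the mechanics Julius describes concrete, here is a minimal sketch of that pipeline in Python. Everything in it is hypothetical: the VoterRecord fields, the call_llm stub, and the personalized_message function are illustrations of the architecture he outlines (profile data in, model-written copy out, routed per channel), not any campaign's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class VoterRecord:
    """One row of a hypothetical merged voter-file / public-records dataset."""
    name: str
    district: str
    top_issue: str   # interest inferred from public data
    channel: str     # preferred delivery channel, e.g. "email" or "sms"

def call_llm(prompt: str) -> str:
    """Stub standing in for a hosted generative-model API call."""
    return f"[model-written copy for: {prompt[:60]}...]"

def personalized_message(voter: VoterRecord) -> str:
    # Targeting and messaging collapse into a single prompt: the model
    # writes copy tuned to one individual's inferred profile.
    prompt = (
        f"Write a short persuasive message for {voter.name} in {voter.district}, "
        f"who cares most about {voter.top_issue}, formatted for {voter.channel}."
    )
    return call_llm(prompt)

# Delivery step: iterate the whole file and route each message per channel.
voters = [VoterRecord("A. Sample", "District 9", "healthcare costs", "email")]
for v in voters:
    print(v.channel, personalized_message(v))
```

The point of the sketch is the panel's point: the expensive human steps (creative, copy, segmentation) become a loop over a database, which is why the speakers call it an exponential amplifier.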
08:05 All right. It is very easy to go down what you call, Julius, the rabbit hole of despair. It is very easy to get pretty deep in the rabbit hole of despair on this topic. But
08:12 let's start to climb out of it a little bit. OK? Can this technology, Tara,
08:20 help us solve the problem? How do we stop the wave, or slow the wave, or at least do a better job in the next round of elections?
08:29 You know, I have this former colleague, Megan Smith, who used to say, technology is us. You know, it's not a thing built by someone
08:37 else; it's all of the things that we are as humans. And so I think this is particularly true as we think about this, which is, it's our good
08:43 and our bad, and it is really about what we are aiming it at. In some ways, I think we're aiming some of the most emergent technologies... I mean, I think
08:50 the last health panel is an exception, but we're aiming our technology at some of the most uninteresting things right now. So we
08:59 can aim them and amplify for disinformation, or we can aim them and amplify for building public trust. And so I think it really is
09:08 incumbent on... you know, when we think about elections, they actually sit in an ecosystem of public and private spaces. And so, you know,
09:16 consultants work for firms, candidates work for themselves. And it's really about thinking, in this period that is unregulated:
09:24 take a breath; if you're in the media space, be discerning about how your tool is being used, whether or not you're the end user
09:33 of it. So really, I think it's an unusual space, where until we regulate, it's incumbent on each of the individual actors in the ecosystem to take
09:41 more responsibility. You're telling everyone to be responsible. I think we're in a period where "be responsible in 2024" is
09:48 what we want. It's tough. I mean, I can tell you, in the media space, for advertising-supported media, if you say take a breath
09:56 and wait: they all see these attention graphs that show that you have a brief hot moment to catch people's attention and offer your ad
10:07 and make some money, and then it disappears. If you wait, you lose the curve. So, the business I work for, we did some research with
10:14 opinion formers, looking ahead over the next few years at what they thought the big issues were. And obviously A.I. was massive. And the thing
10:20 that the debate quickly fell towards is this regulation point. And there were some people saying, look, don't be ridiculous, you can't regulate
10:28 something that's not fully formed yet; it moves at lightning speed, where legislation and regulation move at glacial speed. But what I
10:35 think is increasingly happening is people are starting to say, it's not "can we regulate," but "we should at least try." So Rishi Sunak
10:43 was very much of this kind of California, tech-world view, to allow this thing to go out there and let a thousand flowers bloom.
10:50 But yesterday his people were starting to brief: we have to try; we have to find out what that looks like. Well, I think one of the
10:56 things, and I think so far we've had a generation of social media, is basically calling out some of these businesses and saying, you
11:04 have to be more open and transparent. You have to start showing people what's going on with your algorithm. You have to see the
11:10 consequences of your actions on social media. Jonathan Haidt has just written a very interesting book on this. You have to see the
11:17 consequences of that. And frankly, this responsibility worldview needs to take over. There are lots of problems with it. It's not easy, it's
11:24 problematic. But I do think that if we are to avoid a disaster in this space, we need to at least try. So you're talking about
11:31 regulating the big platforms? Yeah, and also, I think, encouraging them to come forward and say, look, it isn't all about the advertising
11:38 dollar. Frankly, you've made enough money in that space anyway. You need to start showing some responsibility in this, because it's causing
11:45 real, serious problems. I think we really underestimate the capacity of institutions, both the platforms as well as the government, to
11:55 take actions like this. The speed of fact-checking is not up to the speed required, and this is before we have a vast expansion of
12:04 disinformation. We have not regulated in the United States, but the Biden administration put out an executive order that
12:11 basically commands the agencies to build capacity. This year there will be 100 new positions in the federal government that are focused on
12:18 this, and it orders the agencies to kind of audit themselves about where these technologies are seeping in. But again, that's a capacity
12:26 question that I'm not sure about; in order to do the checking, they need to do the building internally. I mean, many of the efforts, Julius, that
12:33 the platforms have made along these lines have just gotten them into more trouble, in the US at least. It
12:39 has Republicans attacking them for bias. Well, I don't think there's one silver bullet that's going to work. We need an all-of-the-
12:47 above approach. And of course, you know, building up capacity, local newspapers, getting press rooms up to speed so that fact-
12:55 checking can be done effectively, is certainly part of it. But I would also say that if big tech companies want to do business in Europe, I mean, we
13:03 can put some regulation on there and say, look, you want to make your ad dollars over here? We'll give you 24 hours to get that
13:11 stuff off the platform, and I want you to catch at least 98 percent of it. I think there is a way to identify deep fakes
13:17 and get them off. Yes. And I think we've got to push back a bit on some of this sort of
13:24 stuff, this "these nasty people criticized us and attacked us." Yeah, you're making hundreds of billions of dollars. Guys, stand up and get
13:33 used to that fact. I think we're too willing to let that slide by and say, oh, there's nothing we can really do. I suspect in 10, 20, 30
13:41 years we will actually make deep inroads into this and find a way. Lots of people say, for example, that these companies will just
13:48 move elsewhere, to more permissive regimes. I kind of think, again, we still have to try, because there's something much more
13:54 significant at stake here. Well, also, if I read the headline that Elon Musk is sort of carving out, you know, that unit that's supposed to
14:02 take care of election integrity and making sure that there is accurate information on the platform... you know, let's put a premium on
14:09 that. And if you cut those units from your business, well, maybe you can't do business anymore.
14:15 Can this work? I think this is a more-than-one-thing solution. What has been broken, you know, in terms of independent brokers, needs to
14:24 be recreated. And so it is both about regulation, and it is also about how we build the capacity to rebuild public trust. And I don't think
14:34 regulation alone will do that. I think it means really thinking about whether we are applying these new technologies to the practice. There's an
14:43 interesting project here in the UK that is applying new technologies, at the UK Policy Lab, learning from collective intelligence to
14:54 improve public trust. We need much more of that. Are there questions? Anyone? Yeah, there are a lot of questions. I suspected there
15:01 would be. Why don't we go right here.
15:06 Hi there. Natalia Yashtuk, from Learnlight. OK, a hot take on the whole conference, but on your session in particular:
15:14 we talk a lot about regulation, risk, privacy. It's a massive topic for this summit. And yet, when you look at what businesses are doing, when
15:26 they're uploading tons of data, as we heard in previous sessions, and when we look at what consumers are doing, not reading terms and conditions,
15:34 not necessarily caring about fact-checking what they read: are we working against human nature, and business nature, in trying to regulate?
15:43 And all I'll be hearing from the market is, we just want to move as fast as we can. Yeah. I mean, look, you make very, very good
15:51 points there. I suppose the thing that's ringing around my head is, just because you can't do everything doesn't mean you shouldn't do
15:56 something. And it's not going to be perfect, and it's going to be weak in places; there are going to be unintended consequences and all those kinds of
16:02 problems. You're absolutely right. But it's not been fashionable to say that sometimes you do need a parental attitude towards certain
16:11 things. You need to have an environment where people have specific boundaries, and that kind of thing. As you say, in terms of the terms
16:17 and conditions, people don't know what they're signing up to, what they're allowing. We don't have transparency on what these algorithms
16:23 are doing. It's perfectly clear that they are stoking division and driving people towards more extreme statements. That's problematic. We
16:31 need to work through this. There's a question right over here. Yes, thank you. And then one in the back.
16:42 My name is Octavia Sheepshanks. I'm a freelance researcher. I personally believe that in the future the amount of fake content will
16:52 possibly outweigh real content. But something we have complete control over is how skeptical we are, and I worry that not enough
16:59 attention is being put on encouraging people to be more skeptical. You know, because misinformation is only misinformation if we
17:08 believe it. So I think more investment should be put into content provenance and authenticity technology, whereby people just don't
17:18 believe something unless it's verified as true. And therefore, in theory, there would be no misinformation, if we were able to verify all
17:27 real content.
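For context on the provenance idea the questioner raises: standards like C2PA boil down to ordinary digital signatures, where the publisher signs content at creation and consumers treat anything that fails verification, or carries no signature at all, as unverified. A minimal sketch of that principle, using the Python cryptography package's Ed25519 API as a stand-in for a full provenance manifest:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the content bytes at creation time.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
content = b"Article text or image bytes would go here."
signature = private_key.sign(content)

# Consumer side: "don't believe it unless it verifies."
def is_verified(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, data)  # raises InvalidSignature if tampered
        return True
    except InvalidSignature:
        return False

print(is_verified(content, signature))               # True: provenance intact
print(is_verified(content + b" edited", signature))  # False: treat as unverified
```

Note the default this flips: instead of trying to detect fakes, everything unsigned is simply distrusted, which is exactly the skepticism the questioner is arguing for.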
17:35 You're really talking about education, can we educate, and also watermarking, right? I just don't think watermarking is that effective. And I feel like talk of watermarking and, you know, labeling deep fakes is itself misinformation, because it's implying to people
17:44 that that's necessarily possible. And I wonder what might happen if we see a watermarked picture one day prior to Election Day, and
17:53 everyone is saying, wow, this is watermarked, so therefore it has to be authentic, and all of a sudden it has even more gravitas than it might
17:59 have otherwise. But I wonder about that point. Now, this has been the year that we've seen those Trump images that I showed earlier. I
18:07 wonder what happens if we take another two, three years, and we've just been exposed to so many, you know, artificially
18:15 created pictures. Is that going to wear off? Are we all of a sudden going to have, you know, that sort of lens over it, where we see
18:23 it, recognize it, and just kind of, you know, push it to the side? I was talking to somebody who's thinking very hard about
18:29 regulation, and they were saying, in schools we need to teach our children not to ask, "is this a lie," but actually the question should be,
18:35 "am I sure it's not a lie?" What mental process do I have to go through to check its reliability? But that's like a 10-
18:43 year, 20-year solution. And I'm more worried about our grandparents than about our kids, to be honest with you. That's interesting.
18:52 One last question, way in the back there. Thank you. [Name inaudible], from the aforementioned Policy Lab in the UK government. And
19:02 I guess the question here is, there are lots of efforts to broaden who we're hearing from using collective intelligence. There are lots of
19:09 efforts in academia and in business to bring in bridging algorithms. Alan, you mentioned those hot-take, polarizing bits of content; there are
19:16 people working on algorithms to promote things that draw out consensus among people. And, to build on the previous questions, a lot of work
19:23 on pre-bunking, which might help, instead of building capacity to fact-check, to inoculate the public against misinformation. There
19:31 are lots of these efforts. The question is, how can the collective investment muscle in this room be directed towards those efforts
19:38 and away from the less productive ones?
19:45 Great last question. Each of you has like two seconds to answer it. Fantastic. Keep pushing. The project is public trust; all of that is the right place to focus. And if we can
19:56 keep coming back to that in any aspect of this, whether you're building a health technology or a media company, that is our collective
20:03 charge, and it will break us if we don't all step up. Well, thank you. I'm not sure we solved the problem in the last 20
20:10 minutes, but I think we're pushing in the right direction. So thank all three of you.
