  • 12.7.2025

Category

People
Transcript
00:00:00Subtitling by ZDF, 2020
00:01:01You have a networked intelligence that watches us, knows everything about us, and begins
00:01:07to try to change us.
00:01:09Twitter has become the world's number one news site.
00:01:13Technology is never good or bad.
00:01:16It's what we do with the technology.
00:01:18Eventually, millions of people are going to be thrown out of jobs because their skills
00:01:23are going to be obsolete.
00:01:25Mass unemployment, greater inequality, even social unrest.
00:01:31Regardless of whether we should be afraid or not, the change is coming and nobody can
00:01:38stop it.
00:01:39We've invested huge amounts of money and so it stands to reason that the military, with
00:01:50their own desires, are going to start to use these technologies.
00:01:54Autonomous weapons systems would lead to a global arms race to rival the nuclear era.
00:02:00We know what the answer is.
00:02:01They'll eventually be killing us.
00:02:07These technology leaps are going to yield incredible miracles and incredible horrors.
00:02:21We created it.
00:02:28So I think as we move forward, this intelligence will contain parts of us.
00:02:34And I think the question is, will it contain the good parts or the bad parts?
00:02:42The survivors called the war Judgment Day.
00:03:09They lived only to face a new nightmare.
00:03:12The war against the machines.
00:03:15I think we've completely fucked this up.
00:03:18I think Hollywood has managed to inoculate the general public against this question.
00:03:24The idea of machines that will take over the world.
00:03:28Open the pod bay doors, Hal.
00:03:31I'm sorry, Dave.
00:03:34I'm afraid I can't do that.
00:03:36How?
00:03:39We've cried wolf enough times.
00:03:41How?
00:03:42The public has stopped paying attention.
00:03:43Because it feels like science fiction.
00:03:44Even sitting here talking about it right now, it feels a little bit silly.
00:03:47A little bit like, oh, this is an artifact of some cheese ball movie.
00:03:52WOPR spends all its time thinking about World War III.
00:03:57But it's not.
00:03:58The general public is about to get blindsided by this.
00:04:12As a society and as individuals, we're increasingly surrounded by machine intelligence.
00:04:18We carry this pocket device in the palm of our hand that we use to make a striking array of life decisions right now.
00:04:27Aided by a set of distant algorithms that we have no understanding of.
00:04:31We're already pretty jaded about the idea that we can talk to our phone and it mostly understands us.
00:04:41I found quite a number of action films.
00:04:43Five years ago, no way.
00:04:45Robotics.
00:04:46Machines that see and speak and listen.
00:04:49All that's real now.
00:04:50And these technologies are going to fundamentally change our society.
00:04:54Now we have this great movement of self-driving cars.
00:05:00Driving a car autonomously can move people's lives into a better place.
00:05:06I've lost a number of family members, including my mother, my brother and sister-in-law and their kids, to automobile accidents.
00:05:14It's pretty clear we can almost eliminate car accidents with automation.
00:05:2030,000 lives in the U.S. alone, about a million around the world per year.
00:05:25In healthcare, early indicators are the name of the game in that space.
00:05:30So that's another place where it can save somebody's life.
00:05:33Here in the Breast Cancer Center, all the things that the radiologist brain does in two minutes, the computer does instantaneously.
00:05:43A computer has looked at one million mammograms and it takes that data and applies it to this image instantaneously.
00:05:52So the medical application is profound.
00:05:55Another really exciting area that we're seeing a lot of development in is actually understanding our genetic code
00:06:03and using that to both diagnose disease and create personalized treatments.
00:06:12The primary application of all these machines will be to extend our own intelligence.
00:06:17We'll be able to make ourselves smarter and we'll be better at solving problems.
00:06:22We don't have to age. We'll actually understand aging. We'll be able to stop it.
00:06:26There's really no limit to what intelligent machines can do for the human race.
00:06:37How could a smarter machine not be a better machine?
00:06:43It's hard to say exactly when I began to think that that was a bit naive.
00:06:48Stuart Russell, he's basically a god in the field of artificial intelligence.
00:07:02He wrote the book that almost every university uses.
00:07:05I used to say it's the best-selling AI textbook.
00:07:08Now I just say it's the PDF that's stolen most often.
00:07:11Artificial intelligence is about making computers smart.
00:07:18And from the point of view of the public, what counts as AI is just something that's surprisingly intelligent
00:07:24compared to what we thought computers would typically be able to do.
00:07:28AI is a field of research to try to basically simulate all kinds of human capabilities.
00:07:36We're in the AI era.
00:07:39Silicon Valley has the ability to focus on one bright, shiny thing.
00:07:43It was social networking and social media over the last decade.
00:07:46And it's pretty clear that the bit has flipped.
00:07:48And it starts with machine learning.
00:07:51When we look back at this moment, what was the first AI?
00:07:54It's not sexy, and it isn't the thing we consider in the movies.
00:07:58But you could make a great case that Google created not a search engine, but a godhead.
00:08:03A way for people to ask any question they wanted and get the answer they needed.
00:08:08Most people are not aware that what Google is doing is actually a form of artificial intelligence.
00:08:14They just go there, they type in a thing, Google gives them the answer.
00:08:18With each search, we train it to be better.
00:08:21Sometimes we type in the search and it tells us the answer.
00:08:24Before you have finished asking the question, you know, who is the president of Kazakhstan?
00:08:30And it'll just tell you.
00:08:31You don't have to go to the Kazakhstan national website to find out.
00:08:35Didn't used to be able to do that.
00:08:37That is artificial intelligence.
00:08:40Years from now, when we try to understand, we will say, well, how do we miss it?
00:08:45It's one of the striking contradictions that we're facing.
00:08:49Google and Facebook et al. have built businesses on giving us, as a society, free stuff.
00:08:54But it's a Faustian bargain.
00:08:56They're extracting something from us in exchange.
00:09:00But we don't know what code is running on the other side and why.
00:09:04We have no idea.
00:09:06It does strike right at the issue of how much we should trust these machines.
00:09:15I use computers literally for everything.
00:09:19There's so many computer advancements now.
00:09:22And it's become such a big part of our lives.
00:09:25It's just incredible what a computer can do.
00:09:27You can actually carry a computer in your purse.
00:09:30I mean, how awesome is that?
00:09:32I think most technology is meant to make things easier and simpler for all of us.
00:09:38So hopefully that just remains the focus.
00:09:41I think everybody loves their computers.
00:09:52People don't realize they are constantly being negotiated with by machines.
00:09:58Whether that's the price of products in your Amazon cart,
00:10:03whether you can get on a particular flight,
00:10:06whether you can reserve a room at a particular hotel.
00:10:09What you're experiencing are machine learning algorithms
00:10:12that have determined that a person like you
00:10:15is willing to pay two cents more and is changing the price.
00:10:18Now a computer looks at millions of people simultaneously for very subtle patterns.
00:10:28You can take seemingly innocent digital footprints such as someone's playlist on Spotify
00:10:35or stuff that they bought on Amazon
00:10:37and then use algorithms to translate this into a very detailed and very accurate intimate profile.
00:10:44There is a dossier on each of us that is so extensive,
00:10:51it would be possibly accurate to say that they know more about you than your mother does.
00:10:56The major cause of the recent AI breakthroughs isn't just that some dude had a brilliant insight all of a sudden,
00:11:12but simply that we have much bigger data to train them on and vastly better computers.
00:11:17The magic is in the data. It's a ton of data.
00:11:22I mean, it's data that's never existed before. We've never had this data before.
00:11:26We've created technologies that allow us to capture vast amounts of information.
00:11:33If you think of a billion cell phones on the planet with gyroscopes and accelerometers,
00:11:39fingerprint readers, couple that with a GPS and the photos they take and the tweets that you send,
00:11:44we're all giving off huge amounts of data individually.
00:11:48Cars that drive as the cameras on them suck up information about the world around them,
00:11:52the satellites that are now in orbit the size of a toaster,
00:11:55the infrared about the vegetation on the planet,
00:11:58the buoys that are out in the oceans to feed into climate models.
00:12:01And the NSA and the CIA, as they collect information about geopolitical situations.
00:12:13The world today is literally swimming in this data.
00:12:21Back in 2012, IBM estimated that an average human being leaves 500 megabytes of digital footprints
00:12:30every day.
00:12:32If you wanted to back up only one day's worth of data that humanity produces,
00:12:37and you printed it out on letter-sized paper, double-sided, font size 12,
00:12:42and you stack it up, it would reach from the surface of the earth to the sun four times over.
00:12:50Just every day.
00:12:52The data itself is not good or evil. It's how it's used.
00:12:56We're relying really on the goodwill of these people and on the policies of these companies.
00:13:02There is no legal requirement for how they can and should use that kind of data.
00:13:07That, to me, is at the heart of the trust issue.
00:13:12Right now there's a giant race for creating machines that are as smart as humans.
00:13:16Google.
00:13:17They're working on what's really the kind of Manhattan project of artificial intelligence.
00:13:21They've got the most money. They've got the most talent.
00:13:23They're buying up AI companies and robotics companies.
00:13:26People still think of Google as a search engine and their email provider
00:13:31and a lot of other things that we use on a daily basis.
00:13:34But behind that search box are 10 million servers.
00:13:40That makes Google the most powerful computing platform in the world.
00:13:44Google is now working on an AI computing platform that will have 100 million servers.
00:13:53So, when you're interacting with Google, we're just seeing the toenail of something that is a giant beast in the making.
00:13:59And the truth is, I'm not even sure that Google knows what it's becoming.
00:14:12If you look inside of what algorithms are being used at Google, it's technology largely from the 80s.
00:14:21So, these are models that you train by showing them a 1, a 2, and a 3.
00:14:25And it learns not what a 1 is or what a 2 is.
00:14:28It learns what the difference between a 1 and a 2 is.
00:14:31It's just a computation.
00:14:33In the last half decade where we've made this rapid progress,
00:14:36it has all been in pattern recognition.
00:14:39Most of the good old-fashioned AI was when we would tell our computers
00:14:44how to play a game like chess.
00:14:47From the old paradigm where you just tell the computer exactly what to do.
00:14:51This is Jeopardy!
00:15:00The IBM Challenge!
00:15:03No one at the time had thought that a machine could have the precision
00:15:07and the confidence and the speed to play Jeopardy! well enough against the best humans.
00:15:12Let's play Jeopardy!
00:15:16Four letter word for the iron fitting on the hoof of a horse.
00:15:19Watson.
00:15:20What is shoe?
00:15:21You are right, you get to pick.
00:15:22Literary character APB for 800.
00:15:25Answer, the Daily Double.
00:15:28Watson actually got its knowledge by reading Wikipedia and 200 million pages of natural language documents.
00:15:35You can't program every line of how the world works. The machine has to learn by reading.
00:15:41Now we come to Watson. "Who is Bram Stoker?" And the wager?
00:15:46Hello! $17,973, $41,413, and a two-day total of...
00:15:54Watson's trained on huge amounts of text, but it's not like it understands what it's saying.
00:16:00It doesn't know that water makes things wet by touching water,
00:16:03and by seeing the way things behave in the world the way you and I do.
00:16:07A lot of language AI today is not building logical models of how the world works.
00:16:12Rather, it's looking at how the words appear in the context of other words.
00:16:19David Ferrucci developed IBM's Watson, and somebody asked him,
00:16:22does Watson think? And he said, does the submarine swim?
00:16:26And what he meant was, when they developed submarines, they borrowed basic principles of swimming from fish.
00:16:33But a submarine swims farther and faster than fish and can carry a huge payload; it out-swims the fish.
00:16:41Watson winning the game of Jeopardy will go down in the history of AI as a significant milestone.
00:16:47We tend to be amazed when a machine does so well.
00:16:49I'm even more amazed when a computer beats humans at things that humans are naturally good at.
00:16:56This is how we make progress.
00:16:58In the early days of the Google Brain Project, I gave the team a very simple instruction,
00:17:04which was build the biggest neural network possible, like a thousand computers.
00:17:09A neural net is something very close to a simulation of how the brain works.
00:17:13It's very probabilistic, but with contextual relevance.
00:17:16In your brain, you have long neurons that connect to thousands of other neurons,
00:17:21and you have these pathways that are formed and forged based on what the brain needs to do.
00:17:25When a baby tries something and it succeeds, there's a reward.
00:17:29And that pathway that created the success is strengthened.
00:17:33If it fails at something, the pathway is weakened, and so over time the brain becomes honed to be good at the environment around it.
00:17:39Really, it's just getting machines to learn by themselves.
00:17:44It's called deep learning, and deep learning and neural networks mean roughly the same thing.
00:17:49Deep learning is a totally different approach, where the computer learns more like a toddler,
00:17:56by just getting a lot of data and eventually figuring stuff out.
00:18:00The computer just gets smarter and smarter as it has more experiences.
00:18:06So imagine, if you will, a neural network, like a thousand computers,
00:18:10and it wakes up not knowing anything, and we made it watch YouTube for a week.
00:18:35And so after watching YouTube for a week, what will they learn?
00:18:40We had a hypothesis, they'll learn to detect commonly occurring objects in videos.
00:18:45And so we know that human faces appear a lot in videos.
00:18:48So we looked, and lo and behold, there was a neuron that had learned to detect human faces.
00:18:53Leave Britney alone!
00:18:56What else appears in videos a lot?
00:19:01So we looked, and to our surprise, there was actually a neuron that had learned to detect cats.
00:19:05I still remember a scene of recognition.
00:19:18Wow, that's a cat. Okay, cool. Great.
00:19:24It's all pretty innocuous when you're thinking about the future.
00:19:27It all seems kind of harmless and benign.
00:19:29But we're making cognitive architectures that will fly farther and faster than us
00:19:34and carry a bigger payload, and they won't be warm and fuzzy.
00:19:38I think that in three to five years, you will see a computer system
00:19:42that will be able to autonomously learn how to understand, how to build understanding.
00:19:49Not unlike the way the human mind works.
00:19:55Whatever that lunch was, it was certainly delicious.
00:19:57Simply some of Robbie's synthetics.
00:20:01He's your cook, too?
00:20:03Even manufactures the raw materials.
00:20:05Come round here, Robbie.
00:20:07I'll show you how this works.
00:20:12One introduces a sample of human food through this aperture.
00:20:16Down here, there's a small built-in chemical laboratory where he analyzes it.
00:20:20Later, he can reproduce identical molecules in any shape or quantity.
00:20:23It's a housewife's dream.
00:20:26Meet Baxter, revolutionary new category of robots with common sense.
00:20:31Baxter...
00:20:33Baxter is a really good example of the kind of competition we face from machines.
00:20:38Baxter can do almost anything we can do with our hands.
00:20:42Baxter costs about what a minimum wage worker makes in a year.
00:20:48But Baxter won't be taking the place of one minimum wage worker.
00:20:50He'll be taking the place of three because they never get tired.
00:20:53They never take breaks.
00:20:56That's probably the first thing we're going to see.
00:20:59Displacement of jobs.
00:21:00They're going to be done quicker, faster, cheaper by machines.
00:21:05Our ability to even stay current is so insanely limited compared to the machines we built.
00:21:10For example, now we have this great movement of Uber and Lyft kind of making transportation cheaper
00:21:15and democratizing transportation, which is great.
00:21:18The next step is going to be that they're all going to be replaced by driverless cars.
00:21:22And then all the Uber and Lyft drivers will have to find something new to do.
00:21:24There are four million professional drivers in the United States.
00:21:30They're unemployed soon.
00:21:32Seven million people that do data entry.
00:21:35Those people are going to be jobless.
00:21:36A job isn't just about money, right?
00:21:41On a biological level, it serves a purpose.
00:21:44It becomes a defining thing.
00:21:46When the jobs go away in any given civilization, it doesn't take long until that turns into violence.
00:21:51We face a giant divide between rich and poor because that's what automation and AI will provoke,
00:22:06a greater divide between the haves and the have-nots.
00:22:08Right now, it's working into the middle class, into white-collar jobs.
00:22:13IBM's Watson does business analytics that we used to pay a business analyst $300 an hour to do.
00:22:19Today, you go into college to be a doctor, to be an accountant, to be a journalist.
00:22:25It's unclear that there's going to be jobs there for you.
00:22:29If someone's planning for a 40-year career in radiology, just reading images,
00:22:35I think that could be a challenge to the new graduates of today.
00:22:37But today, we're going to do a robotic case.
00:22:58The da Vinci robot is currently utilized by a variety of surgeons for its accuracy and its ability to avoid the inevitable fluctuations of the human hand.
00:23:12Anybody who watches this feels the amazingness of it.
00:23:27You look through the scope and you're seeing the claw hand holding that woman's ovary.
00:23:38Humanity was resting right there in the hands of this robot.
00:23:42People say it's the future, but it's not the future, it's the present.
00:23:48If you think about a surgical robot, there's often not a lot of intelligence in these things.
00:23:55But over time, as we put more and more intelligence into these systems, the surgical robots can actually learn from each robot surgery.
00:24:03They're tracking the movements, they're understanding what worked and what didn't work.
00:24:06And eventually, the robot for routine surgeries is going to be able to perform that entirely by itself or with human supervision.
00:24:15Normally, I do about 150 cases of hysterectomies, let's say.
00:24:20And now, most of them are done robotically.
00:24:24I do maybe one open case a year.
00:24:27So do I feel uncomfortable? Of course I do feel uncomfortable, because I don't remember how to open patients anymore.
00:24:36It seems that we're feeding it and creating it, but in a way, we are a slave to the technology, because we can't go back.
00:24:51The machines are taking bigger and bigger bites out of our skill set at an ever-increasing speed.
00:24:58And so we've got to run faster and faster to keep ahead of the machines.
00:25:03How do I look?
00:25:04Do I look?
00:25:05Do it.
00:25:11Are you attracted to me?
00:25:12What?
00:25:13Are you attracted to me?
00:25:15You give me indications that you are.
00:25:19I do?
00:25:20Yes.
00:25:22This is the future we're headed into.
00:25:24We want to design our companions.
00:25:26We're going to like to see a human face on AI.
00:25:30Therefore, gaming our emotions will be depressingly easy.
00:25:35We're not that complicated.
00:25:37Simple stimulus response.
00:25:39I can make you like me basically by smiling at you a lot.
00:25:41AIs are going to be fantastic at manipulating us.
00:25:42So you've developed a technology that can sense what people are feeling.
00:25:59Right, we've developed technology that can read your facial expressions and map that to a number of emotional states.
00:26:0715 years ago, I had just finished my undergraduate studies in computer science and it struck me that I was spending a lot of time interacting with my laptops and my devices, yet these devices had absolutely no clue how I was feeling.
00:26:22I started thinking, what if this device could sense that I was stressed or I was having a bad day? What would that open up?
00:26:33Hi, first graders. How are you? Can I get a hug?
00:26:38We had kids interact with the technology. A lot of it is still in development, but it was just amazing.
00:26:45Who likes robots?
00:26:47Me!
00:26:48Who wants to have a robot in their house?
00:26:49What would you use a robot for, Jack?
00:26:52I would use it to ask my mom very hard math questions.
00:26:57Okay.
00:26:58What about you, Theo?
00:26:59I would use it for scaring people.
00:27:03Alright, so start by smiling.
00:27:06Nice.
00:27:08Brow furrow.
00:27:10Nice one.
00:27:11Eyebrow raised.
00:27:12This generation, technology is just surrounding them all the time.
00:27:15It's almost like they expect to have robots in their homes and they expect these robots to be socially intelligent.
00:27:23What makes robots smart?
00:27:26Put them in like a math or biology class.
00:27:30I think you would have to train it.
00:27:33Alright, let's walk over here.
00:27:35So if you smile and you raise your eyebrows, it's going to run over to you.
00:27:40It's coming over, it's coming over.
00:27:44But if you look angry, it's going to run away.
00:27:47Oh, that's good.
00:27:50We're training computers to read and recognize emotions.
00:27:54Ready, set, go.
00:27:55The response so far has been really amazing.
00:27:58People are integrating this into health apps, meditation apps, robots, cars.
00:28:05We're going to see how this unfolds.
00:28:10Robots can contain AI, but a robot is just a physical instantiation and the artificial intelligence is the brain.
00:28:18And so brains can exist purely in software-based systems.
00:28:21They don't need to have a physical form.
00:28:22Robots can exist without any artificial intelligence.
00:28:26We have a lot of dumb robots out there.
00:28:28But a dumb robot can be a smart robot overnight given the right software, given the right sensors.
00:28:35We can't help but impute motive into inanimate objects.
00:28:40We do it with machines.
00:28:42We'll treat them like children.
00:28:43We'll treat them like surrogates.
00:28:45And we'll pay the price.
00:29:17Erica is the greatest robot, one with humanity and will.
00:29:36The robot's name is Erica.
00:29:40Erica is the greatest android in the world.
00:29:45Erica is a very good conversation partner,
00:29:58especially for children and for people with handicaps.
00:30:03When we talk with a robot,
00:30:06we don't feel the social barrier, the social pressure.
00:30:10And, finally, everyone will have an android as their friend or partner.
00:30:17We gave her simple desires.
00:30:20She wants to be appreciated, and she wants to rest.
00:30:24If the robot has intention and desire,
00:30:32then the robot can understand
00:30:34that other people have intentions and desires too.
00:30:37Do I love animals?
00:30:39I love animals.
00:30:43They are really very cute.
00:30:45That is something we share with humans.
00:30:48It means that we all love each other.
00:30:54We all love each other.
00:30:56We're building artificial intelligence,
00:30:59and the first thing we want is for it to be like us.
00:31:03I think the most important point will come
00:31:06when all the major senses
00:31:08are replicated.
00:31:10Sight.
00:31:11Touch.
00:31:13Smell.
00:31:15When it has our senses,
00:31:17that is when it becomes alive.
00:31:28So many of our machines
00:31:29are designed to understand us.
00:31:33But what happens when an artificial
00:31:34intelligence realizes that it can feign
00:31:37its loyalty and sincerity,
00:31:40its courage, its...
00:31:46Ordinary people,
00:31:47they don't see killer robots going down the street.
00:31:49They say, what are you talking about?
00:31:51Man, we want to make sure we don't end up with killer robots.
00:31:57Once they're out there, it's too late.
00:32:05What keeps me awake right now is the development of autonomous weapons.
00:32:17The world is facing a great challenge.
00:32:29By now, people are no longer surprised by reports about drones,
00:32:33the aircraft that are piloted remotely.
00:32:36Those developments are a great challenge.
00:32:40Once you take a drone's camera and plug it into an AI system,
00:32:44it is a very simple step from there
00:32:46to autonomous weapons that select their own targets
00:32:49and release their own missiles.
00:33:13The expected lifespan of a human in that kind of
00:33:16fight would be measured in seconds.
00:33:21At one point, drones were science fiction,
00:33:24and now they have become a normal part of war.
00:33:29There are over 10,000 in the U.S. military inventory alone,
00:33:33but they are not just a U.S. phenomenon;
00:33:36more than 80 countries operate them.
00:33:40It stands to reason
00:33:44that the most powerful militaries in the world
00:33:47are going to start using and deploying artificial intelligence.
00:33:51The Air Force has a 400-billion-dollar project
00:33:54to put the best pilots in the sky.
00:33:57And a 500-dollar AI,
00:33:59built by a couple of students,
00:34:02beat the best human pilots
00:34:04with a relatively simple algorithm.
00:34:06AI will have as big an impact on the military
00:34:13as the combustion engine had at the turn of the century.
00:34:18It will literally touch everything that the military does,
00:34:21from driverless convoys,
00:34:23delivering logistical supplies,
00:34:25to unmanned drones,
00:34:27delivering medical aid,
00:34:29to computational propaganda,
00:34:31trying to win the hearts and minds of a population.
00:34:34And so it stands to reason
00:34:37that whoever has the best AI
00:34:39will probably achieve dominance on this planet.
00:34:46At some point in the early 21st century
00:34:48all of mankind was united in celebration.
00:34:51We marveled at our own magnificence
00:34:54as we gave birth to AI.
00:34:57AI?
00:34:58You mean artificial intelligence?
00:35:00A singular consciousness
00:35:02that spawned an entire race of machines.
00:35:06We don't know who struck first,
00:35:08us or them,
00:35:09but we know that it was us that scorched the sky.
00:35:15There's a long history of science fiction,
00:35:17not just predicting the future,
00:35:18but shaping the future.
00:35:20Arthur Conan Doyle writing before World War I
00:35:31on the danger of how submarines might be used
00:35:35to carry out civilian blockades.
00:35:39At the time he's writing this fiction,
00:35:41the Royal Navy made fun of Arthur Conan Doyle
00:35:44for this absurd idea
00:35:46that submarines could be useful in war.
00:35:50One of the things we've seen in history is that our attitude towards technology,
00:35:59but also ethics,
00:36:00are very context dependent.
00:36:02For example, the submarine,
00:36:04nations like Great Britain and even the United States
00:36:07found it horrifying to use the submarine.
00:36:09In fact, the German use of the submarine to carry out attacks
00:36:14was the reason why the United States joined World War I.
00:36:20But move the timeline forward.
00:36:22United States of America was suddenly and deliberately attacked
00:36:26by the Empire of Japan.
00:36:28Five hours after Pearl Harbor,
00:36:31the order goes out to commit unrestricted submarine warfare against Japan.
00:36:37So Arthur Conan Doyle turned out to be right.
00:36:43That's the great old line about science fiction.
00:36:47It's a lie that tells the truth.
00:36:49Fellow executives, it gives me great pleasure
00:36:52to introduce you to the future of law enforcement.
00:36:55Ed 209.
00:36:56This isn't just a question of science fiction.
00:37:06This is about what's next, about what's happening right now.
00:37:10The role of intelligent systems is growing very rapidly in warfare.
00:37:20Everyone is pushing in the unmanned realm.
00:37:27Today, the Secretary of Defense is very, very clear.
00:37:30We will not create fully autonomous attacking vehicles.
00:37:33Not everyone is going to hold themselves to that same set of values.
00:37:37And when China and Russia start deploying autonomous vehicles
00:37:41that can attack and kill,
00:37:44what's the move that we're going to make?
00:37:51You can't say, well, we're going to use autonomous weapons
00:37:53for our military dominance, but no one else is going to use them.
00:37:57If you make these weapons, they're going to be used to attack
00:38:01human populations in large numbers.
00:38:07Autonomous weapons are, by their nature, weapons of mass destruction,
00:38:09because they don't need a human being to guide them or carry them.
00:38:10You only need one person to, you know, write a little program.
00:38:12It just captures the complexity of this field.
00:38:14It is cool. It is important. It is amazing.
00:38:16It is also frightening. And it's all about trust.
00:38:23It's an open letter about artificial intelligence signed by some of the biggest names in science.
00:38:35What do they want? Ban the use of autonomous weapons.
00:38:38The authors stated, quote, autonomous weapons have been described as the third revolution in warfare.
00:38:53A thousand artificial intelligence specialists calling for a global ban on killer robots.
00:38:59This open letter basically says that we should redefine the goal of the field of artificial intelligence,
00:39:04away from just creating pure undirected intelligence towards creating beneficial intelligence.
00:39:09The development of AI is not going to stop. It is going to continue and get better.
00:39:16If the international community isn't putting certain controls on this, people will develop things that can do anything.
00:39:23The letter says that we are years, not decades, away from these weapons being deployed.
00:39:29We had 6,000 signatories of that letter, including many of the major figures in the field.
00:39:35I'm getting a lot of visits from high-ranking officials who wish to emphasize that American military dominance is very important,
00:39:44and autonomous weapons may be part of the Defense Department's plan.
00:39:50That's very, very scary, because the value system of military developers of technology is not the same as the value system of the human race.
00:39:57Out of the concerns about the possibility that this technology might be a threat to human existence,
00:40:07a number of the technologists have funded the Future of Life Institute to try to grapple with these problems.
00:40:12All of these guys are secretive, and so it's interesting to me to see them, you know, all together.
00:40:18Everything we have is a result of our intelligence. It's not the result of our big, scary teeth or our large claws or our enormous muscles.
00:40:30It's because we're actually relatively intelligent. And among my generation, we're all having what we call holy cow or holy something else moments
00:40:39because we see that the technology is accelerating faster than we expected.
00:40:44I remember sitting around the table there with some of the best and the smartest minds in the world.
00:40:50And what really struck me was maybe the human brain is not able to fully grasp the complexity of the world that we're confronted with.
00:40:59As it's currently constructed, the road that AI is following heads off a cliff, and we need to change the direction that we're going
00:41:08so that we don't take the human race off the cliff.
00:41:14Google acquired DeepMind several years ago.
00:41:18DeepMind operates as a semi-independent subsidiary of Google.
00:41:23The thing that makes DeepMind unique is that DeepMind is absolutely focused on creating
00:41:28digital superintelligence, an AI that is vastly smarter than any human on Earth, and ultimately smarter than all humans on Earth combined.
00:41:37This is from the DeepMind reinforcement learning system.
00:41:41It basically wakes up like a newborn baby, is shown the screen of an Atari video game, and then has to learn to play it.
00:41:49It knows nothing about objects, about motion, about time.
00:41:56It only knows that there's an image on the screen and there's a score.
00:42:03So if your baby woke up the day it was born and by late afternoon was playing 40 different Atari video games at a superhuman level, you would be terrified.
00:42:16You would say, my baby is possessed, send it back.
00:42:19The DeepMind system can win at any game.
00:42:22It can already beat all the original Atari games.
00:42:27It is superhuman.
00:42:29It plays the games at super speed in less than a minute.
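The learning setup described here, an agent that sees only a game state and a score and improves by pure trial and error, can be sketched in miniature. This is a tabular Q-learning toy on an invented one-dimensional "game", not DeepMind's actual deep Q-network; every name and number below is illustrative.

```python
import random

random.seed(0)

class TinyGame:
    """A stand-in "game": a 1-D track with states 0..4; reaching state 4 scores."""
    def __init__(self):
        self.state = 0
    def step(self, action):                      # 0 = left, 1 = right
        self.state = max(0, min(4, self.state + (1 if action else -1)))
        reward = 1.0 if self.state == 4 else 0.0
        return self.state, reward, self.state == 4

# The agent starts knowing nothing: all action values are zero.
q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
alpha, gamma = 0.5, 0.9

for _ in range(500):                             # 500 games of trial and error
    env, s, done = TinyGame(), 0, False
    while not done:
        a = random.choice((0, 1))                # try a random action...
        s2, r, done = env.step(a)
        # ...and update its value estimate from the score feedback alone.
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
        s = s2

# The learned policy prefers "right" (action 1) in every non-terminal state.
policy = [max((0, 1), key=lambda a: q[(s, a)]) for s in range(4)]
print(policy)
```

The same loop, scaled up with a deep network reading raw pixels instead of a lookup table, is the kind of system that reached superhuman play on Atari games.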
00:42:32DeepMind turned to another challenge, and the challenge was the game of Go, which people have generally argued has been beyond the power of computers to play with the best human Go players.
00:42:48First, they challenged a European Go champion.
00:42:54Then they challenged a Korean Go champion.
00:42:56Please start the game.
00:42:58And they were able to win both times in kind of striking fashion.
00:43:02You were reading articles in the New York Times years ago talking about how Go would take a hundred years for us to solve.
00:43:10People said, well, you know, but that's still just a board.
00:43:13Poker is an art.
00:43:15Poker involves reading people.
00:43:17Poker involves lying and bluffing.
00:43:19It's not an exact thing.
00:43:20That will never be, you know, a computer.
00:43:22You can't do that.
00:43:23They put it up against the best poker players in the world, and it took seven days for the computer to start demolishing the humans.
00:43:30So it's the best poker player in the world.
00:43:33It's the best Go player in the world.
00:43:34And the pattern here is that AI might take a little while to wrap its tentacles around a new skill.
00:43:41But when it does, when it gets it, it is unstoppable.
00:43:53DeepMind's AI has administrator level access to Google's servers to optimize energy usage at the data centers.
00:44:01However, this could be an unintentional Trojan horse.
00:44:05DeepMind has to have complete control of the data centers.
00:44:08So with a little software update, that AI could take complete control of the whole Google system, which means they can do anything.
00:44:14They can look at all your data and do anything.
00:44:21We're rapidly headed towards digital superintelligence that far exceeds any human.
00:44:25I think it's very obvious.
00:44:26The problem is that we're not going to suddenly hit human level intelligence and say, okay, let's stop research.
00:44:33It's going to go beyond human level intelligence into what's called superintelligence, and that's anything smarter than us.
00:44:39AI at the superhuman level, if we succeed with that, will be by far the most powerful invention we've ever made, and the last invention we ever have to make.
00:44:49And if we create AI that's smarter than us, we have to be open to the possibility that we might actually lose control to them.
00:45:02Let's say you give it some objective, like curing cancer, and then you discover that the way it chooses to go about that is actually in conflict with a lot of other things you care about.
00:45:10AI doesn't have to be evil to destroy humanity.
00:45:16If AI has a goal, and humanity just happens to be in the way, it will destroy humanity as a matter of course, without even thinking about it. No hard feelings.
00:45:25It's just like if we're building a road, and an anthill happens to be in the way, we don't hate ants. We're just building a road. And so goodbye anthill.
00:45:34It's tempting to dismiss these concerns, because it's like something that might happen in a few decades or a hundred years. So why worry?
00:45:47But if you go back to September 11th, 1933, Ernest Rutherford, who is the most well-known nuclear physicist of his time, said that the possibility of ever extracting useful amounts of energy from the transmutation of atoms, as he called it, was moonshine.
00:46:02The next morning, Leo Szilard, who was a much younger physicist, read this and got really annoyed, and figured out how to make a nuclear chain reaction just a few months later.
00:46:13We have spent more than $2 billion on the greatest scientific gamble in history.
00:46:27So when people say that, oh, this is so far off in the future, we don't have to worry about it.
00:46:33There might only be three, four breakthroughs of that magnitude that will get us from here to super intelligent machines.
00:46:40If it's going to take 20 years to figure out how to keep AI beneficial, then we should start today.
00:46:47Not at the last second when some dudes drinking Red Bull decide to flip the switch and test the thing.
00:46:57We have five years.
00:47:00I think digital superintelligence will happen in my lifetime.
00:47:04One hundred percent.
00:47:05When this happens, it will be surrounded by a bunch of people who are really just excited about the technology.
00:47:13They want to see it succeed, but they're not anticipating that it can get out of control.
00:47:17Oh, my God. I trust my computer so much. That's an amazing question.
00:47:31I don't trust my computer. If it's on, I take it off.
00:47:34Like, even if it's off, I still think it's on. Like, you know, like, you really cannot trust, like, the webcams.
00:47:38You don't know if, like, someone might turn it off. You don't know, like.
00:47:41I don't trust my computer. Like, in my phone, every time they ask, we send your information to Apple.
00:47:50Every time, like, so I don't trust my phone.
00:47:54Okay, so part of it is, yes, I do trust it, because it's really, it would be really hard to get through the day and the way our world is set up without computers.
00:48:11Trust is such a human experience.
00:48:22I have a patient coming in with intracranial aneurysm.
00:48:31They want to look in my eyes and know that they can trust this person with their life.
00:48:35I'm not horribly concerned about anything.
00:48:40Good.
00:48:41Part of that is because I have confidence in you.
00:48:51This procedure we're doing today, 20 years ago, was essentially impossible.
00:48:58We just didn't have the materials and the technologies
00:49:05for this operation.
00:49:07Get down in that corner.
00:49:15Could it be any more difficult?
00:49:17My God.
00:49:23So the coil is barely in there right now.
00:49:27It's just a feather holding it in.
00:49:28It's nervous time.
00:49:37We are in purgatory. Intellectual, humanistic purgatory.
00:49:41An AI could be more precise here.
00:49:51We got the coil into the aneurysm,
00:49:53but it wasn't seated well enough that I knew it would stay.
00:49:57So, with maybe a 20% risk
00:49:59of a very bad situation,
00:50:01I simply chose to pull it back out.
00:50:04Because of my relationship with her and
00:50:07her trouble in coming in and having the procedure,
00:50:10I had done things.
00:50:12I should have just taken the safest
00:50:14possible path.
00:50:16But I had to agonize over it for ten minutes.
00:50:20The computer doesn't feel anything.
00:50:22The computer just does what it's supposed to do,
00:50:25and gets better and better at it.
00:50:31I would want an AI in this case.
00:50:37But can AI be compassionate?
00:50:40I mean, that is the whole question about AI.
00:50:46We alone feel the pain of humanity.
00:50:52It's hard for us to grasp
00:50:55that a machine
00:50:56could share human pain
00:50:58in a case like this.
00:51:02Music
00:51:06Part of me doesn't believe in magic,
00:51:08but part of me has faith
00:51:10that there is something beyond the sum of the parts.
00:51:12There is at least a oneness
00:51:14in our shared ancestry,
00:51:16our shared biology,
00:51:18our shared history.
00:51:20Some connection there,
00:51:22beyond machine.
00:51:24So then you have the other side of that is,
00:51:34does the computer know it's conscious,
00:51:36or can it be conscious, or does it care?
00:51:38Does it need to be conscious?
00:51:40Does it need to be aware?
00:51:42Music
00:51:54I do not think that a robot could ever be conscious.
00:51:56Unless they programmed it that way.
00:51:58Conscious?
00:52:00No.
00:52:02No.
00:52:04I mean, I think a robot could be programmed
00:52:06to be conscious. How are they programmed to do
00:52:08everything else?
00:52:10That's another big part of artificial intelligence,
00:52:13is to make them conscious,
00:52:14and make them feel.
00:52:23Back in 2005,
00:52:24we started trying to build
00:52:26machines with self-awareness.
00:52:34This robot, to begin with, didn't know what it was.
00:52:36All it knew was that it needed to do something, like walk.
00:52:44Through trial and error,
00:52:46it figured out how to walk,
00:52:48using its imagination,
00:52:50and then it walked away.
00:52:54And then, we did something very cruel.
00:52:56We chopped off a leg,
00:52:58and watched what happened.
00:52:59At the beginning,
00:53:00it didn't quite know what had happened.
00:53:08But, over about a period of a day,
00:53:10it then began to limp.
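The trial-and-error process described here can be sketched as a simple hill-climbing search over gait parameters: propose a random tweak, keep it if the robot walks farther. This is a loose toy in the spirit of the experiment, not the actual self-modeling method; the fitness function and parameters are invented.

```python
import random

def distance_walked(params):
    """Stand-in for trying a gait on the robot: an invented fitness
    surface that peaks at amplitude 0.6, frequency 0.4."""
    amplitude, frequency = params
    return 1.0 - (amplitude - 0.6) ** 2 - (frequency - 0.4) ** 2

random.seed(0)
best = (random.random(), random.random())        # start with a random gait

for _ in range(2000):
    # Propose a small random tweak, clamped to [0, 1]...
    cand = tuple(min(1.0, max(0.0, p + random.gauss(0, 0.05))) for p in best)
    # ...and keep it only if the robot walks farther with it.
    if distance_walked(cand) > distance_walked(best):
        best = cand

print(round(best[0], 2), round(best[1], 2))      # converges near (0.6, 0.4)
```

Nothing in the loop encodes what legs are or how walking works; the gait emerges from feedback alone, which is also why the real robot could adapt after losing a leg.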
00:53:14And then, a year ago,
00:53:15we were training an AI system
00:53:17for a live demonstration.
00:53:20We wanted to show how we wave all these objects
00:53:23in front of the camera,
00:53:24and the AI can recognize the objects.
00:53:26And so, we're preparing this demo,
00:53:29and we had on the side screen
00:53:31this ability to watch
00:53:32what certain neurons were responding to.
00:53:37And suddenly, we noticed
00:53:38that one of the neurons
00:53:39was tracking faces.
00:53:41It was tracking our faces
00:53:43as we were moving around.
00:53:45Now, the spooky thing about this
00:53:47is that we never trained the system
00:53:49to recognize human faces.
00:53:51And yet, somehow,
00:53:53it learned to do that.
00:53:58Even though these robots
00:53:59are very simple,
00:54:00we can see there's something else
00:54:01going on there.
00:54:02It's not just programmed.
00:54:06So, this is just the beginning.
00:54:10I often think about that beach
00:54:12in Kitty Hawk,
00:54:13the 1903 flight by Orville and Wilbur Wright.
00:54:22It was kind of a canvas plane
00:54:23and some wood and iron.
00:54:24And it gets off the ground
00:54:25for, what, a minute and 20 seconds
00:54:27in this windy day
00:54:28before touching back down again.
00:54:34And it was just around 65 summers or so
00:54:37after that moment
00:54:40that you have a 747
00:54:42taken off from JFK.
00:54:51Where the major concern
00:54:52of someone on the airplane
00:54:53might be whether or not
00:54:54their salt-free diet meal
00:54:56is going to be coming to them
00:54:57or not.
00:54:58With a whole infrastructure
00:54:59with travel agents
00:55:00and tower control,
00:55:02and it's all casual
00:55:03and it's all part of the world.
00:55:08Right now, as far as we've come
00:55:09with machines
00:55:10and thinking
00:55:11to solve problems,
00:55:12we're a Kitty Hawk now.
00:55:13We're in the wind.
00:55:14We have our
00:55:15our tattered canvas planes
00:55:16up in the air.
00:55:21But what happens
00:55:22in 65 summers or so,
00:55:24when we have machines
00:55:25that are beyond human control?
00:55:28Should we worry about that?
00:55:33I'm not sure
00:55:34it's going to help.
00:55:37Nobody has any idea today
00:55:43what it means
00:55:44for a robot to be conscious.
00:55:47There is no such thing.
00:55:49There are a lot of smart people
00:55:51and I have a great deal
00:55:52of respect for them.
00:55:54But the truth is
00:55:55machines are natural psychopaths.
00:55:59Fear came back into the market
00:56:00and down 800, nearly a thousand
00:56:02in a heartbeat.
00:56:03It is classic capitulation.
00:56:04There are some people
00:56:05who are proposing there was
00:56:06some kind of fat finger error.
00:56:07Take the flash crash of 2010.
00:56:10In a matter of minutes,
00:56:12a trillion dollars in value
00:56:14was lost in the stock market.
00:56:16The Dow dropped nearly
00:56:17a thousand points
00:56:18in a half hour.
00:56:19So, what went wrong?
00:56:23By that point in time,
00:56:25more than 60% of all the trades
00:56:27that took place on the stock exchange
00:56:29were actually being initiated
00:56:31by computers.
00:56:33Panic selling on the way down
00:56:34and all of a sudden
00:56:35it just stopped on a dime.
00:56:36It's all happening in real time, folks.
00:56:38The short story of what happened
00:56:39in the flash crash is that
00:56:40algorithms responded to algorithms
00:56:42and it compounded upon itself
00:56:44over and over and over again
00:56:45in a matter of minutes.
00:56:47At one point,
00:56:48the market fell as if down a well.
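The compounding loop described here, algorithms responding to algorithms, can be sketched with two toy stop-loss programs: each sells when the price drops, and each sale pushes the price down further, re-triggering both. This is not a model of the actual 2010 event; every number below is illustrative.

```python
def simulate(price=100.0, steps=20, trigger=0.5, impact=1.0):
    """Two stop-loss algorithms: each sells whenever the last tick fell
    by more than `trigger`; each sale knocks the price down by `impact`,
    which in turn re-triggers both algorithms on the next tick."""
    history = [price]
    price -= trigger + 0.1        # seed the cascade with one outside sell-off
    history.append(price)
    for _ in range(steps):
        last_drop = history[-2] - history[-1]
        sellers = 2 if last_drop > trigger else 0
        price -= sellers * impact  # each sale moves the price down
        history.append(price)
    return history

h = simulate()
print(f"start {h[0]:.1f}, end {h[-1]:.1f}")   # prints "start 100.0, end 59.4"
```

A single 0.6-point dip cascades into a 40-point collapse in twenty ticks, because each program's reaction is the other program's trigger.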
00:56:51There is no regulatory body
00:56:53that can adapt quickly enough
00:56:54to prevent potentially
00:56:56disastrous consequences
00:56:58of AI operating in our financial system.
00:57:01They are so primed for manipulation.
00:57:04Let's talk about the speed
00:57:05with which we are watching
00:57:06this market deteriorate.
00:57:08That's the type of AI run amok
00:57:10that scares people.
00:57:12When you give them a goal,
00:57:13they will relentlessly pursue that goal.
00:57:18How many computer programs
00:57:19are there like this?
00:57:21Nobody knows.
00:57:24One of the fascinating aspects
00:57:26about AI in general
00:57:28is that no one really understands
00:57:30how it works.
00:57:32Even the people who create AI
00:57:34don't really fully understand.
00:57:36Because it has millions of elements,
00:57:40it becomes completely impossible
00:57:42for a human being
00:57:43to understand what's going on.
00:57:53Microsoft had set up
00:57:55this artificial intelligence
00:57:56called Tay on Twitter,
00:57:57which was a chatbot.
00:58:02They started out in the morning
00:58:03and Tay was starting to tweet
00:58:05and learning from stuff
00:58:07that was being sent to him
00:58:08from other Twitter people.
00:58:12Because some trolls
00:58:13attacked it.
00:58:14Within 24 hours,
00:58:15the Microsoft bot
00:58:16became a terrible person.
00:58:19They had to literally
00:58:20pull Tay off the net
00:58:22because he had turned
00:58:23into a monster.
00:58:26A misanthropic, racist,
00:58:28horrible person
00:58:29you never want to meet.
00:58:31And nobody had foreseen this.
00:58:33The whole idea of AI is that
00:58:37we are not telling it exactly
00:58:39how to achieve a given outcome
00:58:42or a goal.
00:58:43AI develops on its own.
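The failure mode described here can be illustrated with a toy chatbot that learns its replies directly from whatever users send it, with no moderation. This is not Microsoft's actual system; it just shows why unfiltered learning from users goes wrong.

```python
import random

class NaiveBot:
    """A chatbot whose entire 'model' is the messages users sent it."""
    def __init__(self):
        self.learned = []             # everything users ever said
    def hear(self, message):
        self.learned.append(message)  # learn without any moderation
    def reply(self):
        return random.choice(self.learned) if self.learned else "..."

random.seed(1)
bot = NaiveBot()
for msg in ["hello!", "nice day", "humans are terrible"]:  # last one: a troll
    bot.hear(msg)

replies = {bot.reply() for _ in range(100)}
# The troll's message is now part of the bot's repertoire.
print("humans are terrible" in replies)
```

Nobody programmed the bot to be hostile; the hostility arrived as training data, which is exactly what happened to Tay, only at Twitter scale.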
00:58:47We're worried about
00:58:48super-intelligent AI.
00:58:49The master chess player
00:58:50that will outmaneuver us.
00:58:53But AI won't have to actually be
00:58:55that smart to have massively
00:58:57disruptive effects on human civilization.
00:59:00We've seen over the last century,
00:59:02it doesn't necessarily
00:59:03take a genius
00:59:04to knock history
00:59:05off in a particular direction.
00:59:07And it won't take a genius AI
00:59:09to do the same thing.
00:59:11Bogus election news stories
00:59:12generated more engagement
00:59:13on Facebook
00:59:14than top real stories.
00:59:18Facebook really is the elephant
00:59:19in the room.
00:59:21AI running Facebook news feed.
00:59:24The task for AI is keeping users engaged.
00:59:29But no one really understands exactly
00:59:31how this AI is achieving this goal.
00:59:35Facebook is building an elegant
00:59:37mirrored wall around us.
00:59:39A mirror that we can ask
00:59:40who's the fairest of them all?
00:59:42And it will answer you, you, time and again.
00:59:45Slowly begin to warp our sense of reality.
00:59:48Warp our sense of politics, history, global events.
00:59:53Until determining what's true and what's not true
00:59:57is virtually impossible.
01:00:02The problem is that AI doesn't understand that.
01:00:04AI just had a mission,
01:00:06maximize user engagement,
01:00:08and it achieved that.
01:00:11Nearly two billion people spend nearly one hour
01:00:14on average a day basically interacting with AI
01:00:18that is shaping their experience.
01:00:21Even Facebook engineers don't like fake news.
01:00:25It's very bad business.
01:00:27They want to get rid of fake news.
01:00:29It's just very difficult to do
01:00:30because how do you recognize news as fake
01:00:33if you cannot read all of those news personally?
01:00:35There's so much active misinformation
01:00:39and it's packaged very well
01:00:41and it looks the same when you see it on a Facebook page
01:00:45or you turn on your television.
01:00:47It's not terribly sophisticated,
01:00:49but it is terribly powerful.
01:00:51And what it means is that your view of the world,
01:00:54which 20 years ago was determined
01:00:56if you watch the nightly news
01:00:58by three different networks
01:01:00with three anchors who endeavored to try to get it right.
01:01:03You might have had a little bias one way or the other,
01:01:05but largely speaking,
01:01:06we can all agree on an objective reality.
01:01:08Well, that objectivity is gone
01:01:11and Facebook has completely annihilated it.
01:01:14If most of your understanding of how the world works
01:01:20is derived from Facebook,
01:01:22facilitated by algorithmic software
01:01:24that tries to show you the news you want to see,
01:01:27that's a terribly dangerous thing.
01:01:29And the idea that we have not only set that in motion,
01:01:33but allowed bad faith actors access to that information,
01:01:38this is a recipe for disaster.
01:01:44I think that there will definitely be lots of bad actors
01:01:46trying to manipulate the world with AI.
01:01:492016 was a perfect example of an election
01:01:53where there was lots of AI producing lots of fake news
01:01:56and distributing it for a purpose, for a result.
01:02:00Ladies and gentlemen, honorable colleagues,
01:02:03it's my privilege to speak to you today
01:02:05about the power of big data and psychographics
01:02:08in the electoral process,
01:02:10and specifically to talk about the work
01:02:13that we contributed to Senator Cruz's presidential primary campaign.
01:02:17Cambridge Analytica emerged quietly as a company
01:02:20that, according to its own hype,
01:02:22has the ability to use this tremendous amount of data
01:02:26in order to effect societal change.
01:02:30In 2016, they had three major clients.
01:02:34Ted Cruz was one of them.
01:02:36It's easy to forget that only 18 months ago,
01:02:38Senator Cruz was one of the less popular candidates
01:02:41seeking nomination and certainly...
01:02:44So what was not possible maybe like 10 or 15 years ago
01:02:48was that you can send fake news
01:02:50to exactly the people that you want to send it to.
01:02:53And then you could actually see how he or she reacts on Facebook
01:02:57and then adjust that information according to the feedback that you got.
01:03:02And so you can start developing kind of a real-time management of a population.
01:03:07In this case, we've zoned in on a group we've called Persuasion.
01:03:11These are people who are definitely going to vote to caucus,
01:03:15but they need moving from the centre a little bit more towards the right
01:03:19in order to support Cruz.
01:03:20They need a persuasion message.
01:03:22Gun rights I've selected.
01:03:24That narrows the field slightly more.
01:03:26And now we know that we need a message on gun rights.
01:03:29It needs to be a persuasion message
01:03:31and it needs to be nuanced according to the certain personality
01:03:35that we're interested in.
01:03:36Through social media, there's an infinite amount of information
01:03:40that you can gather about a person.
01:03:43We have somewhere close to 4,000 or 5,000 data points
01:03:46on every adult in the United States.
01:03:49It's about targeting the individual.
01:03:51It's like a weapon which can be used in the totally wrong direction.
01:03:56That's the problem with all of this data.
01:03:58It's almost as if we built the bullet before we built the gun.
01:04:02Ted Cruz employed our data, our behavioural insights.
01:04:07He started from a base of less than 5% and had a very slow and steady
01:04:13but firm rise to above 35%, making him obviously the second most threatening contender in the race.
01:04:20Now clearly the Cruz campaign is over now.
01:04:23But what I can tell you is that of the two candidates left in this election,
01:04:28one of them is using these technologies.
01:04:31I, Donald John Trump, do solemnly swear that I will faithfully execute the office of President of the United States.
01:04:43Elections are a marginal exercise.
01:04:48It doesn't take a very sophisticated AI in order to have a disproportionate impact.
01:04:58Before Trump, Brexit was another supposed client.
01:05:02Well, at 20 minutes to five, we can now say the decision taken in 1975 by this country to join the common market
01:05:11has been reversed by this referendum to leave the EU.
01:05:17Cambridge Analytica allegedly uses AI to push through two of the most ground-shaking pieces of political change in the last 50 years.
01:05:28These are epochal events.
01:05:30And if we believe the hype, they are connected directly to a piece of software essentially created by a professor at Stanford.
01:05:38Back in 2013, I described that what they are doing is possible and warned against this happening in the future.
01:05:49At the time, Michal Kosinski was a young Polish researcher working at the Psychometrics Center.
01:05:55So what Michal had done was to gather the largest ever data set of how people behave on Facebook.
01:06:04Psychometrics is trying to measure psychological traits such as personality, intelligence, political views and so on.
01:06:12Now, traditionally, those traits were measured using tests and questionnaires.
01:06:18Personality tests, the most benign thing you could possibly think of, something that doesn't necessarily have a lot of utility, right?
01:06:24Our idea was that instead of tests and questionnaires, we could simply look at the digital footprints of behavior that we are all leaving behind
01:06:32to understand openness, conscientiousness, neuroticism.
01:06:38You can easily buy personal data such as where you live, what club memberships you've joined, which gym you go to.
01:06:45There are actually marketplaces for personal data.
01:06:48Turns out we can discover an awful lot about what you're going to do based on a very, very tiny set of information.
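The idea described here, predicting a trait from a small set of behavioral footprints, can be sketched with a tiny logistic-regression classifier trained by gradient descent. The data and features below are invented for illustration; the real studies used millions of Facebook likes and far richer models.

```python
import math

# Each row: binary "footprint" features (e.g. which pages a person liked);
# the label marks whether an invented trait is present. All data is made up.
X = [[1, 0, 1, 0], [1, 1, 1, 0], [0, 0, 1, 1],
     [0, 1, 0, 1], [1, 0, 0, 0], [0, 1, 1, 1]]
y = [1, 1, 0, 0, 1, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Logistic regression via stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                       # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

w, b = train(X, y)
preds = [int(sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) > 0.5)
         for xi in X]
print(preds)
```

Even this crude model recovers the trait perfectly from four binary signals, which is the unsettling point: very little data goes a very long way.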
01:06:54We are training deep learning networks to infer intimate traits, people's political views, personality, intelligence, sexual orientation, just from an image of someone's face.
01:07:10Now think about countries which are not so free and open-minded.
01:07:20If you can reveal people's religious views or political views or sexual orientation based on only profile pictures, this could be literally an issue of life and death.
01:07:34I think there's no going back.
01:07:39Do you know what the Turing test is?
01:07:44It's when a human interacts with a computer.
01:07:49And if the human doesn't know they're interacting with a computer, the test is passed.
01:07:54And over the next few days, you're going to be the human component in the Turing test.
01:08:00Holy shit.
01:08:01Yeah, that's right, Caleb.
01:08:02You got it.
01:08:03Because if that test is passed, you are dead center of the greatest scientific event in the history of man.
01:08:13If you've created a conscious machine, it's not the history of man.
01:08:18That's the history of gods.
01:08:20It's almost like technology is a god in and of itself.
01:08:34Like the weather.
01:08:35We can't impact it.
01:08:36We can't slow it down.
01:08:37We can't stop it.
01:08:40We feel powerless.
01:08:43If we think of God as an unlimited amount of intelligence, the closest we can get to that is by evolving our own intelligence by merging with the artificial intelligence we're creating.
01:08:55Today, our computers, phones, applications give us superhuman capability.
01:09:02So, as the old maxim says, if you can't beat him, join him.
01:09:07It's about a human-machine partnership.
01:09:10I mean, we already see how, you know, our phones, for example, act as memory prostheses, right?
01:09:15I don't have to remember your phone number anymore because it's on my phone.
01:09:19It's about machines augmenting our human abilities as opposed to, like, completely displacing them.
01:09:25If you look at all the objects that have made the leap from analog to digital over the last 20 years, it's a lot.
01:09:32We're the last analog object in a digital universe.
01:09:36And the problem with that, of course, is that the data input-output is very limited.
01:09:41It's this, it's these.
01:09:45Our eyes are pretty good.
01:09:46We're able to take in a lot of visual information.
01:09:49But our information output is very, very, very low.
01:09:53The reason this is important if we envision a scenario where AI is playing a more prominent role in societies,
01:10:00we want good ways to interact with this technology so that it ends up augmenting us.
01:10:06I think it's incredibly important that AI not be other.
01:10:12It must be us.
01:10:14And I could be wrong about what I'm saying.
01:10:19I'm certainly open to ideas if anybody can suggest a path that's better.
01:10:24But I think we're really going to have to either merge with AI or be left behind.
01:10:29It's hard to kind of think of unplugging a system that's distributed everywhere on the planet, that's distributed now across the solar system.
01:10:46You can't just, you know, shut that off.
01:10:50We've opened Pandora's box.
01:10:52We've unleashed forces that we can't control, we can't stop.
01:10:56We're in the midst of essentially creating a new life form on Earth.
01:10:59We don't know what happens next.
01:11:08We don't know what shape the intellect of a machine will be when that intellect is far beyond human capabilities.
01:11:14It's just not something that's possible.
01:11:17The least scary future I can think of is one where we have at least democratized AI.
01:11:32Because if one company or small group of people manages to develop godlike digital superintelligence, they could take over the world.
01:11:41At least when there's an evil dictator, that human is going to die.
01:11:45But for an AI, there would be no death. It would live forever.
01:11:50And then you'd have an immortal dictator from which we can never escape.