[APPLAUSE] Killer robots. So killer robots

actually are real, and there are companies

making killer robots. And so you need to know

about killer robots. The characteristics

of killer robots. There’s two main

characteristics. Of course, firstly,

killer robots can kill. Secondly, killer

robots are robots. But there’s another

key characteristic. See, killer robots are

controlled by humans. Because there’s a

line at this point. So this is a YouTube

screenshot of Boston Dynamics, a company that makes killer robots– I won’t play the video, but you can find it on YouTube. It was uploaded in October 2013. It’s a fabulous video

of a killer robot. And basically it just spends

eight minutes running around the car park of this

company in America that designed this killer robot. And occasionally it runs

past a fat, balding bloke who’s holding a controller. And he’s running around, you know, he’s running this killer robot around. And you see, at the end of the day, that’s the point. The point is it’s not

just the killer robot. It’s the bloke with

the controller. He’s controlling

the killer robot. So the robot running

around the car park is being controlled by a human. And the question is, can a

computer control that killer robot instead? Computers are a disruptive technology; they like to relieve

humans of their jobs. So could a computer

control a killer robot? And in fact, we could

go a step further. Could we write an app that

people could download on their phone– now I know how much time I have. On your phone, when you type someone’s name into the app, some killer robots are unleashed, and they go and kill that person. So could there be a killer

robot app, can computers do that? So that’s kind of an

interesting question. So what would a

company need in order to make that killer robot app? So firstly, they’d need

a good database, right? Because obviously you’d

type someone’s name in, and it’s got to find them. So you need names, addresses,

and accurate information about the daily routines

of billions of people. Secondly, we need some people

that can write the app. So we need employees who are

very good at writing computer programmes. Thirdly, we need very good

facial recognition software to make sure that the killer

robot kills the right person. Fourthly, we’re going to need

experience in kind of control– now this is one of the hardest

parts– we need to control the killer robots

out of this car park and towards the correct person. And then finally, of course, we

need an army of killer robots. So if we have all

of those things, then we can write

a killer robot app. So which company has all of those things? How about Google? So let’s have a look. Does Google have names,

addresses, location of billions of people? Absolutely. Even if you don’t

have an Android phone, even if you don’t have

a smartphone at all, somebody you know– your boss, your work colleague,

your wife, your granddaughter, somebody has typed your

name and your phone number and your address

into their Android contacts, and they’ve synced that

contact list with Google. And Google knows what your

name is, what your address is, and where you live, and

your phone number too. So does Google have employees

that can write good computer programmes?

Of course it does. Does Google have facial

recognition software? Of course it does. I had an Android tablet once; I would just look at it and it would unlock. It was a very cool thing. Does it have computer programmes

that can drive machines around? Of course it does,

because it’s currently spending a lot of money

on self-driving cars. So the only question

left is does Google have an army of killer robots? And the answer

is, well, you see, Google has a motto,

which says don’t be evil, which it quietly

dropped in 2015, and possibly unrelated to

that, in December 2013, Google bought Boston Dynamics

and seven other robot companies. And now suddenly Google owns

all these scary killer robots. So how many killer robots

have Google actually got? Probably enough to make it

feasible to write that app. So actually, I’ve attempted to construct a proof that, in theory, somebody at Google could write an app which really would have the property that if you type someone’s name in, it unleashes several– I mean, you’d probably have to do some experiments to work out how many were enough– but enough killer robots

to finish this person off. So that’s something

computers can do. I think at this point in time,

all the ingredients are there. So this talk is about what

computers can’t do. This, I think, puts it into

some kind of perspective. So here’s another question. If Google was sentient and realised that it could do this, maybe you wouldn’t need a human to write the app. Maybe the computer could write the app. Very 2001: A Space Odyssey. And maybe the

computer could just decide to take over

the world because it’s got access to the

facial recognition software and the database. Maybe the computer could just

go rogue and take over the world anyway. So maybe computers would

find that appealing. But I’m not so sure, because

for a computer to make that leap and decide that it’s going to

kill everyone in the world, the computer would have

to be able to think. And I don’t know if

computers can think. So we’ve seen something computers can do: if somebody writes the programme, they’d kill everyone. But we’ve also seen

maybe they can’t think. And in my personal

opinion, I don’t think computers can think. So here’s a picture

of two things. The thing on the left

is Garry Kasparov. He was the world chess

champion in 1997, which is when this photo was taken. And the thing on the

right is what at the time was a very

fancy-looking monitor. And that monitor is

attached to a computer. And the computer is

called Deep Blue. And it was made by IBM. And it was the best

chess-playing computer of that time. So world chess champion versus

computer chess champion. This was a six-game

match played in 1997. And it was big news. Kasparov lost, and 1997 marked the start of the era when computers became better at chess than humans. And since then,

all that’s happened is that computers got

much, much, much better. And now there aren’t

any more competitions between humans and computers,

because no human with any sense would ever dream of trying to compete with a computer at chess. So who’s thinking

in this picture? Well, Garry Kasparov certainly

looks as if he’s thinking. I think that’s a

Sicilian defence. They’re a few moves

in, and he’s probably trying to work out what

the best move to play is. And he’s using his intuition. He’s using insight. He’s using experience. He’s also using psychology, even

though he’s playing a computer. He’s played this

computer before. He knows what kind of positions

that computer plays well and what kind of positions

that computer plays badly. And he’s adapting his play to

try and beat that computer. What’s the thing

on the right doing? Is the thing on

the right thinking? I’m not so sure that

thing he’s thinking. Is Deep Blue thinking? I don’t know. What does a computer do? A computer is a device

that is very, very good at following instructions

very, very quickly. And that’s different to

thinking, in my opinion. So the basic logical instructions that the computer follows are contained in something called a computer programme. But I’m going to have to tell

you something about computers and computer programmes

in this talk, but I’m going to try

and keep it simple, whilst also attempting to

get to the difficult point. So let me tell you two

things about a computer. First thing is the

computer has memory. A computer needs to

store information, because sometimes it does calculations, then remembers the answer to look back at later. So the computer needs memory

because a computer stores information. The computer programme is

typically stored in the memory as well. And a computer also

needs a processor. The processor is basically the

bit of the computer that reads all the instructions in the

programme and does them one by one very, very quickly. And sometimes the

instructions say, go and do this instruction instead. Jump over here. And the processor

works very, very fast following these instructions. So nowadays a 4 gigahertz

processor, you probably have one of those

in your phone now, that’s following 4 billion

instructions per second. And so you can see, even in 1997– because IBM had a lot of interest in computers and was doing a lot of research into making computers very fast–

that computer Deep Blue was analysing 200 million chess

positions every second. So what was it doing? It was just trying

all the moves– even the most ridiculous. It would try moves. That piece I moved last

time, let’s just put it back. That’s the kind of

move it would try. It would try every single move. And it would try

all the responses and keep going very, very fast. And then it would make

some kind of guess as to which positions

looked OK, and then follow up on those. But Kasparov wasn’t

playing like that. There were many moves in

a given chess position that Kasparov wouldn’t even

consider because he could see. He had intuition. He knew they were

going backwards. But intuition ultimately gets

defeated by brute force search, analysing millions and millions

of chess positions per second. And we’ve now proved that computers can do this better than humans. So I don’t think

that’s thinking. So I think we’re kind of

safe from computers going rogue and trying to kill us all. But you see the reason I

don’t think it is thinking is because I do a lot of

unloading the dishwasher. And unloading the

dishwasher it’s a very mechanical procedure. There’s a plate. I know where the plates go. There’s the bowls. And I know where the bowls go. There’s the cutlery and I

know where the cutlery goes. And I just sit,

just unthinkingly, I sit and put everything away. And I actually even

unload the dishwasher in exactly the same

order every time. Because I’m trying to be like

a computer in some sense. And while I’m doing

that I’m thinking. And I’m certainly not

thinking about what I’m doing. What I’m doing is somehow

subconsciously hovering in the background. Yeah, I tend to be

thinking about mathematics, or whether I’m

hungry, or what I’m going to do after I finish

unloading the dishwasher. And that’s thinking. But a computer– when we have robots that unload dishwashers, they would just unload the dishwasher, and that’d be it. So I need to talk

about certain puzzles. We need to talk about

problems, and whether or not we can do them. So I want to talk

about problems, but the only problems

I’ve mentioned so far have been very

difficult problems. I’ve talked about the problems

of writing an app that will somehow unleash killer robots. That’s actually a very difficult

problem for an engineer, because there’s lots

and lots of components that need to go in it. And then somehow beating the

world chess champion at chess is also very

difficult. Even trying to work out what the question

is, do computers think, that’s very, very hard. And different people

have different opinions. So we need to think about

some simpler problems if we actually want

to get anywhere. What I’m going to do

first, because I don’t know how many of you are very

happy with the abstract notion of a computer, I want to forget about computers for now, and just talk about the idea of what a problem is and how to solve it. And how to know we’ve solved it. But also, if you

get a problem we’re stuck on, then wondering whether

that problem can be solved. I want to consider these ideas. And the reason I’m

giving this talk, really, is because I gave a

big course this summer. I thought a lot about proving

things and how to prove things. And so one thing led to another. I got very interested

in this topic. So here’s a problem raised

by the Ancient Greeks. Can I bisect an angle

using ruler and compasses? So this is an ancient,

2000-year-old problem. The ancient Greeks, you

know, we have Euclid, they were very good at this kind

of thing, the ancient Greeks. They were very interested

in this kind of question. So if I could have

the visualizer on, I will now demonstrate

what an angle is and how to bisect it

using ruler and compasses. I bought all this on the Holloway Road today, just the same as the kit I had when I was at school. And lo and behold,

a pair of compasses. Fabulous. And a ruler. And I’ve also pinched one of

my daughter’s felt tip pens. So first of all I

need to draw an angle. So there’s an angle. That looks OK. I think it’s OK. Can you see the angle? I think it’s OK. There’s a shadow. Maybe I shouldn’t worry. So here’s the angle. And what the job is,

what’s the question, the question is can

we bisect the angle. And bisect, as maybe

many of you remember, bisect means I need to draw a

line just around down there, exactly in the

middle of this angle. I need to cut this angle

into two equal pieces. That’s the kind of thing that

the ancient Greeks were doing. Euclid wrote an

entire book on it. So can we bisect this angle? And the point is you can’t

kind of go, well, it’s kind of, it’s sort of. You’re not allowed to do that. You’re not allowed to guess. You’ve got to do it exactly. But we don’t just have a ruler. We’ve got some compasses. So watch this. Here’s how you do it. First of all, you put the

point of the compass dead on that angle. And then, of course,

we draw a circle. Or I’m only going to draw

that bit of that circle. Now the thing about circles

is that the distance from the centre of the circle

to the edge of the circle, the circumference of the

circle, is always the same, wherever you are on the circumference. So one thing I know for sure

is that that line there, that red line there, is exactly

the same length as that line there. In fact, I’ll even tell

you what the length is. The length is this length here. The length is the distance

between the pencil tip and the spiky bit. So those two lines are

definitely the same length. And now I’m going to

do something else. I’m going to put

that spiky bit here. It’s probably got a

name, but I never learned it– they don’t even have these any more. I went to the stationery cupboard and said, could I have a pair of compasses,

and they looked at me like I was an idiot. Apparently things have moved on. I don’t know. So there, while I’m

chatting, I’m doing it. You didn’t even see

what I was doing. I put the point thing

there, and I drew an arc. And I put the pointy thing

there and I drew an arc. And those two arcs have

met at that point there. And I’m going to draw

these two lines now. And again, in both

cases that is a line that went from where

the point of it was to where the pencil bit was. So I’ve drawn four lines

now, and all those lines are the same length. So great. Four-sided shape, all

lines are the same length. What’s it going to be? It’s going to be a square. But it’s not a square, because

it’s a bit squashed. It’s called a rhombus. And now we open our copy of

Euclid’s Elements from 300 B.C. And we find the

proposition that says you’ve got a rhombus

like this, and you draw the diagonal of the rhombus. Let’s do it in pencil, because it’s not the same length. All the red lines

are the same length. Here we go. Let’s do that in pencil. There’s the diagonal

of the rhombus. And you can see the rhombus

is very, very symmetric. And so this is a line of

symmetry of the rhombus, and everything is all the same. This angle, this

angle, and this angle are going to be the same, right? Oh, it looks quite– oh yeah, I just was

looking at that one. It looked terrible. It does look quite

good up there. So there we go. So we’ve drawn this line,

and it’s cut the angle into two equal pieces. Euclid is clever. It’s been known for

over 2,300 years. And in a book that

was apparently the thing to read, even

up to the 19th century. If you were an educated person

then you would read Euclid. So I bisected that angle. So here’s the question. Here’s a theorem, a

mathematical theorem. It’s possible to bisect an angle

using a ruler and compasses. And the proof of the theorem

is you just watched me do it. And I talked you through it, so I convinced you that I was doing it. And of course Euclid

does more than that. He labels the things A and B and C and does the sort of maths that you maybe did at school. And he ends up proving that you

can bisect an angle using ruler and compasses. So it can be done. And not only that, the reason we know it can be done is because I did it. And Euclid did it 2,300 years ago. Now here’s another problem raised

by the Ancient Greeks. Can you trisect an angle

using ruler and compasses? So bisect means cut it

into two equal bits. Trisect means cut it

into three equal bits. So can you trisect an angle

using ruler and compass? You’ve got to think

what you’ve got. Bisecting, you kind of think,

you know, you’re on a roll. You try trisecting, and the

ancient Greeks couldn’t do it. They found it very

difficult to trisect angles using ruler and compasses. So that problem sat

around unsolved, with nobody doing it, and

somehow if you can’t do it, then what other

alternative is there? And 2000 years later,

in 1837, Pierre Wantzel proved it was impossible to do. So that’s why the Greeks

couldn’t do it, because it was impossible to do it. And bisecting the

angle is kind of cool. But this is super interesting: proving that it’s impossible to do something. The questions look formally very similar– bisect an angle, trisect an angle. But we solved one

of them by doing it, and then 2,000 years later we solved the other one the other way, by proving it can’t be done. We resolved the question. It cannot be done. And that’s why

the Greeks failed. But the Greeks didn’t want to

conclude it couldn’t be done. They just wanted to conclude

they couldn’t do it. So how does one prove it’s

impossible to do something? Somehow, I don’t know, that

result seems striking to me. So what did Wantzel have that

the ancient Greeks didn’t have? The ancient Greeks had geometry. They were beginning to

develop some arithmetic. And these were somehow two fields of mathematics that were, at the time, completely disjoint. They didn’t have much abstract

mathematical machinery available to them. By the early 1800s,

when Wantzel was around, mathematicians had

worked out what zero was. Remember for the Greeks

every number was positive. They’d invented zero and negative numbers. And Cartesian coordinates– this is the big insight. Rene Descartes observed that if

you have a point on the plane, you can think of that point,

something in x=coordinate and a y-coordinate. So suddenly that

point which hitherto had been a geometric notion,

that point suddenly has an arithmetic interpretation. We can explain where that

point is using numbers. This is a link between

geometry and arithmetic that the Greeks didn’t have. So it took Rene Descartes to see it, but nowadays it’s kind of a trivial thing. You draw graphs, and this is y, this is x, and you’re just taught it. But someone had

to discover this. It turned out it was a genius who discovered it. And also these people,

because they were also quite good at arithmetic,

they had long since isolated the four basic operations– plus, minus, times, and divide– but now instead of thinking

about all the numbers they were interested

in subsets of numbers. Things like the

rational numbers. You can do addition,

subtraction, multiplication, division on them. And now this is

somehow when things begin to get interesting. They developed an abstract

theory of dimension. So they considered these fields of numbers. They would take a collection of numbers and call it a field of numbers, then consider a bigger collection of numbers and call it a bigger field of numbers. And they realised that there

was a notion of dimension. The bigger thing had a dimension

over the smaller thing. And if you make the

smaller thing really small, you can make the

dimension get very big. So this is not kind of

a real-world dimension. This is an abstract mathematical

notion of degrees of freedom. And using these

abstract collections of fields of numbers,

they could construct objects that had some

abstract, very large dimension. So it took 2,000 years to

develop these concepts. And once they were developed, Wantzel could take them, and he proved not just that it was impossible to trisect a general angle– he proved it was impossible even

to trisect a 60-degree angle using ruler and compasses. So basically he

looked at the system– I don’t want to

bore you, really, but this is what I teach in

my third-year Galois theory course. He looked at the

systems of numbers generated by the coordinates. You see, he turns a geometric problem into an arithmetic one. He looks at the number

systems generated by the coordinates

of these points that you can construct with a ruler and compasses. And he proved that the

dimensions of all these fields was always a power of two. Then he proved that the system of numbers you generate, if you could trisect a 60-degree angle, would have dimension 3. And 3 is not in that list. The list goes 1, 2, 4, 8, 16, and so on. It never contains the number three. So this is how the proof worked. So the conclusion so

far is that Euclid proved it was possible

to bisect an angle and Wantzel proved it was

impossible to trisect an angle using ruler and compasses. And so we learned from this that mathematicians can prove something is possible by just doing it. And they can also prove that something is impossible. But that often lies deeper. And it might take thousands

of years to develop a theory. So we’re going to see

another example of this next. And of course, as a remark, this used modern tools– things that were nowhere near known to the ancient Greeks. But the modern tools, as well

as doing modern mathematics, they can solve ancient problems. This is in some

sense a vindication of the modern tools,

the fact that they can resolve ancient problems. So this phenomena is

commonplace in mathematics. You get questions that are raised and sometimes take hundreds of years to answer. And the tools you’ll

use in answering them are often not tools that

were even in existence when the questions were stated. So there’s mathematics. Now let’s talk about computers. So there’s the first-ever

computer programmer, Ada, Countess of Lovelace. So why is she the first-ever

computer programmer? Because she wrote the

first-ever computer programme. So a guy called Charles

Babbage, he was not really a mathematician, he

was more an engineer, but he designed something called

Babbage’s analytical engine. It was a machine, but a machine where you could set the dials and actually

tell the machine to do lots of different things. It was a very primitive

kind of computer. You set the dials, that’s

like writing a programme. And then you run the

machine and the machine runs according where

the dials are set. So this is really

the first computer that was around in some

sense, this analytical engine. And Babbage gave

lectures on it in Italy, somebody took notes in

French, and these notes made it into the hands of Ada

Lovelace, who decided she’d translate them into English. And while she was

translating them, of course, you translate

something, you read it, she read it and she

learned the material. And then she wrote a series

of appendices at the end of Babbage’s notes, and

Appendix G, the last appendix, there’s an example of where

you can set the dials in order to run a programme that would

compute Bernoulli numbers, which were the thing, you know. And people would compute

things by sitting down with pieces of paper and

writing them all out. In her observation, Bernoulli numbers are a bit annoying to compute. But if we have one of these

analytical engines, then actually we could just

set the dials like this, switch it on, and out come

the Bernoulli numbers. So it would save somebody a job. So Ada Lovelace. She wrote the first computer programme, and we can also ascribe to her a theorem: a computer can calculate

Bernoulli numbers. And the proof is do it. Just like the proof of bisecting

the angle was doing it, the proof is let’s just write

a computer programme that computes Bernoulli numbers. So that’s what she did. She never saw it run, because

Babbage’s analytical engine was never built, in fact. There were technical design

problems at the beginning. By the time these

things were resolved, the money had run out,

the funding dried up, and the analytical

engine never got built, and Lovelace died at 36 and

never saw her programme run. But of course nerds went

away and made other ones. So it does work. And they compute

Bernoulli numbers. If you’re thinking ahead, you can see where the story is going. So now we skip forward

100 years to Alan Turing. And Alan Turing, he thought

about computers as well, but he wasn’t an engineer. He was a mathematician. And so he started thinking

about mathematically modelling a computer. Kind of like using arithmetic

to study points in geometry. He started to wonder whether

you could model a computer using the language of mathematics. And nowadays it’s

called a Turing machine. And Turing’s mathematical

description of a computer opened the doors

for mathematicians to start proving much deeper and more profound results about what computers

could or couldn’t do. And of course, we’re in just the same situation. You’ve got a problem. The question is, can a computer do it? If the answer is yes, how do you show it? You just write a computer

programme that does it. How do you show

the answer is no? That’s going to be difficult. So Lovelace isolated this interesting problem, computing Bernoulli numbers, and showed the computer could do it. And Turing also isolated an

interesting abstract problem, but he showed, because he

had more tools than Lovelace, he showed a computer could

not solve this problem. And again, when you

just step back, how ever are you going to prove

something like that? How can you check that every

single computer programme ever was written, will be

written, can be written, will not solve this problem? I’m going to try to explain

to you how he did it. And the problem wasn’t just an abstract problem, either. It’s not just this bisect

the angle nonsense, this sad thing Wantzel proved

you couldn’t trisect the angle. By this stage, nobody cared. Whereas Turing worked on a problem that mattered– even though in 1936, when he worked on it, there were still no computers, right? The first computers started

appearing in the 40s. And Turing actually was

involved in several of them. But Turing proved this theorem in 1936, before a single computer

had ever been built, a single computer programme

had ever been run. He was interested in the idea

that computers might sometimes crash. And computers crashing

is a pain, right? You lose data, you

lose information. You have to switch

it off and on again. We don’t want

computers crashing. Turing could see

that this was going to be an issue before any

computers ever even existed. So I’m going to try to tell

you about what Turing did. So let me try and explain to you

what a computer programme looks like. So this is a computer programme written in pseudocode. So it’s kind of written in English, but there’s lots and lots of steps. And what the

processor will do is it will just go from each

step to the next step, unless somebody tells

it to do something else. And at each step it will carefully read what’s there, and it will do what it’s told. And it will do it

very, very quickly. So this computer programme

here, step 1, run it. You have it on your phone,

you click on that app, and a little screen pops up that says type in a number. And you type in a number

like 53, and you click OK. And you’ve typed in 53

and the box disappears. And our step 2 is

let x be that number. So now if we typed

a 53, x will be 53. Now step 3 is a funny thing that involves the question: is x equal to 7? But we’ve typed in 53,

so x isn’t equal to 7. If x was equal to 7, we

would have gone somewhere. So x isn’t equal to

7, so we go to step 6. So we’ve typed in 53, x is 53. We’re at step 3, x isn’t 7. So let’s go to step 6. Step 6 we print hello on

the screen and then we stop. So when you run this

app on your phone, you click on the little app,

it says type in a number. We type 53. We click OK. Another little box

pops up saying hello, and then that’s it. App is finished. If you play with it for a bit,

you’ll realise that that’s what it normally does. But then eventually

you hit on the idea of typing in the number 7. And so you type in number 7. The computer lets

x be 7 in step 2. In step 3, it notices that x is equal to 7. And instead of going to step 6, you go to step 4. And in step 4, you can

see what to do in step 4. You just go to step 5.
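The little programme we’ve been stepping through can be sketched in Python. This is my reconstruction from the description, not the actual slide, and a step limit stands in for the user switching the machine off and on again:

```python
# A sketch of the pseudocode programme described above, reconstructed
# from the talk (the slide itself may differ). The processor is modelled
# as a loop that mindlessly follows numbered steps; the step limit
# stands in for the user switching the machine off and on again.

def run_programme(x, max_steps=1000):
    """Run the programme on input x, one numbered step at a time."""
    step = 1
    for _ in range(max_steps):
        if step == 1:       # step 1: ask the user for a number (x here)
            step = 2
        elif step == 2:     # step 2: let x be that number
            step = 3
        elif step == 3:     # step 3: if x equals 7, go to step 4
            step = 4 if x == 7 else 6
        elif step == 4:     # step 4: go to step 5
            step = 5
        elif step == 5:     # step 5: go to step 4 -- the infinite loop
            step = 4
        elif step == 6:     # step 6: print hello and stop
            return "hello"
    return "crashed"        # never reached step 6: stuck between 4 and 5

print(run_programme(53))  # hello
print(run_programme(7))   # crashed
```

Typing 53 reaches step 6 and prints hello; typing 7 bounces between steps 4 and 5 forever, which is exactly the crash we’re about to see.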

So it goes to step 5. And step 5 says go to step 4. And so it goes to step 4. Then step 4 says go to step 5, and it just alternates between step 4 and 5. You can tell the

computer is not thinking, because if a human

had to do that, they’d get bored at some point. But the computer will

just keep doing it. That’s exactly what

they’re paid to do. They just quite happily,

mindlessly follow the instructions. So the computer will get

stuck going between step 4 and step 5. Now when you’re running this, if your operating system gets stuck in that kind of loop, then it’s stopped. You see, step 1 is kind of implicitly asking, what’s the person

pressing on the keyboard? But if you’re just hopping

between step 4 and step 5, the computer isn’t looking

at the keyboard or the mouse or anything like that. You wiggle these things

around and nothing happens, and you know what’s happened. When your computer

is in that state, you press the Caps Lock key

and the light doesn’t come on, you know that your

computer’s had it, and you have to switch

off and on again. So computer programmes

sometimes get stuck in loops, and you’ve all seen it happen. So there you go. If the user types

in the number 7, that computer programme can get stuck in an endless loop, so it crashes. So here is a much more

complicated programme. We’re not going to analyse it,

because we’d just basically spend all day analysing it. That other one had seven steps. This one’s got nine steps, but it’s vastly more complicated. You ask the user to

type in a number. Let’s say they type

in the number 9. Step 2, let x be that number. Step 3, if x is bigger than 10. It’s not bigger than

10, 9 is less than 10. So we get to step 6. Step 6 says if it’s less than

10, and it is less than 10, because 9 is less than

10, so we go to step 4. Step 4, x changes. Step 4 we replace

x by 2 times x. So now suddenly x was

9 and now it’s 18. And then what do we do? We go back to step 3. Now x is 18, and now x is

bigger than 10, so we go to step 7. And you can see we’re

kind of jumping around. So x is 18. Step 7 says if x is less

than 100, go to step 4. Step 4, we’ve been

to step 4 before, so you’re going to think,

oh, an infinite loop. But we’re not in

an infinite loop, because when we were at

step 4 before, x was 9. And now we’re at step

4 again, x is 18. And so we’re not exactly

the same state as before. So we might not be

in an infinite loop. And x is going to

be replaced by 36. And you see it jumps around. We didn’t even go to– step 5 is ridiculous. It’s not really clear. This is the question,

can this computer problem get stuck in an infinite loop? That is a tricky

logic puzzle, right? And I don’t fancy doing it. Somehow I would try to

do it using intuition. I would try to find key numbers

which behave differently from other numbers. But can this programme get

stuck in an infinite loop? Figuring out that

kind of question is a complicated logic puzzle. Now I like doing complicated

logic puzzles, but actually, at the end of the day,

what kind of gadget is really suited to solving

complicated logic problems? It’s a computer. So what we should

really do is not bother solving that

problem at all. What does that mean? That doesn’t mean I stop. I’m halfway. Oh that’s good, because I’m

halfway through the slides as well, because the last

two slides are jokes. So this is the sort

of puzzle which would be ideal for a

computer to work on. And so this is what

Turing decided to work on. Let’s write a computer programme

that checks other computer programmes to see if they get

stuck in infinite loops. So when a computer gets

stuck in an infinite loop, we’ll let it crash. And that’s bad, right? So we’ll say a computer

programme is bad if it can get stuck in an infinite loop. And we’ll say a computer

programme is good if it never gets stuck in an infinite loop. So there’s some definitions. And now Turing says, how do

we check to see if a computer programme is good? We want to tell the good

programmes from the bad programmes. So he’s going to write

a computer programme. So in his 1936 paper he

abstractly axiomatised mathematically what

a computer was. And so now he can

use mathematics to work on this question. And he found the

solution, which as I say, you probably guessed by now. So Turing’s going to try and

write a computer programme. And the input to the computer

programme is not a number. You’ve probably gone

to some website, I do this when I’m

writing references. You click on something that

says, upload your reference. And you find the reference and

then you just drag it over. Computer programmes will quite

happily accept random files as inputs rather

than just numbers. So this computer programme

that Turing is going to write, he’s going to write a computer

programme that takes this input, an arbitrary

computer programme, a different computer programme,

and then Turing’s programme is going to print out good

if it’s a good one, and bad if it’s a bad one. So Turing’s going to try and

design that computer programme. But actually something

interesting has happened here. Because if I’ve got a computer

programme that takes as input a computer programme,

that means that you take the computer programme itself

and put the computer programme into itself, which

is a little bit meta. And that was where

the fun started. So Turing realised if he

could tell good from bad with a computer programme, then

he could write a ridiculous, useless computer programme,

computer programme k, which would behave like a

bad programme if you inputted a good programme and behave

like a good programme if you inputted a bad one. So the idea is you

input the programme, the crazy programme k figures

out if it’s good or bad. If it’s bad, that’s great: it will just stop. And if it’s good, it

will intentionally go into an infinite

loop and be bad. So programme k is bad

if the input is good. And programme k is good

if the input’s bad. And then of course you take

programme k and you feed it into itself. Now what’s the consequence? So now we could ask is the

answer going to be good or bad? If k is bad, then you feed it

into k, and it is good, right? If k is bad, then

k must be good. And if k is good,

then k must be bad. So there’s something wrong. Because a computer programme
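The whole construction fits in a few lines. This is a schematic sketch, not anything Turing wrote: `is_good` stands for the assumed checker, which the argument is about to show cannot exist.

```python
def is_good(programme) -> bool:
    # The assumption: decides whether a programme can loop forever.
    raise NotImplementedError   # this is the part that cannot be written

def k(programme):
    if is_good(programme):
        while True:             # input was good: loop forever,
            pass                # so k behaves like a bad programme
    else:
        return                  # input was bad: stop at once,
                                # so k behaves like a good programme

# Feed k into itself. If k is good, k(k) loops forever, so k is bad;
# if k is bad, k(k) stops at once, so k is good. Contradiction.
```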

can’t be both good and bad. So we must have made a

mistake, because we’ve got a contradiction. And the mistake is because

the two-letter word I’ve underlined,

that I’ve put in red. The mistake is Turing’s initial

assumption that we could write a computer programme which can

tell good code from bad code. So that’s Turing’s mistake. I mean it’s not a mistake,

it was an assumption. The assumption is the

only part of the argument that he couldn’t make

completely logically valid. So his assumption is wrong. And Turing’s conclusion is it is

impossible to write a computer programme which can check

any and every other computer programme for bugs. You cannot. And this is why computer

programmes will crash. Because it’s impossible to

tell if they’ve got bugs. So Turing rigorously

proved that in 1936. Remark, still before any

computers had existed. I’ve sketched the

idea but what I want to stress as

absolutely crucial to it all was the fact that he’d

abstracted what a computer was and put it in mathematics. So there you go. So there’s a

summary of this bit. Lovelace proved that a certain

interesting problem can be solved using a

computer programme. The proof was: just do it. Turing proved, about 100 years later, once the mathematical machinery had got more sophisticated, that a certain problem could not be solved using a computer programme. So again we see an example. Turing proved that it was

impossible to do something. In my mind, it’s

an amazing thing to prove you can’t do something. So it’s formally just

like trisecting angles. So this is what happens in life. This is obviously mathematics. People work on areas of

interest and eventually they run into obstructions. They run into problems

which they can’t solve. And people work on them,

and of course sometimes they get solved later on. But sometimes they remain

obstinate and then eventually it turns out the reason

they remain obstinate is because the answer is that your

mathematical machinery cannot solve this problem. So very brief

summary, we can prove it’s possible to do

something by doing it. We can prove it’s

impossible to do something, but it often lies deep. So we’ve seen an example

in pure mathematics and we’ve seen an example in

theoretical computer science. Theoretical computer

science with a capital T, really, because

what do I mean? There’s an issue with

Turing’s computers that I’ll come to now. So what we’re really

interested in nowadays– because our computers do exist, and they’re becoming vastly dominant in our lives– I’m going to talk about a

practical problem involving computers. So let me first

tell you– I told you that Turing axiomatised

abstractly what a computer was. And here’s some of the

problems with Turing’s theoretical notion

of a computer. Firstly, had an infinite

amount of memory. Because mathematicians

are kind of used to things being infinite. There’s an infinite

amount of numbers. So it had an infinite

amount of memory. And he didn’t really care

about processing speed. All he cared about was whether a programme got stuck in an infinite loop or not. So if he had a programme that it

turned out got stuck in a loop that would basically take

a million years to get out of, but in a million years

your mouse will magically start working again and your computer

didn’t crash after all, you see you don’t really– you don’t somehow, if it is

going to take a million years, you’re not really interested. So in reality

computers have memory. And memory, this is engineering,

memory is made of stuff, right? Like terabyte hard

drives are made out of tiny, tiny little magnets. And solid-state

drives are made up of tiny little

electrical circuits. But any memory location that

stores a piece of information needs to be physically

made if we’re going to make one

in our universe and stop looking at abstract

computers like Turing’s. So we are going to need

at least a particle. One at least. Bare minimum, we’re

going to need a particle to make that thing. And unfortunately there’s

only 10 to the power of 80– that’s a one with 80 zeros– there’s only that number of particles

in the entire universe. So that puts a limit on how

much memory a computer can have. And similarly,

quantum mechanics says that when you look

very, very small, things get much weirder

than you might expect. So one of the consequences

of quantum mechanics, the currently believed theory,

is that there is basically a smallest unit

of distance which is meaningful to talk about. And beyond that, things

just get too strange. And it’s a very tiny distance. But beyond that distance,

somehow everything becomes meaningless. If you believe relativity,

information can’t travel faster than the speed of light. So one might ask,

how long the light would take to travel one

of these Planck distances. It’s all to do with

Planck’s constant and stuff. And it turns out that

light would take 10 to the power minus 44 seconds. So that’s naught point, then 43 noughts, and then a 1. That’s one Planck time: 10 to the minus 44 seconds. So you cannot hope that your

computer can do anything, because it’s impossible to

do anything in a unit of time shorter than one Planck time. Well of course, the Earth
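A back-of-envelope version of that limit, using rough figures (one operation per Planck time, and a year rounded to 3.15 times 10 to the 7 seconds):

```python
PLANCK_TIME = 1e-44                 # seconds: shortest meaningful time step
SECONDS_PER_YEAR = 3.15e7           # roughly
ops_per_second = 1 / PLANCK_TIME    # at most ~1e44 operations per second
billion_years = 1e9 * SECONDS_PER_YEAR
max_ops = ops_per_second * billion_years
# max_ops is about 3e60: even running flat out for a billion years,
# no physical computer can exceed this many instructions.
```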

will become uninhabitable in around a billion years. If we’re interested in computers on Earth, then this is going to be an issue. And more generally, if we’re interested in computers in the universe, then we

have to kind of figure out when the heat death of

the universe will be. But I read a lot about

this on Wikipedia, and scientists aren’t sure. They don’t really

know if the universe is going to expand forever. I mean they believe

in the Big Bang, but they don’t know if there’s

going to be a Big Crunch, or if it’s just going to stop. So I don’t really know when the universe will end, but certainly the sun will burn out in a few billion years, eventually swelling into a red giant, and within a billion years the seas will be boiling. And again, if you

think about it, that puts some theoretical limit

on the number of instructions that any computer can do. So let me just summarise: I’m not

now interested in Turing’s abstract theoretical computers. What about computers on

this planet that can’t run a programme which involves

storing that much information, where that much is some

number I’ve chosen, very large to make sure that

it’s more than the number of particles in the universe. And similarly, they can’t run

computer programmes in which the processor has to do

that many instructions. Because again, I’ve just chosen

this very, very large number to make sure that even if it did

10 to the power 44 instructions every second, it

still, you know, the world is going to

be a dead hulk by then. So there are these limitations

that Turing didn’t have. Turing didn’t really care. Turing had other

things to think about. I mean there was a war on. But these issues are

now practical, right? It’s not just can we

solve the problem or not. It’s: can we solve the problem quickly? So these now give rise to new questions. Here’s the last thing

I need to talk about. The remainder of

the talk I’m going to talk about a third question. So here’s two

computer programmes. They’re not like these nightmare

ones that I just showed. These are two very simple

computer programmes, because I need to

teach you a concept. Computer programme 1, the

user types in a number. The user types in 53. And then step 2, let

x be that number. So x is 53. And all it does, it

doesn’t print hello. It prints out x. Sample run for Programme 1. The user types in

number 53 and clicks OK, and the computer prints out 53. So it just prints

out what you type. So computer Programme 2

formally looks very similar. This time the user

types in a number. You let x be that number. But now instead of printing

out x using our normal notation for printing numbers

out, instead it prints out x zeros. Think, 53 has a meaning, right? 53 means 53 things. So instead of printing

out our clever notation, it actually prints out

the meaning of the number. So it prints out 53 things. So if you don’t like

line 3, then how do you actually print out x zeros? What you would really do is

you just print out 1 zero, and then you’ll decrease x by 1. And if it was still

positive, then you’ll keep going so you’d actually

do this using a loop. So that’s how you print out x

zeros, using a computer programme. So now of course, sample

1, let’s type in 53 again, and this is the output. We just get 53 zeros. So now let’s have
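The two programmes can be written down directly; `programme_1` and `programme_2` are just labels for this sketch, which returns the output as a string rather than printing it:

```python
# Programme 1: print the number back in ordinary decimal notation.
def programme_1(x: int) -> str:
    return str(x)

# Programme 2: print x zeros, using the loop described above:
# print one zero, decrease x by 1, keep going while x is positive.
def programme_2(x: int) -> str:
    out = []
    while x > 0:
        out.append("0")   # print one zero
        x = x - 1         # decrease x by 1
    return "".join(out)
```

`programme_1(53)` produces the two characters “53”; `programme_2(53)` produces fifty-three “0” characters, which is why it loses the race.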

a race, Programme 1 against Programme 2. We’ll type in 53 to both of

them, and who’s going to win? Programme 1 is going to

win, because Programme 2 has got to do a lot more work. Programme 1 is going to finish first– I mean, in practise of course,

both are finished in the blink of an eye. So you won’t even

be able to notice. But I want to explain

a way of telling why this second

Programme must be worse than the first Programme. Why it’s sort of less,

why it works more slowly. And it’s because of scaling. So by scaling I mean

let’s not type in 53. Let’s now think about

typing in a bigger number. Let’s type a 5 and then 3,

then let’s type another number. So instead of typing in

53, let’s type in 531. And now let’s run

our two programmes. We saw what happened

when we typed in 53. Now let’s type in 531. The first Programme used

to have to print out 53. Now it’s got to print out 531. It just takes one

more digit, right? Not very much more time at all. The second Programme is going

to print out 531 things now. And 531 is about 10 times 53. So the second Programme is

going to take 10 times as long to finish. So they behave in

very different ways with a small perturbation

of the input data. If I make the input data

a little bit bigger, the first Programme

deals with it easily. The second Programme, the

amount of time it takes gets multiplied by a constant. So you have this compound

interest phenomenon. If you leave a penny

in a bank long enough, eventually you’ll get to a

million pounds sort of thing. Because things can

blow up substantially. If you double your

money every time, then by the time you doubled

your money 10 times, you’ve multiplied your

money by over 1,000. So this is an issue that

the second Programme has. It doesn’t scale well. You make the input

a little bit bigger and suddenly it takes

a lot longer to run. So when you’re trying to

solve real-world problems, the input data might

well slowly increase. So you want to make sure that

your computer programme doesn’t grind to a halt if

the input data starts getting bigger and bigger. So an example is it’s a very

complicated problem trying to work out when all the

students in the college are going to take exams. Because lots and

lots of students registered for lots and

lots of different courses, so you have to make

sure that no student has taken two courses for which the exams are at the same time. But all these exams are going to be crammed into a short period of time. So you have to have some

exams at the same time. It’s a very delicate question. Of course, we do have a computer

Programme to solve this. And we want to make it

scale like Programme 1, not like Programme 2. So scaling, I talked

about it informally. The mathematicians have

formalised these notions. So the first time you

do a historical search, the first time you see this

notion of people really beginning to care about not just

abstractly whether computers can do things but how long

it takes them to do them is in a 1955 letter. This is the Beautiful Mind guy: a 1955 letter from John Nash to the NSA, the American

security agency, where he isolated the issue that actually it’s not

about whether the computer can solve it. It’s about how quickly

the computer can solve it. And he writes explicit problems

in the area, some of which he can solve. So I’ll tell you what

Nash said, because these are kind of important words. “A computer programme runs

in polynomial time if it scales like

the first Programme.” That’s an informal definition. I’ll give you a

proper definition. Here’s the proper

definition there. So formally, “A

computer Programme runs in polynomial time if

there’s a polynomial function– something like n to the power 20 plus 1,000.” So if you input a

number with n digits, then the computer will churn

away and it will finish. But how long will

it take to finish? The Programme will finish. But it will do at

most p of n steps, where p is that polynomial. That’s the idea. If you can write a computer

Programme like that, then you’ve got a

polynomial that’s kind of controlling how

long your computer will take to calculate. And polynomials don’t

grow too quickly. This is sort of
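As a rough model (my step counts, not the talk’s exact figures): programme 1 does about one step per digit of the input, while programme 2 does one step per zero printed, which for an n-digit input can be nearly 10 to the power n steps. The first count is a polynomial in n; the second is not.

```python
# Rough step counts as a function of n, the number of digits typed in.
def steps_programme_1(n: int) -> int:
    return n              # about one step per printed digit

def steps_programme_2(n: int) -> int:
    return 10 ** n - 1    # worst case: print the biggest n-digit number in zeros
```

Going from 2 digits to 3 adds one step to the first count but multiplies the second by about 10 – exactly the scaling behaviour we saw in the race.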

morally the thought. But also in practise, polynomial

time computer programmes are what people are after. If you’ve got a problem and you

can solve it in polynomial time, then somebody has

already written the code that will solve

that problem for you, and it will run

very efficiently. So we’re interested now in the

collection of problems which can be solved with a computer,

and furthermore, the computer can solve it quickly. Problems that could be

solved in polynomial time. And mathematicians

being what they are, they needed a name for

this collection of problems, so they call it p because

that’s what they do. So that’s now called p, the

collection of problems which we can solve in practise quickly. So now let me just mention

codes a little bit. Secret codes are really good. You agree on a secret

code with your friends and then you’ve got

the secret code, your friends have got the secret

code, but nobody else has. And now you can send

messages to your friend using the secret code. Because you got

your message and you encrypt it using

the secret code. You send to your

friend, and they decrypt it using the secret code. You’ve both got the secret

code, but no one else has got the secret

code so it works. So I’m afraid nowadays

things aren’t so easy. Because I want my phone to

kind of connect to random– yeah, I want my computer

to talk to other– if I go to a website I’ve

never been to before, my computer is suddenly

talking to another computer that it’s never met before. And I might have to type

in my credit card details if I want to buy something. And now I need to talk

to this other computer. I need to send the computer a

secret piece of information, but we haven’t

got a secret code. You see, the problem

is people can hear what you’re doing on the internet. Your ISP knows

what you’re doing. The government probably

knows what you’re doing. And people that are just

generally snooping around on the internet

might see packets going by from your computer. So this is a problem. Secret codes aren’t

good enough anymore. We need something called

“public key cryptography,” where the idea is I establish a

code we’re going to use. And even if somebody is

snooping on my packets, even if my next-door neighbour

who’s got some kind of Wi-Fi detector thing and they’re

watching all my Wi-Fi packets, I need to make sure that they

can’t find out what my credit card is. So these things are

called public key codes, public key cryptography. And rather surprisingly,

they exist. They were invented in the 70s,

when it was realised that these things were important. And the way they work is

they exploit asymmetries in encoding and

decoding algorithms. So the idea is you want to

make sure the encoding is quick but decoding is slow. So let me give you an

example of an asymmetry. If you’ve got a

piece of cotton, I’ll give you a long piece of cotton,

and I’ll give you two minutes and you can tie as many

knots in it as you like. And you can tie lots

and lots of knots. And at the end of

it you can imagine you can make this piece of

cotton into a real mess. Lots and lots of

tiny little knots. You pull them tight,

but after a while the knots kind of get– you’ve

done this, I’ve done this. The knots kind of get

closer and closer together. And they kind of start

piling up on each other. And at the end you get a nice

kind of chunky little thing that’s really

difficult to untie. So you turn a piece of cotton

into a big, complicated knot. It’s very easy. Give me a couple

of minutes, I’ll give you a complicated knot. On the other hand, you’ve

got a big, complicated knot. If you go to untie it,

it can be very difficult. It’s going to be

possible, but it’s going to take a lot

more than two minutes. So some things in

life are asymmetric. And surprisingly, something

you know a lot about, multiplication, turns out

to be one of those things. So here’s a

multiplication problem. I’m going to tell you

the answer, don’t worry. What’s 43 times 61? There’s a method. You learnt it at school. Long multiplication. For 43 times 61, we do 1 times 3 and 1 times 4, then 3 sixes and 4 sixes as well. And then we do some adding and we’ve got the answer. And how many steps did we actually take? We did 1 times 3, 1 times 4, 3 times 6, 4 times 6. We did under 10 steps. So long multiplication

is actually a really good method for

multiplying numbers together. That scales very well. Here’s the clue that

it scales very well. What are the numbers involved? The smallest number involved is

43, but did it take 43 steps? With that computer Programme

where you typed in 43 and it printed out 43 zeros– that took 43 steps. Here we didn’t use

anywhere near 43 steps. We did it at under 10. So that’s the clue

it scales very well. So now let’s un-multiply. So here is the answer, 2,183. I times two numbers

together and I got 2,183. And what were those two numbers? Now they don’t teach you

how to do that at school. So let’s do trial and error. Was it 2 times something? We could divide 2,183 by 2. It doesn’t work. We get remainder 1. So let’s divide it by 3. It doesn’t work. We get remainder 2. And so we just divide it

by 4, or if you’re smart you’ll skip 4 because you’ve

done 2 and you’ll go to 5. But you keep dividing and

dividing and dividing, and eventually you divide

it by 36, it doesn’t work. You get remainder 23. You divide 2,183 by 37,

it works on the nose. There’s no remainder. It’s 59. So we conclude 2,183

is 37 times 59. But when you think
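The trial-and-error method can be written out with a step counter. This version stops once the trial divisor passes the square root of n (a small improvement over dividing all the way up), but the scaling moral is the same:

```python
# "Un-multiplying" by trial division, counting how many divisions we try.
def factor(n: int):
    steps = 0
    d = 2
    while d * d <= n:
        steps += 1
        if n % d == 0:              # no remainder: found a factor
            return d, n // d, steps
        d += 1
    return n, 1, steps              # no divisor found: n is prime
```

`factor(2183)` tries d = 2, 3, …, 37 – 36 divisions in all – before returning (37, 59, 36). Compare the handful of digit operations that multiplying 37 by 59 takes.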

about it, so what are the numbers involved

in this question? 2,183, 37, and 59. And how many steps did it take? You see, this method is

going to scale poorly because we took 37– I guess we took 36 steps

because we didn’t divide by 1. But this particular method for

undoing multiplication scales very, very poorly, just

like the second Programme. Because the answer involved

37, and it took about 37 steps. Whereas before, to multiply

these things together, we do it in under 10 steps. So there’s an asymmetry. You can multiply them

together in just a few steps, but un-multiplying them,

factoring them it’s called, it took forever. It took 37 steps. 37 isn’t very long, really,

but here’s the point I was about to make:

every published method for un-multiplying numbers,

in fact, scales poorly. Some of them scale less

poorly than this one, but every published

method scales poorly. We don’t know how to do better. So factoring actually

might be a problem that is not solvable

in polynomial time. And now let’s scale. If instead of multiplying

two-digit numbers together we started multiplying

100-digit numbers together, then we can multiply

them together. We have to do this times

this, this times this. We have to take each digit from the first number times each digit from the second number. That’ll be 100 times 100, which is 10,000. So we do 10,000 multiplications. We do about 10,000 additions. So that’s about 20,000 steps. That’s not a problem. A computer does 30,000 steps in

the blink of an eye. But remember, I’m talking about
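That count can be checked directly; doubling for the additions is the talk’s rough accounting, not an exact figure:

```python
digits = 100
digit_multiplications = digits * digits        # one per pair of digits: 10,000
rough_total_steps = 2 * digit_multiplications  # plus about as many additions
# About 20,000 steps: nothing for a modern computer, while trial-division
# factoring of the 200-digit product could need around 10**100 trials.
```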

numbers with 100 digits here. Those numbers have

a meaning, right? We write them down

in this neat way, 1, naught, naught, naught, naught. It’s very clever. But they have a meaning. And the meaning is a vastly,

vastly huge number of things that you can’t even comprehend. And indeed it’s a number too

big to fit in the universe. It’s a number bigger

than the number of particles in the universe. So that number is

representing an idea that’s too big for our universe. If we just know the answer,

the 200-digit number, which we get by timesing

them together, and we want to

recover the factors, and we’re going to use

this method I explained, we might have to count all the

way up to a 100-digit number. And we can’t do that, because

the universe isn’t big enough, we don’t have enough time, and

we don’t have enough space, and it can’t be done– at least within my lifetime. And when you’re

sending secret codes, you’re not really worried

about whether they’re cracked after you’re gone. Yeah, my lifetime– that’s the relevant timescale. But here’s a funny

thing about factoring. If somebody actually

told you the answer, somebody said, I think

the answer might be this, and they just gave you

two 100-digit numbers, you would be surprised that

they solved the problem. But you can check to

see if they’re right or not, because if you’ve

got these 100-digit numbers, you can times them

together and see if you get that 200-digit number

you’re supposed to get. So factoring has this sort

of funny thing about it. It’s hard to do, but if somebody

tells you what the answer is, it’s very easy to check,

because multiplying is easy. So factoring, it seems
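The “easy to check” direction is literally one multiplication; Python’s integers are exact at any size, so this works even for 100-digit numbers:

```python
# Checking a claimed factorisation: one multiplication and a comparison.
def check_factors(n: int, p: int, q: int) -> bool:
    return p > 1 and q > 1 and p * q == n
```

`check_factors(2183, 37, 59)` is True, and `check_factors(2183, 41, 53)` is False. Finding 37 and 59 in the first place is the hard direction.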

to be hard to do, but it’s definitely

easy to check. And the notion of a

problem being easy to check turned out to be

another central notion in modern computational

complexity theory. So we got a name. Np is the collection of

all problems for which we can check a solution is correct

very quickly, by which I mean in polynomial time. So I’ve defined two

classes of problems. P is the class of problems

we can solve quickly, and NP, the class of problems

that, if somebody tells you what they think

the answer is, you can check if they’re right

or wrong very quickly. So there’s subtle differences,

but different they are. So every problem in p is in NP. All the problems you

can solve quickly, you can check the

answer quickly, because if you’ve got your

p problem that you can solve quickly, and somebody says,

oh, I think I know the solution and they give you

the solution, you can check it by just solving

the problem yourself. You solve the problem

yourself to see if you get the same answer. So any problem in

p is also in NP. But unfortunately there’s
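That argument is one line of code: a fast solver gives you a fast checker for free. Here `solve` stands for any polynomial-time solver; sorting is used below purely as an illustration of a problem in p.

```python
# If a problem is in p, it is also in NP: to check a claimed answer,
# just solve the instance yourself and compare.
def check_via_solving(solve, instance, claimed_answer) -> bool:
    return solve(instance) == claimed_answer

# Example with sorting, a problem we can solve quickly:
# check_via_solving(sorted, [3, 1, 2], [1, 2, 3]) is True.
```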

plenty of NP problems, problems that we can check

the answers quickly but which we don’t know

how to solve them quickly. So this raises a question. So here are some

examples of NP problems, problems which we

can check quickly, which are not known to be in p. So factoring large numbers. I’ve mentioned that already. We don’t know how

to do that quickly. But if someone tells

you what the answer is, we can check it quickly. Timetabling exams,

unfortunately, turns out to be another problem. And in general problems
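Exam timetabling shows the same asymmetry. Checking a proposed timetable is quick – scan every student for a clash – while finding one is the hard direction. The data layout here is an assumption made for the sketch:

```python
# slot_of maps each course to an exam slot; courses_of maps each student
# to their list of courses. A timetable is OK if no student has two
# exams in the same slot.
def timetable_ok(slot_of: dict, courses_of: dict) -> bool:
    for courses in courses_of.values():
        slots = [slot_of[c] for c in courses]
        if len(slots) != len(set(slots)):   # a repeated slot means a clash
            return False
    return True
```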

of that nature, I’ve had three kids going

to secondary school, and the way it works is the

council sends you a form, and you fill in the

form, and this is in like November or something. You send in the form, and

then the glorious news comes out where your kid’s

going to secondary school on the 1st of March. And why does that

take four months? It’s because trying

to figure out a way of fitting every

11-year-old in the country into a school so

that there’s just the right number of

students in each school is really, really hard. So there’s examples

all over science. I just took these

things off the computer. I’ve got no idea what

any of those things mean, but they looked very important,

so I thought I’d mention them. This one I do know

what it means. Decrypting data sent. We have a standard way to

send data to a secure website where we are

typing in credit card numbers to buy things on the internet. And decrypting that

without the decryption key is something we don’t know

how to do very quickly. But this is in NP. There’s various other

things Candy Crush is an NP. Pokemon also an NP. Super Mario Bros also an NP. Basically trying to solve

a Super Mario Bros level is very hard. But if somebody comes

along and does it, you can clearly check

that they’ve done it, because they got to the end. So we don’t know a

fast method for solving any of those problems. But we know a fast method

for checking the answers. So there are examples

of problems that are NP. They’re in NP, but we

don’t know if they’re in p. So there’s the question. Does p equal NP? If we have a problem,

and it turns out we can check to see if the

solution is correct quickly, could we actually have

solved the problem quickly in the first place without

knowing what the solution was? I’m doing all right. I thought I was

going to overrun. I’m doing great. So there you go. By 1971, Stephen Cook had

formalised this notion. And he raised this question

explicitly in 1971, having laid down some

mathematical background. So he’d given a rigorous

mathematical notion of what a problem was,

of what the solution was, and the class of

problems that are in p and the class of

problems that are in NP. There’s lots of mathematics and

rigour that I’m suppressing, but he did it, and he raised

this question explicitly. And then the year

after he failed to get tenure at UC Berkeley,

because computers in 1971 were really irrelevant. So it’s an abstract,

well-defined mathematical question. And it was an unsolved problem

that the mathematicians at UC Berkeley decided was

of peripheral interest. But you’ve seen now the kind

of implications that solving this problem might have. And it’s now regarded as one

of the most important problems in mathematics. It is one of the seven “Clay

Millennium Problems.” So if you solve it,

some entrepreneur has donated a million

bucks as motivation to try and solve these problems. So we’ve got the problems in p. They’re in there. We have problems in NP. And then the question

is, is there actually definitely a problem that’s

lurking where that question mark is that’s in NP,

one you can check quickly, but that’s definitely not in p. So we know plenty of

problems that are in p, and we know plenty of

problems that are in NP, and we don’t know if

they’re in p or not. But that’s the current

state of our knowledge. So what would happen if somebody

proved that p equaled NP? Because mathematicians

are working– mathematicians, theorists,

computer scientists, working very hard

on this problem. What would happen if

somebody proved p was NP? Then all of a sudden a

whole bunch of problems which we didn’t

know how to solve, suddenly we could solve very,

very quickly using a computer. So suddenly transportation

gets scheduled optimally. We save lots of fuel. People and goods move

around quicker and cheaper. Factories become

infinitely more efficient. Dramatic advances in

medicine, because suddenly we can cure cancer. I don’t know if we

can cure cancer. We could certainly do

much quicker experiments. Perfect computer vision,

computer language recognition. So you see the problem. It wasn’t very important

in 1971. We’d also get better predictions of weather and of earthquakes. Huge advances in

modern mathematics, because solving a

mathematical problem is a great example of

something that’s in NP. How you work on a

problem, I can’t do it. There’s lots of axioms. I don’t know which to– I don’t know how to apply them

or what order to apply them in. But if somebody comes along

and hands me a proof of this, you know, if someone says,

I can prove the thing. I can sort out the

thing you’re working on, here’s the question

you are working on, here’s a proof of it. I can read the proof

really quickly. I can check that that’s

a mathematical proof. That’s part of my training. So I can check the answer

quickly, and so if p was NP, I can solve the problem quickly. So we can just get computers

to do lots and lots of modern mathematics. It also means, of course, there

is no such thing as encryption anymore. So the government can

read all your emails, even if you encrypt them. And internet security

is now broken. You can’t type your credit

card into the internet, so online banks instantly,

everything gets– So society really changes. The internet stops working,

but maybe we can cure cancer. But the world would be

a very different place. Unfortunately, or

possibly fortunately, most computer scientists believe

that p is not equal to NP. They believe that there are

problems out there which are easy to check but

hard to solve, easy to check the solutions

but hard to solve. So what would

happen if we proved that P was not equal to NP? Then we know– then we have problems which we can now guarantee can’t be solved quickly. Now, we definitely know

we can talk in code. So I know that the

government can’t spy on me. That’s really good. And terrorists also

know that the government can’t spy on them. And all the people that are

currently selling illegal drugs and weapons on the

dark internet– and that really is

happening– they can sit comfortably knowing full

well that they can definitely not be traced. So again, the world

changes a little bit. And this is currently

the working model, right? Theresa May was very, very upset; there was an iPhone story. You remember some guy, there

was incriminating evidence on his iPhone, but they

couldn’t unlock it, and Apple said they

weren’t going to unlock it. The problem resolved itself, but

that’s the tip of the iceberg. More will happen. Again, there will

be police officers that know full well that there’s

an encrypted drive that if they could decrypt, then

this guy is nailed. But they can’t decrypt it. If P is not equal to NP, we can design codes that are impossible to break. So it should be clear that it’s important to work out whether P is NP or not, right? So all the problems

in p are in NP. But the question is, are there

problems in NP but not in P? If you tell me this is your problem in class P– we said all this already. So this is the point. I’m going to tell you the point. How does a mathematician– so

P probably isn’t equal to NP, so how is a mathematician

going to prove that? And this is the point. All they have to do is find something in that gap. They have to find a problem which is in NP. We can check the answer quickly. But they have to prove that

this problem cannot be solved quickly. That’s what the problem is. And we’re not yet

at the stage where we have the theoretical

tools to do that. This is the issue. We have to prove that

you can’t do something, and we could only prove you

could do things in the 70s, so when are we going

to be able to prove that you can’t do things? So here’s the time frame

now, the third example of doing things and

not doing things. Nash formulated the question,

gave easy examples of problems that could be solved quickly. Cook formalises the mathematics,

raises the question explicitly, and then we don’t

know what happens. Somebody at some point

probably will find an example of a problem where they can

check answers quickly but they can prove that you

cannot solve it quickly. But we are not yet there. It took 2,000 years

for the Euclid one. How long will this take? So many modern mathematical

techniques have been tried. All have failed. There was a promising

approach developed initially in 2001, and then really

prominently in 2011, using geometric

complexity theory. And again, it follows

exactly the same pattern as the other things. This used modern

algebraic geometry, which was only

invented in the 1960s, and modern

representation theory. So using these very,

very modern techniques that weren’t around for

people like Cook and Nash. Geometric complexity

theory was an attempt to resolve this question. But the big

breakthrough in April last year was a team

of people proved that actually the path

that we were following using geometric complexity

theory to prove that P was not equal to NP, they proved

that it would not work. So you see mathematicians

are good at proving that things don’t happen. But in this case it wasn’t

proving that P wasn’t equal to NP. It was proving something about our own idea– it’s sort of meta. The strategy of

proof cannot succeed. So there you go. So that’s the end of the talk. How do we prove that we can’t

solve a problem quickly? Currently we don’t

have the answer. So thank you. [APPLAUSE]

the traveling salesman is said to be NP

how do you quickly check a suggested path is optimal
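A fair question, and the usual resolution is that nobody knows how to quickly check that a tour is optimal. The version of the travelling salesman problem that sits in NP is the decision form: "is there a tour of length at most k?". A proposed tour is then a certificate you can verify quickly. A minimal sketch (coordinates and bound made up for illustration):

```python
import math

def tour_length(points, tour):
    """Total length of a closed tour visiting the points in the given order."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def check_tour(points, tour, bound):
    """Certificate check for the decision version: does the tour visit
    every city exactly once with total length <= bound?"""
    return sorted(tour) == list(range(len(points))) and \
           tour_length(points, tour) <= bound

pts = [(0, 0), (0, 1), (1, 1), (1, 0)]     # four corners of a unit square
print(check_tour(pts, [0, 1, 2, 3], 4.0))  # True: the square itself, length 4
print(check_tour(pts, [0, 2, 1, 3], 4.0))  # False: the crossing tour is longer
```

Checking "is this tour actually the shortest?" is a different, harder question, and it is not even known to be in NP.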

should there not be a set of problems outside NP that can not be solved or checked quickly

ummmmmmmmm DARPA runs an open contest where AI systems compete to analyze and patch other software…. professor funnypants didn't do his homework

Google might not have killer robots, but it has AlphaZero. So most of that talk about Kasparov vs. Deep Blue was just obsolete a year later (i.e. in 2018), when that evolution of computers in chess reached a level beyond human comprehension.

35:55 "So we've seen an example in mathematics, and we've seen an example in theoretical compuTer science – theoretical compuTer science with a capital T"

If you have any problem and just guess a solution and that is the correct solution, and you can prove it quickly, doesn't that also mean that you can solve it quickly in theory by just guessing in every single case? Not in practice, but in theory.
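That is essentially the right intuition, and it is what "non-deterministic polynomial" hints at: each single guess is cheap to check, but there are exponentially many guesses. A sketch with subset sum, a standard NP problem (my choice of example, not the speaker's):

```python
from itertools import combinations

def check(numbers, target, candidate):
    """Checking one guessed solution: a single cheap pass."""
    return sum(candidate) == target and all(x in numbers for x in candidate)

def solve(numbers, target):
    """Solving by trying every guess: up to 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if check(numbers, target, subset):
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(solve(nums, 9))          # (4, 5)
print(check(nums, 9, (4, 5)))  # True
```

So a lucky guesser finishes instantly; the catch is that a deterministic machine may have to work through all 2^n guesses, and with 300 numbers that is more subsets than there are atoms in the observable universe.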

I don't understand why trisecting doesn't work? You would just need to create a number that is a multiple of 360 copies of that figure in a circle until the figures overlap exactly, and then the rest is a trivial matter of looking what is the number at which they perfectly overlap and then simply use basic math to find any x-secting lines?

The only issue you have is just to have those figures so independent that you can still keep them apart, and not just have a single mess on the paper.

You get a million bucks if you tell the Clay group the proof. It's probably worth more than that if you don't. A lot of companies would pay big money to have a headstart on mapping out the implications.

There's a very bad SF movie about this called "The Traveling Salesman".

Proving it's not true probably isn't worth as much.

An infinite loop is not a crash; a crash happens when the computer program cannot continue. Having accidentally written infinite loops into programs that ran for over 6 days and only stopped because the operators needed to shut the machine down for weekly maintenance, I can guarantee an infinite loop is NOT a crash.

It’s almost paradoxical. The P vs NP problem is an NP problem 🤣

1. Atanasoff & Berry constructed and operated the FIRST electronic digital computer at Iowa State University from 1939-1942.

Nobody was interested in the least.

2. Turing's proof that the "Halting Problem" was impossible to solve has this (not too obvious corollary): It is impossible to predict the result of an algorithm (computer program) without actually running the program. If this is not clear to you, think about it for a few minutes and it will suddenly clarify itself.

3. Programs that can run in Polynomial Time may sound like they are computable IN PRACTICE, but this is not really true. Example: Suppose that a program with N (at least 10) as an input runs in N^(9999) time [N to the power 9999] then it will prove to be insoluble in practice since 10^(9999) Planck time units is far longer than the lifetime of the Universe so far. In the real world, depending upon the program, a program that runs in Linear time (power 1) is nearly always (with obvious exceptions) able to finish, while a program that runs in Polynomial Time where the degree of the polynomial is larger than 10 or so, will never get done. From a PRACTICAL point of view, a program that runs in Polynomial Time may very well be intractable, even when the degree of the polynomial is fairly small.
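The comment's point about high-degree polynomials can be put in numbers with a back-of-the-envelope calculation (assuming, for illustration, a machine doing 10^9 operations per second):

```python
OPS_PER_SECOND = 10**9
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years(steps):
    """How long a computation of the given step count takes, in years."""
    return steps / OPS_PER_SECOND / SECONDS_PER_YEAR

# Polynomial time with a big exponent is already hopeless:
# an input of size 10 under an n^20 algorithm needs 10^20 steps.
print(years(10**20))   # roughly 3,000 years

# A modest exponent is fine even on big inputs:
# n = 10^6 under an n^2 algorithm needs 10^12 steps.
print(years(10**12))   # about 1,000 seconds
```

So "polynomial time" is a theoretician's dividing line, not a promise of practicality; the comment's N^9999 example just pushes the same arithmetic to absurdity.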

around 50:00 Yes, computers cannot solve EVERY problem, but for the class of problems that can be solved by computers, they work well enough for use as a tool to solve those problems and save true consciousness to focus on other aspects of the higher dimensional problem.

Player versus non player? that's called pve! Lol.

It's nice that he personally doesn't think computers "think", but the terminology is horribly vague, and the way humans think is poorly understood.

Run in a loop : abs(ln(n)) Start with n=2 & reinject the result in the formula & so on . Can you tell the value after a million step without running it ? Does it ever stop ? Does it repeat itself ?
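The iteration in the comment above is easy to run and hard to predict by eye, which is exactly the flavour of the halting-style questions in the talk. A sketch that just runs it, making no claim about whether the sequence ever settles down:

```python
import math

x = 2.0
for step in range(1, 21):
    x = abs(math.log(x))   # reinject the result, as the comment suggests
    print(step, round(x, 6))
# The values bounce around with no obvious pattern; short of running it,
# there is no evident shortcut for predicting step one million.
```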

Are humans really thinking, though? I don't think humans are thinking, either, at least not in any different sort of ways from a computer.

This Radiolab story about a lady who had transient global amnesia, which put her into a loop with an iteration of about 90 seconds: https://youtu.be/GxmP4zRvi7U?t=368

She ends up having the same conversation with her daughter, over and over. Is she a machine?

Those trousers! 8-0

As a direct result of watching this, I finally understand how “secure” internet connections could ever possibly work, even a little bit, despite the fact that in order for the internet to work in most cases, you need to trust a bunch of strangers (computers) to pass your communications along to their destination. This problem has bugged me for a very long time.

If a robot is not designed to kill humans but does so by accident, is it not still a killer robot?

A divided by Zero equals NOT A where A is unrelated to NOT A except at superposition. P=NP? P divided by Zero equals NP.

Finally someone speaking truth about AI

That's quite a choice in trousers to wear for a lecture at the Royal Institution

Mr. Buzzard is trying to show that you can also prove things by talking about them very very quickly.

I feel like this is andrew riddle who was really into math and didn’t get an owl.

The first part of the talk, about computers' ability to think, was disconnected, ungrounded and actually never addressed. And by the way, it's been a long time since AI had to resolve to brute force. Given the level of the rest of the talk I think he knows that, which make the argument disingenuous.

However the rest was very nicely exposed, and it is the very first time I see a convincing explanation about asymmetrical operations.

A 4 GHz processor does not process 4 billion instructions per second. You need to look into the CPI and do the maths.

Ada Lovelace actually died at 37.

Those were his laundry trousers.

I'm a professional computer programmer. Let's just say that most people who've been educated in this field who, despite this, still say they believe computers can actually think (in any practical sense) tend to do so more out of fanciful wishing paired with misunderstandings of what sentience even is and/or due to ideological reasons (e.g. attacking people who believe in the human soul by using misleading, deceptive, unsupportable memes like "brains are just biological computers" as their weapon of choice).

29:00 Java fixes all of this.

38:40 for the happy ending.


It seems that Turing took a page (so to speak) from Godel; by feeding a program into itself, he in essence created a self-referential loop that exposed the hidden flaw in the assumption (in this case, that one could write a program to tell good programs from bad.) Pretty zarking brilliant, if you ask me! The man was certainly a genius. It's too bad that his society judged him not on his mind, but on his personal life [he was gay.] I only hope we can do better. So far though… Thanks for the great vid! Rikki Tikki.

This man seems to have no clue about computers or humans at all. All he does is state his opinions, half-truths and beliefs without any science to back them up. It is easily possible to present counter-arguments and contradict him in nearly every sentence …

He seems more like a philosophy guy who talks out of his arse than a neurologist or computer scientist.

Boston Dynamics "Big Dog" is designed to be a robotic pack animal, not to kill things. It's no more a killer than a burro.

As an extra note: If you're looking for prime factors, then as you are testing whether a divisor is composite or not, the shortcut is that you have to test only up to the square root of the number you are factoring. e.g. SQR (100) = 10, which means to find all prime factors for 100, all you have to do is test the prime numbers between 2 and 10 (2, 3, 5, 7) And that's GAG (Good As Gold)
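The square-root cut-off described above can be sketched in a few lines (a minimal version, not an optimised factoriser):

```python
def prime_factors(n):
    """Trial division, testing divisors only up to sqrt(n): any factor
    larger than the square root has a cofactor below it, found first."""
    factors = []
    d = 2
    while d * d <= n:   # the sqrt(n) cut-off from the comment
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:           # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(100))  # [2, 2, 5, 5]
```

Note this tests every d, not just primes: once the 2s and 3s have been divided out, composite divisors like 4 or 9 can no longer divide what is left, so the result is still the prime factorisation.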

To trisect any angle using a compass and a ruler use:

let angle = 90;
let trisect = 0;
for (i = 1; i < 100; i++) {
    angle = angle / 2;
    if (i % 2 == 0) {
        trisect = trisect - angle;
    } else {
        trisect = trisect + angle;
    }
}
console.log(trisect);

What is he saying?

i find it kinda scary but also really cool that we have to ask this question

A lot of the information presented here was either wrong or misleading, or even just an opinion.

First, P vs. NP doesn't technically refer to the algorithm used to solve a problem. Rather, P is the set of problems which can be verified in polynomial time with a deterministic algorithm (one input can only be processed one specific way) versus NP – the set of problems verifiable in polynomial time with a non-deterministic algorithm.

Example: the brute force algorithm used to factor around 51:00 is clearly O(n) which is polynomial time. Not only that, but it is verifiable in O(1) time with a deterministic algorithm. Many modern factorization algorithms use non-determinism to achieve expected run times of O(log(n))

we can write a program that finds loops, it's called a debugger, right? isn't this basic computer programming? also we can write "loop detection" into subroutines in the debuggers, we can safeguard them by setting the iterations. These loops that he says are problematic can be if you don't know what you are doing but please do some reading on how an accumulation buffer works when a computer game does full scene motion blur. If you know how to handle your loops it can be totally safe. at 36 minutes it really feels like the guy is about to launch into the next part of his argument after having asserted some really flawed logic. That said I will watch on because I feel like he is driving at a greater point, and I feel like I might be focusing on the detail and missing the argument Kevin is trying to make. Oh and by the way, the fact that I'm taking the time to make a massive response is a key indicator that I really care about what I am watching, I will go back to watching 🙂

"… heat death of the universe…"

Multivac, how can we reverse entropy?

Insufficient data for meaningful answer.

This guy needs to slow down 😀

To find a definitive answer to the question does P=NP. One must first understand the limitations of the system in question. In the case of computers, current technology has its limitations but future technology may have different capabilities and limitations, perhaps less, which then changes the equation but not necessarily changing the answer. Regardless, we must ask the question and find the answer to know for sure.

Slow down buddy… you'll catch on fire and burn those sexy pants

My favorite part of all these shows is Mike's table….

Many a genius has similar. They think faster than the brain can process.

It's quite confusing for regular people to claim that's it's "what computers can't do".

It's more general than that, it's about the limits of computing and logical processes, it also applies to humans which in technical terms are also computers (as in, we compute data).

Five minutes was enough and now I will carry on watching paint dry

As far as I can remember that "killer robot" is meant to be "just" a transporter thingie, replacing a mule/horse/camel (or human) carrying heavy objects on terrain where regular 4WD vehicles can't go, no weapons added (for now)…

We have way more effective (and likely cheaper/less complex=less worry if they get shot down) flying killer robots-drones.

"What would a company need, to make a killer robot?"- A licence to kill.

Whenever someone says "well a computer can't do this!" it usually doesn't take long before someone makes a computer do it. This won't be an exception.

I would throw a system of inequalities at the program at 30

"We do not know." is a very unsatisfying answer.

First of all, why would anyone want to kill another human being?

But you could trisect by making half angles 3 times. That'll be messy and hard but can be done by compass and ruler just use a large angle and different colors and it'll make sense. For small angles, you'd have to be very careful. Even the half angle can be very tricky for small angles!

I think it gets stuck at Otherwise.

Super Mario Brothers is obviously in P.

The term "thinking" seems to me to be too vague to give a definitive yes/no response.

Reasoning by a machine is possible. Human reasoning and turning it into code is where it falls apart. So, asked for the impossible, a machine wouldn't be able to decide no matter what.

For example tiling a plane is a perceptual problem that requires imagination to provide an answer.

1940 konrad zuse completed the Z2

Spreadsheets can detect circular references. (but only if they aren't obfuscated using concatenation)

If quantum computers become useful then it wouldn't necessarily be the P=NP? problem that caused internet commerce to break.

This talk is so confused, misinformed and misleading it's embarrassing 🙁

Voted down as soon as he called the Mule a killer robot, then clicked away.

8:53 According to the Wikipedia article on Deep Blue, it used an algorithm called alpha-beta pruning, which allowed it to ignore branches where its opponent would absolutely take advantage of mistakes. I know it's a small point, but it seems to be essential to your argument that the computer wasn't really thinking.

I downvoted this video partly because of small stuff like that, but also because of larger problems like how you don't give a formal definition of thought. If you don't have a proper definition, how can you be sure that computers can't do it? And if you aren't sure, why was it included in this talk?

That said, this was still interesting food for thought. Thanks.

Am I the only one who still wants the "last 2 slides" with jokes?

Make the speed at 0.75 , Thank me later

No you don't. You need data on one person to make a killer robot. The rest is public domain. It will happen. Give it 10-15 years.

Not fair to Boston dynamics when they are explicitly against working on killer robots.

41:26 what if the user types in 0? This still prints 1 zero

Those robots aren't killer robots… they are mostly designed to assist troops by bringing them supplies.

I could program that app in 2 weeks, including the drone control, face recognition etc., it hasn't been done cos it's immoral and illegal.

Algorithm at ~29:25 gets stuck in a loop if x=10.

So the proof that P≠NP is in NP.

Or is it?

My money is on the relation being one of those undecidable things.

cool

Oh! Poor camera operators following this guy around the stage for one hour…

@41:29

The expanded version of program 2 has a bug!

So many fallacies in this lecture. For example, getting stuck in an infinite loop is not the same as crashing. A crash is an unexpected exit. An infinite loop never exits. They are two completely different issues. Actually in fact they are opposites in a way.

Another: I can tie knots in a string for several minutes that can be unravelled in a few seconds.

this guy probably was behind that one black mirror ep with the boston dynamics killer robots

Suddenly I'm taking my Google+ account VERY seriously.

What about AI that learns? Alpha Go, for example?

Where can I find those pants?

… his bipedal cloth covering is distracting …

holy shit! Joe Satriani has taken off his sun glasses!

Best operational research lecture ever. I had a course @university and had a crush on the topic but never had a chance to think of it in these terms. Amazing lecture, simple yet perfectly explanatory examples. I'll suggest it to colleagues.

Dude walked 10K laps around that damn table.

Great speaker, very energetic which is what you need when dealing with complex and dry subjects.

Proving that you can write a program that can get stuck isn't the same as proving that you can't write a program that doesn't get stuck.

Btw. humans (and other biological organisms) can also get stuck – motivation is the only thing preventing people from that.

Pigeons can get stuck: put multiple pigeons in a Skinner box and there is a good chance that some of said pigeons will do random stuff (like flapping their wings) while another pigeon triggers the reward. And then said pigeon gets stuck believing that it needs to flap its wings to get rewarded. If another pigeon triggers the reward while it's still desperately flapping, it will be even more sure.

Same thing can easily happen with humans.

The only thing preventing a true infinite loop is death – but then computers break too.

Konrad Zuse 1936, hold my beer 🙂 https://en.wikipedia.org/wiki/Z1_(computer)

Without a clear negative proof, it seems premature to make claims about what computers cannot do. It is certainly clear that modern neural network systems are capable of transcending classic brute-force algorithmic analysis. The game of go, for example, famously cannot be algorithmically brute forced. And some of the problems he mentions fall squarely in the domain of quantum computer applications.

do not understand the mathematics but am aware of the implications. got ample words for it.

Encryption is solved with strange entanglement. Instant knowledge of breaches, and where.

It seems to me that it would be easy to invent a computer controlled armored machine that kills everything. The difficult thing would be trying to get it to make decisions on who to kill and who to leave alone. Too many variables. (i.e. Self driving cars that can handle every scenario is a similar problem)

If one looks at the history of self driving cars, at least the early iterations, it could be argued that they qualify as "killer robots". It's just that the killing part and the target part are pretty much random.

The number 0 wasn't invented in the 1800s by the Greeks, it was there way way before and invented by the Indians … as unlike just stories trigonometry was part of their religion ..

A computer would not wear those pants. That's for sure.

Great lecture! But the position at 6:54 is not a Sicilian.

The idea that proving P=NP true would lead to all these shocking consequences (e.g. encryption breaking) assumes that any proof of P=NP would be 'constructive' i.e. that the proof itself would outline

how (construct) we could quickly solve (move to P) something that's quickly verifiable (NP). This would be some general schema or framework applicable to any NP problem, like a computer program. Proofs, however, needn't be constructive: they needn't actually design some process to achieve the desired outcome but, rather, show its truth based on general principles. Mathematicians don't all agree that a P=NP proof would necessarily be constructive.

So, even if P=NP is proven to be true, if the proof is non-constructive then we needn't immediately worry about chaos ensuing. Designing methods for 'cracking' individual NP problems might take an unreasonably long time (indeed, we've failed to do so for many basic ones so far e.g. factoring), so the impact would be limited.

On the video it is suggested that Deep Blue was brute-forcing its way through even "silly" configurations in its search, but that would have been a gross understatement and a harsh insult to the Deep Blue engineering team. Reality was very much the contrary: Deep Blue's software was running the most sophisticated chess search algorithms of the time, which certainly did not spend time looking at silly configurations. Of course even the fact that Deep Blue used an efficient and well-researched chess search algorithm does not dispute or alter the point the lecturer was making about whether the computer was "thinking" or not.

The BIG 0!

the finger is as powerful as the pen. Every pen or nom de plume is a finger…

Awful. Did he ever get to P vs NP? I couldn't stand listening to him. I had to stop listening after 20 minutes of torture.

sorry, he lost all credibility with the pants, pop culture hipster science, tri-oxymoronic diatribe