(upbeat music) – Hi, what’s up everybody? It’s an honor to be closing out the day. I’m from Chicago, part of the Chicago Ruby community. You probably don’t know who I am; I’ve been a software
developer for about 20 years. Started using Ruby via Rails in 2006. I went to school to study speech. Audiology and speech pathology. And after graduating I
realized that my degree prepared me to do absolutely nothing. So I got an entry-level programming job at a nonprofit in my town. And one thing led to another
and I just kept programming. I figured that one day
people would find out I don’t really know any computer science, and then I would have to find another job. Now I’ve come to learn that’s called impostor syndrome: when you feel like you don’t
belong, when you actually do. And it took about 10 years
for me to kind of realize that actually I’m just as
capable as someone else who might have a computer science degree. And then, I actually had a very anti-computer-science attitude. I got to the point in my career where I was hiring other people. And I would interview college graduates who had just come out of
college with a CS degree. And I felt like they didn’t
really know anything. (laughs) In terms of actually being able to sit down and start programming, that is. That’s because I had a misconception in my head that I’m gonna explain here, but… So I just had this very strong anti-computer-science attitude. Then a number of years ago, I guess, I got to the point where I
became interested, actually. I would be reading things on
the Rails talk mailing list, or on the ruby-core list, and they’d be mentioning terms I didn’t recognize. And so I would start to look those up, and I found myself getting
into some computer science. I now teach in the masters
program for computer science at the University of Chicago. Don’t tell them I don’t have
a computer science degree. (audience laughs) Is that camera on? Oh man. But I just want to tell you a quick story of how my foray into computer science has changed me as a developer, and hopefully it can be
inspiring to you as well. The title, you might
have already caught on, is an inside joke about the famous Crockford book on JavaScript. This picture, I think, appears at like every conference, and it kind of makes the rounds. There’s “JavaScript: The Definitive Guide” on the left, and then there’s “The
Good Parts” on the right, and you can tell by the
relative thickness, yeah. (audience laughs) So what I’m gonna try to do, is try to do the same
for computer science. I’m just gonna try to make you aware of some of the good parts, just enough so you guys
can learn on your own. I’m not gonna go into great depth, I don’t need to with this crowd. But I’m just taking a very
beginner-focused approach, so you don’t need to know
anything about computer science, hopefully, to get something out of this. If you were to open up any
computer science textbook, you would see something
about data structures. And this is usually where
I close the book. (laughs) So I mean, I just can’t think
of anything more boring. But computer programming is arguably just a matter of data transformation. This is kind of a weird way to think about computer programming, especially for someone like me; I’m an object-oriented kind of thinker, and to think about data as having that primary role might seem strange. This is a photo of no one’s garage. (audience laughs) This was taken from an ad for the things you can do
to organize your garage. But I guess the idea is that if you’ve got something to put away, you want to put it away in such a way that you can easily go get it again. And depending on what
you’re trying to put away, you might need different
kinds of containers. And this is how I start to
think about data structures. You know, we use data
structures only because we need a place to put data, when we’re not using it
right at that moment. So it would behoove us to put it away in such a way that we can go get it again
easily when we need to. So let’s start with probably the most basic data structure that one learns in computer science: the linked list. Alright, so this is my
favorite linked list. And as Ruby developers, we’re accustomed to, you know, an array. A linked list is not an array; it’s like an array with just the most primitive operations you can possibly think of, to just barely be able to store a list of things. In a linked list, you have only the ability to go get the first thing in the list. From there you can only get the next thing it’s connected to. So each element is linked to the next; kind of imagine it as
like elephants going, you know, trunk to tail, trunk to tail. And, I once sat down to try to build a Ruby class, that would give me the ability to only push things into the list, get only the first element, and from that element get the next one. So everything I had to do, I couldn’t just say, give
me element number five. I would have to start
at the first element, and walk the chain to get
to that fifth element. That’s a great exercise, if you were to space out from
here to the end of the talk, if you were to just try
to sit down tonight, and write a Ruby class that
had that minimal functionality, I think it would warp
your brain a little bit. It did for me. The solution is generally
very counter-intuitive, and there are lots of different solutions. So when you hear the term linked list, that’s all it is: our entryway into data structures.
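If you want to try the exercise, here’s a minimal sketch of the kind of class I mean. The talk doesn’t show its code, so this design and the names (Node, LinkedList, walk_to) are my own illustration, and only one of many possible solutions:

    # One possible minimal design: push onto the front, expose only the
    # first node, and let each node point at the next one.
    class Node
      attr_accessor :value, :next_node

      def initialize(value)
        @value = value
      end
    end

    class LinkedList
      attr_reader :first

      def push(value)
        node = Node.new(value)
        node.next_node = @first # the new node links to the old first node
        @first = node
        self
      end

      # No random access: to reach an element, walk the chain from the first.
      def walk_to(index)
        node = first
        index.times { node = node&.next_node }
        node
      end
    end

With only push, first, and next_node available, even "give me element number five" turns into something like walk_to(4): start at the front and follow the links.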
From there, you may have heard of this idea of a binary tree. Instead of having just one connection between elements, we can have two, and the
cool thing about that is if you’re a little bit smart about how you make the connections, you can actually presort
the data as you go. And basically the way
that it works is that, let’s say we start with the number 60, great, I’ve got the number 60. And if someone then
gives me the number 31, instead of just attaching
it, I’ve got two choices. And so I just use a convenient rule. In this case, if it’s less than 60, I’ll put it on the left. And if it’s greater,
I’ll put it on the right. So, 31 would go on the left. Let’s say the next number
that comes at me is 80; that would go on the right. If the next number that came at me was 70, well, I would start at the top: 70 is greater than 60, so I move to the right. Oh wait, there’s an 80 there; 70 is less than 80, so you go to the left, and so on. And that’s how we start to populate this binary tree.
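As a sketch, that insertion rule might look like this in Ruby (my own illustration; the TreeNode class and its names are not from the talk):

    # Smaller values go to the left of a node, larger ones to the right.
    class TreeNode
      attr_accessor :value, :left, :right

      def initialize(value)
        @value = value
      end

      def insert(new_value)
        if new_value < value # less than me: go left
          left ? left.insert(new_value) : (self.left = TreeNode.new(new_value))
        else                 # greater (or equal): go right
          right ? right.insert(new_value) : (self.right = TreeNode.new(new_value))
        end
      end
    end

    root = TreeNode.new(60)
    [31, 80, 70].each { |n| root.insert(n) }
    # 70 walks the tree: greater than 60, go right; less than 80, go left.

The rule is identical at every node, which is why insert can simply call itself on the left or right child.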
And that actually makes traversing this tree to go get elements very easy; it’s kind of already in a sorted order. Who would ever use such a thing? Well, actually this kind of binary tree, if we stop to think about it for a minute, is used for lots of things. Decision trees, if you’ve worked on any sort of decision management system or
business rule logic system. Object hierarchies: like, how does Ruby actually keep track of all the classes and subclasses that we write? If you get into compilers, if you’re curious about implementing programming languages, abstract syntax trees use the same kind of approach. And you can do very
important things with it. Like a Star Wars family tree that someone put on the internet. I don’t know why. If you break free from just
having two connections per node, you get to what we call a graph. This confused me for years. I thought a graph was, you
know, like a bar graph. So I’d be reading
something and it’d be like, oh, you just use a graph. And I’m like, how do I use a bar graph for that? I don’t understand. But this graph is just a generic word for nodes that are connected somehow, and you can have as many
connections as you want. This actually turned out
to be amazingly useful for modeling things like social networks, maps, plumbing systems, electrical grids, security systems, figuring
out degrees of separation, air traffic control, if you want to somehow tilt this into a three-dimensional space where the nodes are moving, very fun. Figuring out the shortest
path between things, neural networks, it’s
how we start getting into machine learning and
artificial intelligence. So this graph structure
which a minute ago, you probably said, oh
yeah, that’s pretty easy. Well, if you thought that, great, because that’s actually your gateway drug into all of these other areas of computer science.
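To make that concrete, here’s one way to sketch a graph in plain Ruby, an adjacency hash plus a breadth-first walk that computes degrees of separation. The people and connections are made up for illustration:

    require "set"

    # Each person maps to the people they're directly connected to.
    FRIENDS = {
      "ann" => ["bob", "cam"],
      "bob" => ["ann", "dee"],
      "cam" => ["ann"],
      "dee" => ["bob", "eve"],
      "eve" => ["dee"]
    }.freeze

    # Breadth-first search: explore one hop at a time, never revisit anyone.
    def degrees_of_separation(from, to)
      visited = Set[from]
      queue   = [[from, 0]]
      until queue.empty?
        person, distance = queue.shift
        return distance if person == to
        FRIENDS.fetch(person, []).each do |friend|
          next if visited.include?(friend)
          visited << friend
          queue << [friend, distance + 1]
        end
      end
      nil # the two people aren't connected at all
    end

    degrees_of_separation("ann", "eve") # => 3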
So, speaking of maps: coming down from Chicago, actually I flew, but I was
thinking about driving, and so I punch it in and Google Maps instantly gives me these three options. How did it figure that out? How does that actually work? That’s the kind of thing
that keeps me up at night. It’s just a graph, but
the edges, the connections between all these roads, have values: distance, say, or traffic. And using those we can figure out the shortest path. That’s also great in industry; things like least-cost problems, we can model as a graph.
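As a sketch of that idea, here’s Dijkstra’s algorithm, one classic way to compute shortest paths over non-negative edge weights. The cities and mileages are made-up illustration data, and a real system would use a priority queue instead of the min_by scan:

    # Each road segment carries a weight; here, miles between cities.
    ROADS = {
      "chicago"      => { "indianapolis" => 184, "milwaukee" => 92 },
      "indianapolis" => { "cincinnati" => 112 },
      "milwaukee"    => { "cincinnati" => 400 },
      "cincinnati"   => {}
    }.freeze

    def shortest_distance(from, to)
      distances = Hash.new(Float::INFINITY)
      distances[from] = 0
      unvisited = ROADS.keys
      until unvisited.empty?
        # Settle the closest city we haven't visited yet...
        current = unvisited.min_by { |city| distances[city] }
        unvisited.delete(current)
        # ...then see if going through it shortens the path to its neighbors.
        ROADS[current].each do |neighbor, miles|
          candidate = distances[current] + miles
          distances[neighbor] = candidate if candidate < distances[neighbor]
        end
      end
      distances[to]
    end

    shortest_distance("chicago", "cincinnati") # => 296, via Indianapolis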
By the way, why does Honolulu have an interstate highway? (audience laughs) I don’t know. If anybody’s here from Hawaii,
please talk to me afterwards. I’d love to learn. This is a snapshot of our current transmission grid, electrical grid. This is an automated,
self-balancing network that we’ve been building. It’s an amazing thing, our lives now depend on
this kind of technology. Does anybody happen to
recognize this crazy photo? What do you think? Awesome. This is a picture
from the Curiosity rover that we landed on Mars a few years ago. And of course the first
thing that we taught it to do once it landed is take a selfie. Right, so there it is. The landing procedure,
this is too small to read, just note that it’s extremely complex. The way that we usually,
we land things on Mars up until Curiosity, was we
would just wrap the thing in a big trampoline, throw it at Mars, and it would, like, land and bounce, and wherever it stopped we were like, that’s a good place to
explore, right there. (audience laughs) But this was a big rover, and it cost a lot of money, so they came up with this extremely complicated parachute, heat-shield, and sky-crane system, just incredible. I heard about it only two days before it was gonna happen, and I was like, this is never gonna work. Like, I’m in computers and I know: this is never gonna work. This system was completely
under computer control, it’s not like there’s someone at NASA with a joystick, moving it around. Because one move of the
joystick, transmitted at the speed of light, takes about three minutes to get to Mars. So by the time you see the Martian monster, it’s too late, right, to actually change it. And I don’t think they thought
it was gonna work either. And…
(audience laughs) But so much of our lives is now handled automatically through computers, without
human intervention. So, that’s data structures. You can’t really do anything with them, though, without talking about algorithms. And many of you know Charles
Babbage and Ada Lovelace, kind of the first ones to really start thinking creatively, around 1820, 1830, about how to get a machine to do something with a procedure. They were just trying to do some simple mathematics at the time. And eventually, just “shortly” after that, it led to this event. (laughs) This is actually my
favorite photo of all time. This is Apollo 11, and, you know, here’s Neil Armstrong
coming down the ladder. He’s about to– do you know how long he thought about this moment? How many years he trained? How much work it took
for the space program to get him to this point; he’s about to actually step on the moon. But then I thought, wait a minute, if that’s Neil Armstrong, who’s taking the picture? There are aliens on the moon! So actually, that’s Buzz Aldrin; Neil Armstrong had already come down and he’s got the camera. But this was the first time that we trusted computers with our lives, with this early computer that we had to invent just for this mission. Imagine: after they’re done running around the moon for a while, they climb back into this piece of tin foil and push the button to lift off. If that button doesn’t work, there was no rescue plan. There was no other way to go get them. So we had at least reached the point where we could trust computers. Nowadays we don’t give
it a second thought. If we have to go to the hospital, they have this amazing technology now, and we just expect it to work. But this computer worked because a lot of people worked on it, and particularly this person. Many of you, I’m sure,
already know the story of Margaret Hamilton, lead engineer for the Apollo space program. This is a famous photo where she’s standing next to the printout; she had printed out all of the source code of that guidance computer. And she’s the one that saved their lives. Actually, if you watch on YouTube the original recording of
Neil Armstrong landing, there were multiple alarms going off, 60 seconds before he was trying to land. Her invention of, like, threading, to have the computer do
multiple things at once, and then to put a priority
idea on top of that, allowed them to land and live, instead of having the computer worry about an alarm that actually was not important at that moment. She was really the first person to coin the term software engineering. She did a lot for us to
talk about software quality, what it means to have a system that works. So, from there I want to talk about this notion of complexity. We know that for any given problem, there are an infinite
number of good solutions. How do we compare one against another? Not all implementations are created equal. And there are generally two ways that we can compare this piece of code versus that other piece of code: by looking at how each uses time, and how it uses space. You may remember Einstein talked
a lot about time and space and how they’re connected. Most people don’t know his 1907 paper, saying that time is
money, which was genius. So, the way that we give a notion of complexity in computer science is what we call Big O notation. And I did not understand
this for a very long time. And, let me just see if I
can give a couple examples. See this capital O of n,
this weird looking thing? This is how we kind of
describe the complexity of the lame implementation that I’ve got, right there in Ruby. Of course, it’s not really idiomatic Ruby, I just wanted to use it
for demonstration purposes. But here’s a method, exists: does this name-to-find exist in this list of names? I go through each one, and I return true if I find it, so I stop the loop early. Otherwise I return false.
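The slide itself isn’t in this transcript, but the method described would look something like this (my reconstruction, with an illustrative exists? name):

    # O(n): in the worst case we look at every name once.
    def exists?(name_to_find, names)
      names.each do |name|
        return true if name == name_to_find # found it, stop the loop early
      end
      false
    end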
Now, the reason that this is O of n: it means that as your input size gets bigger, in the worst case, the time it takes for this method to do its work will vary linearly
with that input size. So if I have a hundred items and it takes however many seconds on your computer, if you have 200 items, we know
it will take twice as long, in the worst case. If you have a thousand items, we know it’ll take ten times as long as the hundred did. And that’s just good to know, so that you can be aware, when you’re writing your methods, how they’re gonna do when you throw a lot of data at them. And you can graph it; it’s just a line. We say O of n, where n is the input size: how many things you’re trying to battle against in your algorithm. And here’s another implementation
of that same method. It still returns true or false, but it uses what we call a binary search algorithm. Don’t worry about the details too much here; basically, it’s just like you would look up a name in a phone book. I don’t know if anybody knows
what a phone book is anymore? I’m old, so if you told me to
look up a name in a phone book for Cincinnati, and I’ve never been to Cincinnati, I would basically just
open in the middle, right? See where I am, and I know
that the phone book is sorted, so I can then go half and
half again until I get closer. Same idea, I grab the midpoint, I look and see if I got lucky. If so, I’m done. If not, I say, okay, is my name gonna be
before that or after that, and I just recursively call that same function again,
now using a subset. This lets me zero in on a name, you know, in about twenty tries, even in a phone book that’s got millions of names.
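Again, the slide isn’t in the transcript, but a recursive version along the lines described might look like this (my sketch; it assumes the names are already sorted, and the array slices here copy data that a production version would avoid by passing index bounds):

    # O(log n): each call throws away half of the remaining names.
    def exists?(name_to_find, sorted_names)
      return false if sorted_names.empty?

      mid = sorted_names.length / 2 # grab the midpoint
      case name_to_find <=> sorted_names[mid]
      when 0  then true                                               # got lucky
      when -1 then exists?(name_to_find, sorted_names[0...mid])       # before it
      else         exists?(name_to_find, sorted_names[(mid + 1)..-1]) # after it
      end
    end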
And so this performs very well against large data sets. It’s what we call logarithmic complexity, and so we would say O of log n. One way to think about a logarithm is that it’s roughly the number of digits in your number. If you have a thousand
items, versus a million, it’ll only take twice as long, not a thousand times more. So, you can barely see that
green line along the bottom. So that’s really good. You wanna use methods that
have logarithmic complexity, if you have a choice. Anybody use the Atom editor, like I do? Okay. About a year ago, I wanna say, they did a blog post on one of the things they were doing to increase
the speed of their editor, and they talked about how
one of their breakthroughs was to use this divide and
conquer type of algorithm. They used the big O notation in there, and I was actually able
to understand, (laughs) what they were talking about. And so, hopefully if you come
across this type of thing too, then it’ll be easier for you as well. Finally, here’s a counterexample. Let’s say I’ve got an array, and I wanna find all possible combinations of its items, to make a bigger array. Here I’m just gonna map those items, but for each item, I also need to go through all the items again, in order to make all possible combinations.
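The shape of that code is something like this (illustrative names, not the original slide):

    # Nested loops: for every item, walk all the items again.
    def all_pairs(items)
      items.flat_map do |a|
        items.map { |b| [a, b] }
      end
    end

    all_pairs([1, 2, 3]).length # => 9; with n items you get n * n pairs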
This is what we call n-squared complexity; we’d write O of n squared. This is bad. Do not do this. In fact, the telltale sign is nested loops; if you ever see them, you should worry. So, as the number of items
increases only a little bit, the time that it’s gonna take
you is the square of that. And so very rapidly,
it gets out of control. There are other Big O notations, but hopefully now, if you weren’t familiar with this before, or were uncomfortable with it, you can start to look those
things up and go from there. Alright, so finally, let’s talk about the future a little bit. Sort of, what’s next in computer science. And, it’s why I’m so excited to be… A part of this. Let me go back to World War II. So who can name the British mathematician that helped break the
German Enigma machine and helped end the war? Exactly. This guy, right? (audience laughs) So… Alright, we sort of know him: oh, he did the code breaking, if you watched the movie. Actually, he did a lot more than that. After the war was over, he did a lot of deep thinking about the role of computers in society, ideas that made no sense to his peers. He was worried about: what if machines start to run our lives? And I think his peers were like, dude, it was just running a thing against the code, don’t worry. It’s the size of a room,
don’t worry about it. But he was already thinking about, what does this mean? What if machines, can they learn to think like our brains can think? And if so, what would happen? And that’s in the news today. Right, you know Elon Musk
is often talking about, should we worry about
artificial intelligence? But one of the main ideas that I learned from studying him was that computer science is not computer programming. I thought it was. It’s not. They’re really two different things. I was able to do computer
programming just fine, knowing very little computer science. Because the two things,
yeah they’re related, one can empower the other. But let’s not mistake one for the other. I believe computer science, it’s more about a way of thinking. Some people call this
computational thinking. And it means, do you know how
to look for cause and effect? Do you use logic, experiment, and empirical results? That’s part of computational thinking. Breaking a problem down into small pieces is super critical. I specialize in working
with non-programmers who are just beginning
to learn programming. And, these elements are
usually what’s hardest for new programmers to learn; they’re second nature to the rest of us. Being able to do thought experiments, like Turing did about
artificial intelligence, and machine learning, but
also the inspiring thing about Alan Turing was he
focused on what was important. When he found himself in a bureaucracy that was more concerned with minutia than the important things of the day, he would get out of that situation. He just couldn’t stand it. And it reminds me of this other hero in our history of computer
science, Grace Hopper. She was from the same era. She became one of the first female admirals in the Navy, she basically invented COBOL, and she was the first one to give us the notion of a compiler, and with it the notion of an
indirection between the code that you write and the
actual machine language code. Maybe from there we get the
adage that there’s no problem that another level of
indirection can’t solve. She tells a story of how
generals would come to her, this would be in like the ’60s, and they would complain, and they would say: I’m trying to get this communication from ship to shore, and it takes so long. It’s going up to a satellite
and down at the speed of light, why is it taking so long? And she would look at them,
and she would take out a piece of wire that is this long. I have a dozen of them up here
if you want to come up later and grab one, I call this
the Grace Hopper wire length. She would ask them, do you know how far light travels in one nanosecond? And she would hold up this wire. It’s 11 and 5/8ths inches,
you know, almost a foot.
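You can check her number quickly; the speed of light below is the defined physical constant, and the variable names are mine:

    # How far does light travel in one nanosecond?
    speed_of_light = 299_792_458.0    # meters per second (defined value)
    meters_per_ns  = speed_of_light * 1e-9
    inches_per_ns  = meters_per_ns / 0.0254
    inches_per_ns.round(1)            # => 11.8, just shy of a foot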
She said, so when you’re trying to send something from ship to shore, up to the satellite and down, give a brother a break. It’s gonna take the light a little while to be able to do that. And she’s also kind
of famous for a log entry, recorded back at the beginning. This is too small to see, but it reads: relay number 70, panel F, (moth), found in relay. This was the first bug. (laughs) They had suspected that
insects were getting into the circuitry and
that was causing problems, but now she finally proved it. But she didn’t just work
on the technical side. She was inspiring because she said, humans are allergic to change. I don’t know if you work
somewhere like this. They love to say, we’ve always done it that way. Well, I try to fight that. It reminds me of Turing. She had a clock on the wall of her office that ran counter-clockwise. And people would be like,
you know your clock, uh… She’d be like, why does
it have to run this way? Is there a technical reason why? No, we only do it out of tradition. But challenging assumptions is such an important part of computer science. We need people to push the envelope. Back in the ’70s, the famous rock musician Frank Zappa said that without
deviation from the norm, progress isn’t possible. And I think that’s true. She said, you know, a
ship in port is safe, but that’s not what ships are for. Sail out to sea and do new things. Many of us here are entrepreneurs, others of us are not, but we’re
all trying to do new things. Now, the sea is a scary place, if you’ve ever actually sailed out to sea, it’s a scary place. We’re fearful of becoming
isolated, or falling overboard. But I’m perhaps most grateful for having encountered computer science for the following reason: I am not alone. We are not alone. Hopefully, as you’ve seen, we are standing on the shoulders of the work
of all these other people. And, computer science isn’t really about data structures and algorithms. It’s about joining a
long tradition of using computer programming to advance the world. Life is short. To many of my friends and family, I have these amazing super powers. I can make apps. (laughs) So what am I gonna use it for? I would submit to you to
do something important. Something that’s important to you or your family, or your community. You know, Matz has said that
perhaps the biggest invention to come from the Ruby language is the Ruby community. But we need to help each other. I mean, like it or not, we depend on each other. Just look at our Gemfiles. (laughs) I can’t, I cannot pay my mortgage without the work that many of you do. So, instead of reinventing
the wheel next time, contribute to someone’s
existing open-source project. Or mentor someone else who’s learning something you already know. If we make the tide
rise just a little bit, all of our boats will be lifted up. Thank you very much. (audience claps)

RubyConf 2016 – Computer Science: The Good Parts by Jeffrey Cohen