MARTHA MINOW: Good afternoon. I don’t think there’s
any other person who has a chaired professorship
at the Harvard Law School who would be doing his own AV. But of course, it is
thrilling to welcome you all to this splendid occasion. We will soon hear
the lecture Love the Processor, Hate the
Process– The Temptations of Clever Algorithms
and When to Resist Them by Jonathan Zittrain. But first you have
to hear from me. We are here to mark the splendid
occasion of the appointment of Jonathan Zittrain
as the George Bemis Professor of International Law. Everyone here knows Jonathan. Some know him as
JZ– more on that later– but that won’t stop
me from telling you about him and indeed raving about him. And some may be surprised
by the association with international
law, so I thought I’d create a little mystery. Stay tuned. First, I’m really
delighted to welcome some special guests– several
members of JZ’s family. Sister Professor Lori
Zittrain Eisenberg, brother-in-law Dr.
Michael Eisenberg, and niece Hannah Eisenberg,
and fiance [INAUDIBLE]. Now, the mystery of
the international law chair and why a chair
named for George Bemis is not only appropriate
but elegant for our remarkable Jonathan Zittrain. Bemis attended Harvard
College and studied law at Harvard Law School and earned
his degree not that long ago– 1839. He was born nearly
200 years ago, in 1816, just up the road in
Watertown, Massachusetts. His ancestors actually helped
to found the town of Watertown in the 1640s. While in law school,
Bemis taught classes at the Charlestown State
Prison, and this experience stoked his passion for reform,
for prison reform initially. And his initial
focus was the system of cumulative
punishment, and his work produced an overhaul of the
scale of criminal penalties, actually something we
might well use again now. Bemis made news in 1844 when
he defended a convict on trial for murdering a prison
warden by using– Bemis that is– an extenuating plea of
insanity and uncontrollable impulse. This led to his invitation to
assist Attorney General John Clifford in the
prosecution of what became the most famous or
infamous trial of its time– the Dr. George
Parkman trial, one of the first trials
in the United States to use forensic evidence. British historian Simon Schama
published a book about it. PBS did a documentary about it. If you’re interested,
you can even see the books on the iPhone
application that’s about it. So you see this is all
connected in one way or another. Bemis retired to Europe and
lived the rest of his life abroad and turned his
attention to international law and served the United States
government in connection with the claims for
compensation lodged against the British
government for permitting the Confederate Cruiser
Alabama to take on arms and escape from British
ports during the Civil War. He also wrote pamphlets
relating to the problems and issues of neutrals and
belligerents in wartime. This is all going to connect. I promise. When he died, in his will
he provided for the founding of a professorship. And he said in his
will he wanted to pay
tribute to, and I quote, “the instruction
which I derived in the legal department from the
lips of the late Judge Story, whose memory I cherish
as one of the best guides to study whom I ever shall
have the good fortune to meet, and whose friendly stimulus
to exertion I shall always gratefully remember.” The terms of the gift call for
establishing and maintaining a professorship of public
or international law, and I will give you the choice. You could have it be
called public law, but I’m going to
make the case right here for international
law because Bemis wrote that he wants the recipient
to be a practical cooperator, and I’m quoting
him, “in the work of advancing knowledge
and goodwill among nations and governments. For that object I should
prefer, if practicable, the incumbent should have
had some official connection with public or diplomatic
life or at least have had an opportunity by
foreign travel or residence to look at the United States
from a foreign point of view and so to estimate it as only
one of the family of nations.” Here it comes. Who better than
Jonathan Zittrain can bring a practical
sense of cooperation in advancing
knowledge and goodwill and look at the United States
from a foreign point of view than the person who
understands the internet better than anybody? Than the person who actually
has served in the government and served the
government in addressing issues of our new day? Who’s lived in these very
foreign parts– Silicon Valley, Oxford, NYU, Stanford– and who
actually most importantly, has the incredible
commitment and ability to come up with ideas like,
what if when there’s a denial of service attack you had the
different service providers behave as if they were on
the sea during war time, finding a way to cooperate
and help each other? That’s Jonathan Zittrain. So a little bit more about
him because I can’t resist. I don’t know any other
professor in the history of Harvard University who
has been as heralded, who is in such demand, that he
is a professor at the Kennedy School of Government, at
the engineering school as well as at the law school. And here at the law school he is
faculty director of the Berkman Center for Internet and Society,
itself a university center, and he is the co-founder of it. But that’s not all. He’s also vice dean for Library
and Information Resources, and as the director of the
Harvard Law School Library, he has brought an
extraordinary vision about how information should
be accessible and open to all and how to innovate constantly. To innovate, to come up with
practical solutions to problems like StackLife so
that when you zoom in and you find the right thing
through your digital searching, you also have the serendipity
experience of seeing what’s next to it on the shelf. And Library Cloud. Another project, both
of them recognized by the Stanford Prize for
Innovation in Research Libraries. And H2O. How many of you have
encountered H2O? Anybody here? So the possibility of using
the digital resources to bust open the monopoly of publishers
on the case publishing business, something that
hits me in the heart because I’m an editor
of two case books. And Free the Law. Free the Law, a fascinating
and exciting project to open up the resources and
riches of the Harvard Law library to the world. A man of capacious intellect. There’s a reason why he’s
known by just his initials. He’s a rock star. JZ. And underlying all
of these talents is just an extraordinary level
of generosity and friendliness and good humor. Good humor– you’ll
see some in evidence, making him such an extraordinary
teacher and collaborator. Child of two lawyers. One of his early books
was a look at the tort law through the work
of his father, who had been attorney for football
player Mean Joe Greene. And in his introduction to that
book, Jonathan writes that he and his co-author, who was
also the child of a lawyer, were grounded in and taught
to love the law at early ages by their parents. When he was an undergraduate
he studied cognitive science and artificial intelligence. A magna cum laude
graduate of this school, his name is engraved
on a plaque in Langdell for winning the Williston
Negotiation Competition. This explains how
you’ve nego– no. He also has a master’s
degree from the JFK School of Government. And he has, as I’ve
already alluded to, been a professor of Internet
Governance and Regulation at Oxford University and was
a Principal of the Oxford Internet Institute. He’s been a visiting professor
at New York University School of Law and at Stanford Law School. And his research interests
range from battles for control of digital
property and content to cryptography,
electronic privacy, the role of intermediaries
within internet architecture, and the useful and
unobtrusive deployment of technology in education. His many books include
The Future of the Internet and How to Stop It, and
he crowdsourced the cover. So in every aspect
of his work he exemplifies the cutting
edge, innovative approach that informs his ideas. His students and he
collaborated on a website called Chilling Effects,
which tracks and archives legal threats that are made
to internet content providers. And he performed the
first large scale tests of internet filtering in China
and in Saudi Arabia as part of the OpenNet Initiative. Get it? International. International law. And he created the
website Herdict, which is a user driven platform
for identifying web blockages as they happen, including denial
of service attacks, censorship, and other kinds of filtering. He is a leader in the
global conversation about access to information.
I can’t tell you– after the Davos meetings
of the World Economic Forum each year I get
this wave of emails saying, Jonathan Zittrain
is incredible, which I know. And some of these messages
are in another language, so that’s really something. He wrote an open letter
earlier this year to British Prime Minister
David Cameron warning against upending the
long established balance of security and privacy. And his four point message
to the British prime minister shows what a great
teacher JZ is. And that may be one
of the many reasons why Harvard University featured
him in its 375th anniversary as one of the great teachers. And then there’s
Foreign Policy Magazine recognizing him as one
of the 100 foreign policy global thinkers. Get it? International law. He serves on the
board of trustees of the Internet Society and
Electronic Frontier Foundation. He has taught me, as a
colleague and as a co-teacher, more than I could
possibly tell you. He’s on the board of advisors
for the Scientific American. He’s a member of the Council
on Foreign Relations. He’s been a Forum Fellow at the
World Economic Forum, which named him a young global
leader, and he still counts as young for sure. And he was appointed chair
of the Open Internet Advisory Committee, which was called for
by the Federal Communications Commission to track and evaluate
the effects of the FCC’s open internet rules and
to provide recommendations to that commission regarding
policies and practices related to preserving the open internet. He was appointed also to serve
as the first distinguished scholar of the FCC. And in making that announcement,
the chair of the Commission, who is also an alum of Harvard
Law School, Julius Genachowski, described Jonathan, and I
quote, “as one of the world’s leading strategic
thinkers on communications policy in the 21st century.” As I turn to Jonathan,
I will briefly tell him I couldn’t resist
crowdsourcing a little bit. So I did ask a couple people
what I should say about you, and I’ll just share two. There are just
too many to share. One is from Dan Meltzer,
the Story Professor of Law, who says, as a student,
Jonathan was much as he is today– brilliant,
poised, exuberant, and hilarious. One day, in response
to a question in class, he answered, yes. Only with so talented and
justifiably self-confident a student would I have dared to
respond, Mr. Zittrain says yes. Can anyone think of a shorter
and possibly more accurate answer? [LAUGHTER] And Dan Meltzer goes on to say,
Jonathan appreciated the retort even more than
anyone else, and now he’s gone on to become
a renowned scholar, internationally recognized
figure, and indeed a celebrity of sorts. His good humor, his enthusiasm,
and his decency towards others have remained undiminished. Finally, Larry Lessig, the Roy L. Furman
Professor of Law, says one source
of JZ’s brilliance is his ability to keep
constantly in view the views of everyone in his audience, whether writing, speaking, or
teaching, engaging in a way that responds to even a
radically mixed audience of views. Never did I see
that skill tested more severely than
when we were teaching an iLaw class in Brazil. At a celebratory
dinner, our host honored the leaders of the
course, and JZ in particular, by inviting an incredibly
scantily clad– actually, almost not clad–
Brazilian dancer to dance with JZ in front
of the assembled students and teachers. This created an
obvious conflict, as the audience ranged between
people from our perspective who thought the display
quite inappropriate, to people who thought
it expressed traditional and engaged Brazilian culture. JZ met the challenge through a
mix of hiding under the table. And then when finally flushed
out, graciously acknowledging the invitation to dance with
a generous and warm smile and a bow and then a
retreat to the applause and laughs of almost all. I introduce to you
Jonathan Zittrain, the George Bemis Professor. [APPLAUSE] JONATHAN ZITTRAIN: Thank
you very much, Martha. I can’t think of the
occasion for which hiding under the table is
not an appropriate response. And I confess, I’m somewhat
tempted to try it right now, but you’ve outmaneuvered me. There’s not room
under the lectern. And it does seem
like only yesterday, even though it was
a bit ago now that I was coming from Dan
Meltzer’s criminal law class having been slightly
put in my place. And in fact, in that
era, I remember right before Thanksgiving break
my parents, both lawyers, were coming to visit me at
Harvard Law for the first time, and I was very excited
to show them the place. And they arrived on the
Wednesday of Thanksgiving weekend, and the one
thing my dad wanted to do while he was here
that he’d been talking about was he really wanted
to see the library. He loved libraries. He loved history. He was so excited to see
Langdell Library, which as law libraries go was a
pretty good one, I must say. And I was like,
yeah, yeah, yeah. We’ll see the library. But first, I got to
show you The Hark. And I was just–
the folly of youth. I don’t know what
I was thinking. Wyeth Hall, which
used to be here, and I think they
performed an exorcism after they knocked it down. But I just showed
him everything. And it was finally
getting late in the day and he’s like, we really need–
please can we see the library? I’m like, all right. We’ll go right now. And so we’re walking
over towards Langdell with the big windows. You can see in. And as we’re walking,
we see in sequence the lights going
off at Langdell, and I’m like, run for it. And we go up the stairs
and basically get to the entrance of Langdell. It’s pitch black inside, and
there’s the guy locking up, and I’m just like, please,
my parents are here. You’re about to close for
all of Thanksgiving weekend. My dad just really
wants to see it. Can you– and he looked
at me and he said, these are natural gas
fired lighting fixtures. It takes 25 minutes
to turn them back on, which was true before
the Langdell Hall renovation. And he then managed
to meet us halfway. He pulled out a flashlight, and
we got a tour of Langdell Hall Reading Room by flashlight. And you haven’t really
experienced law school until you’ve done that. Sadly, insurance
precludes it now. And I must say
though, metaphorically speaking, I feel like
in the intervening years since I returned
here in a new role, I feel like metaphorically
I’m still exploring Langdell Hall with a flashlight. And to have the
help that I have– the colleagues, the friends,
the fellows, the students who are together in that
search, not counting on just the overhead
light to do the trick– is part of what makes this
place such a wonderful crucible. Such a great place to
try to explore the world. And it’s been a particularly
special thing in the past year, not only to be connecting
more with the Berkman Center but to have a role
at the library, both across the board and
with the Innovation Lab staff. And I saw its immediate former
co-director, David Weinberger, was here, and it
was actually David who did some of the programs
that Martha mentioned, including StackLife
and Library Cloud. And amazing to see the energy
that’s here and really my role these days in just helping to
catalyze it and accelerate it. And so I just feel so
blessed and privileged to be a part of that. Beyond the circles of faculty
and fellows and friends within the Harvard orbit, there
are communities of scholars out there, and cyber law
is one of those fields that had relatively recent beginnings
and from its beginning had its existence questioned. The famous article
by Larry Lessig on The Law of the Horse
recounts Judge Easterbrook wondering why there’s
a cyber law at all. It makes no more
sense than it does to have a law of the horse. And in the early days,
a lot of cyber law was focused around
intellectual property issues and what the online world
was doing with the copying of information. And that then receded, and it
was sort of quiet for a while. It wasn’t a unifying thing. We each went off
into our own corners and worked on different things. And I think I feel another
centripetal moment coming on. And what I thought I
would talk about today would be some of the
joy and excitement I’m feeling in the face of a new set
of really puzzling challenges that– it feels like overnight,
but probably more accurately just in the span
of 6 to 18 months– have come into the view of
multiple people simultaneously. Not just within law, but
across multiple disciplines as we’re all trying to
figure this thing out. And that’s really what I
wanted to talk about today. So how to talk about that? Well, one anxiety of our
time that will definitely date this talk as circa
2015 is the worry around what artificial
intelligence is going to do to us. Bill Gates, no less. Funny that he would
be worrying about it. But yes, Bill Gates is really
worried about the threat posed by artificial intelligence. Here’s Stephen Hawking, a
famed theoretical physicist worrying about it. Elon Musk said
that AI is nothing short of a threat to humanity. With artificial intelligence,
we are summoning the demon. And Nick Bostrom,
philosopher at Oxford where I spent several years, has
this wonderfully Newsweek kind of view on it. The end of every
Newsweek article is the future is unclear
but one thing is certain. If things don’t get better
they can certainly get a lot worse. He says, super
intelligence could emerge, and while it could be
great, it could also decide it doesn’t need
humans around or do any number of things
that destroy the world. So, you know, anyone’s guess. Let’s just wait and watch. Let’s just see what happens. And I, without spending
the rest of the talk on it, want to align myself
with those who don’t see AI as a big threat. I’m actually more optimistic
than I am pessimistic about it, but this kind of
existential angst, second only to
zombies these days, is capturing a set of
anxieties within us that have to do with technology,
autonomy, power, and control. And those feelings
aren’t coming from nowhere. Those are feelings
that really are coming from the
development of technology, wholly apart from
whether there’s going to be the terminator
as a documentary. And so I want to situate
these anxieties in a much more present and concrete context. So here’s present in
concrete or slightly past. This is it. The PageRank algorithm. This is what Google
originally used, in a nutshell, to figure out, when
crawling a ton of web pages all over the place,
what to rank where so that when you put in a search
term something would come back.
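A minimal sketch of the idea, for the curious– this is the power-iteration version of the original Brin and Page formulation, not Google’s production system, and the damping factor and the toy link graph are illustrative assumptions:

    # A toy sketch of the PageRank idea (Brin & Page, 1998) --
    # not Google's production code. Damping of 0.85 and the tiny
    # link graph below are illustrative assumptions.

    def pagerank(links, damping=0.85, iterations=50):
        """links: dict mapping each page to the pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}           # start with equal rank
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if not outlinks:                     # dangling page: spread evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                else:
                    for target in outlinks:          # share rank across outlinks
                        new_rank[target] += damping * rank[page] / len(outlinks)
            rank = new_rank
        return rank

    # A page is important if important pages link to it:
    print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))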
And its parallel these days is the Facebook News
Feed algorithm. It’s so secret I don’t have
anything to put up on the screen except its
result rather than the recipe. And for those of you who
have been really working on that book for a while and
not doing anything else– the Facebook News Feed
will populate your screen with all sorts of stuff. One thing after another
that might interest you. If you like it, you click like. If you don’t like
it, you click like. And you don’t know, among the
many hundreds or thousands of things that could
appear, how Facebook decides what’s to show you. And I want to now run
through basically 5 and 1/2 hypothetical or not so
hypothetical situations and capture the zeitgeist
of this room of 2015 as to how we feel
about each of these. So for each of
these, at the end, I’m going to call for
a hum on your view, and if you agree one way at
the right moment you’ll hum. And if you think the other
way at the right moment you’ll hum in the
other direction, and we’ll see where we stand. So here’s the
first hypothetical. This is Facebook
News Feed oriented. And of course, the
feed doesn’t just show you stuff on the basis
of some fixed recipe off in a corner. It’s learning about you as
you act on or near Facebook. And in this case, for
instance, Facebook can predict when two
Facebook members are about to enter into
a relationship, possibly before
even they know it, which gives them an opportunity
to decide to move things along a little bit by
populating the other’s feeds with positive things
about the first. Or if it decides
it’s not a good idea, or a would-be in-law who’s not
pleased writes them a check, it could decide to pull
them apart, kind of thing. That’s a little
far fetched so far, but with this kind
of knowledge you can imagine a more
concrete hypothetical that I broached
last summer based on this very real
experiment done around the congressional
elections of 2010. In that November
of 2010, Facebook, in conjunction with
some researchers, did an incredible thing. They salted the News Feeds of
tens of millions of visitors to Facebook on election
day from North America with this message that
says, today is election day. You can find your
polling place, and here are some of your friends in
particular who have voted. Facebook is in a privileged
position to know that. And they waited
to see, for those whose feeds they put
the message in, did they vote out of
proportion to those whose feeds
didn’t have anything in them? And the answer was, yes. Statistically significantly so.
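To make “statistically significantly so” concrete, the underlying comparison is something like a two-proportion test. The numbers below are made up for illustration– they are not the study’s data– but they show how, at Facebook scale, even a tiny lift in turnout is overwhelmingly significant:

    # Illustrative only -- invented turnout numbers, not the study's data.
    # Did users shown the banner vote at a higher rate than those who weren't?

    from math import sqrt, erf

    def two_proportion_z(votes_a, n_a, votes_b, n_b):
        p_a, p_b = votes_a / n_a, votes_b / n_b
        pooled = (votes_a + votes_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
        return p_a - p_b, z, p_value

    # Hypothetical: 60.1% turnout with the banner vs. 59.7% without.
    lift, z, p = two_proportion_z(6_010_000, 10_000_000, 5_970_000, 10_000_000)
    print(f"lift={lift:.3%} z={z:.1f} p={p:.2g}")  # tiny lift, huge significance at scale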
And enough that, for an election as close as the US presidential election
of the year 2000 in Florida, it could have tipped the
election one way or another with a number of
extra votes, depending on what Facebook had done. So here’s a hypothetical. Suppose that it’s the
2016 presidential election and Facebook has a firm view
as a company, as represented, say, through Mark
Zuckerberg, about who should win the election. And Mark decides
to put reminders in the feeds of–
take your pick. Republicans or Democrats. Whoever you don’t like most. And it alerts them
to go to the polls, and it says nothing
to those who you like and who Facebook doesn’t. And let us suppose,
as a result, we can document that the
election outcome was changed. So here’s where I’m
going to call for a hum. At the count of
three, if you think that would be an awful thing to
do– Facebook absolutely should not do that, and if
it did do it they should possibly be in line
for some form of punishment– let me know with a hum
on the count of three. One, two, three. [HUMMING] All right, so a somewhat
uncertain hum but consistent. If you think that’s
perfectly fine. Electioneering is
the name of the game. Let the good times roll. Facebook is entitled
to do what it pleases. Let me know with a hum. One, two, three. [SOFTER HUMMING] OK. Three people in the
front row have a view. Couldn’t see if it
was Charles Fried. [LAUGHTER] I will withdraw that
calumny right now. All right. Sounds like pretty lopsided,
not happy with that. Let’s move on. Here’s the next
hypothetical, and this was highlighted by Berkman
fellow Zeynep Tufekci. Thinking about the two events
that consumed us last summer as a nation. First, the riots in Ferguson and
the aftereffects, and second, the ice bucket challenge. These were the two things
that were really big from the summer of 2014. And here’s what Zeynep assembled
noticing about this fact. First, that as Ferguson
was getting more and more unrestful, you can see
in this chart of Twitter mentions Ferguson
going off the charts. Twitter. Everybody’s mentioning it. Retweeting stuff about Ferguson. Going off the charts. And then, others were starting
to notice it’s so weird. My tweetstream is
wall-to-wall Ferguson. Only two mentions of it
in my Facebook News Feed. What’s going on? And in fact, it does turn
out that Facebook News Feeds had many more ice
bucket challenges than they did talk of Ferguson. So here’s now the hypothetical. Suppose it’s the summer of
2015 and it’s not Ferguson. It’s some other town, and
we don’t even know why, but there is unrest
in that town. And you are at Facebook and
a law enforcement official, perhaps the new US Attorney
General, calls and says, I’m not going to order
you to do anything, but I want you to know the
unrest is looking like it might spread to other cities. And we have reason to think
that when people are uploading videos, well short of
incitement but still that are showing
unrest in this town, it might spread to other towns. We are asking, as a civic duty
in the interests of protecting physical safety, we are asking
you to do a few fewer shares in news feeds of
video of violence taking place in this town
and a little bit more of the ice bucket
challenge of the day. OK, that’s the question. How many people
say, absolutely not. I will not do that. Whatever this roulette
wheel of an algorithm that even I don’t
understand spits out is what they’re going to get. By gum, I won’t hand tweak it. How many say that? One, two, three. [HUMMING] All right. Very confident humming now. How many people are like, I
am open to that discussion because I believe
in saving lives? One, two, three. [LAUGHING WHILE HUMMING] If you’re laughing
while humming it’s hard to hear, but less
enthusiasm about that. OK, let’s mark that. It’s less enthusiasm than the
first at this kind of maneuver. I should say, by the
way, one inference about why there was less
Ferguson on Facebook– much was ascribed
to Facebook somehow being in cahoots with the
government trying to tamp down news out of Ferguson. I think it was
actually that Facebook was really excited about
hosting native video and the ice bucket challenge
happened at just the moment that they were pushing
native video up. If you recorded a video
and put it on Facebook they really wanted to
share it because they wanted to promote it. And in fact, that’s
what made the ice bucket challenge take off. I can think of no
other explanation for why pouring ice
on your head in order to avoid giving to charity
would be something that would be extremely exciting to do. All right, third example. For the longest time, if you
performed a search in Google, not on the word Jewish
but on the word Jew, one of the top hits, in
this case the second hit, would be the site jewwatch.com. The most comprehensive
and easiest to use– because sometimes websites are
hard to use– because sometimes websites are hard to use– website dedicated to both current and historical
Jewish news, organizations, et cetera. It has a famous Jews list,
Jewish entertainment, Zionist occupied governments–
but wait a minute. [LAUGHTER] This is an anti-Semitic site. And some people noticed that
and complained to Google and were like WTF. So Google took firm action. They bought themselves an
ad at their own prices, on their own
service and said, we are disturbed about
these results as well. Please read our note about it. And if you click through,
you get a message from Google that’s like we feel terrible. This is not the Google
we strive to be, and yet, we couldn’t
possibly do anything about it because that would be messing
with the secret algorithm that none of us
understands anymore. We won’t change the outcome. OK. Let me again ask. Each time I’m asking
the first question it’s in favor of
non-intervention and the second is in
favor of intervention. So how many people would say
Google did the right thing here by refusing to
adjust the position of the anti-Semitic site
after the complaint? One, two, three. [HUMMING] How many people say
Google should have taken action on the rankings? One, two, three. [SOFTER HUMMING] OK, very clear
consensus in the room. Not unanimity, but
consensus, that Google did the right thing here. All right. Example number four
I think we are on. I Googled on Sunday
vaccinate my child, and these are the results I got. The first of which
is anti-vaccination. The second of which is pro. The third of which is
anti, the fourth of which is pro, from which you might get
a sense that it is completely up in the air, as best humanity
knows, whether vaccinations are a good idea. Should we ask
Google– should Google think about whether to change
the ordering of those results? I’m going to take off the
table the idea of buying an ad, being horrified
by these results. You might be, depending on
either side that you’re on, but we support you either way. So how many people think
this represents a problem that Google should fix? And I’ll just put it on the
table– in favor of pushing down the anti-vaccination woo? W-O-O. How many people would
like to see that happen? One, two, three. [SOFT HUMMING] All right. Not very much enthusiasm. How many people
say, let it ride? One, two, three. [LOUDER HUMMING] All right. We are status
quoists in the room. We like that roulette wheel even
though we don’t know the odds. Ah, yes. Here is the fifth example. This is mugshots.com, a website
with a fairly clear business model. It gathers public
domain mugshots that exist from police
departments around the nation. It gathers them and information
about the people who have been arrested, and then it
makes clear that search engines should try to find it. And if you type in
this guy’s name, odds are good,
traditionally, that this would appear pretty
high up because it’s a quote, unquote “relevant”
search result on his name. Now, I realize he
looks pretty happy. He was booked for
marijuana possession. You could imagine that later
he regretted it and might not want this as a first
hit on his name. So again, I ask. Are you open to fixing the
roulette wheel a little bit so that this kind of
site goes down? And before you answer,
let me remind you that there’s a little button
here to unpublish the mugshot, and it costs only $400 to
have your mugshot removed from mugshots.com. Of course, that does
not cover mugshots-with-a-z dot com or any
of the other 50 sites, potentially run by the same
person, that do exactly this. All right. So what do you say? How many people think
Google should finally violate what you appear to
think is its prime directive and change the formula to push
down the mugshots.com sites? One, two, three. [HUMMING] All right. I got some votes here. And how many people
say, let it ride? One, two, three. [HUMMING] All right. We are even on the mug shots. It may surprise and
amaze you to know that Google finally decided
enough with the mugshots. And not because they
were somehow doing undue search engine optimization
but because it basically did seem an extortion
racket, and The New York Times had written
a story about it and they just decided enough. And this greatly affected the
business model of mugshots.com and similar sites and
made life better perhaps for people who had
been arrested and had public mugshots about it. All right, so there
are our five examples. Let me do the half real quick. And the half is grounded
in the observation that around the corner,
life is less and less going to be I search for
something and I get results and then I click on stuff. That’s very 2005. More and more it’s going to be
concierge-like services, where
question of your AI and it gives you some
form of answer back. And it’s kind of the answer,
and that’s how you go. That’s the model behind Siri. It’s the model behind
something like the Nest Thermostat, where you could
set it to a temperature. Good luck getting it to
stay at that temperature. The Nest is too smart for that. It’s going to change the
temperature of your house depending on what it infers
you probably want at the time, giving very little
weight to what temperature you set the thermostat at. So here’s the half example now. Suppose that Nest takes
some money from NStar or whatever they’re
called these days. Energy source? Eversource? They take money from
Eversource, the local utility, to lower every person’s
thermostat in the jurisdiction by half a degree, saving a
lot of money for the utility, saving maybe some
burden on the grid, and that’s just one of
multiple factors that go in.
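A sketch of what “one of multiple factors” might look like– the weights and the blending rule here are invented for illustration, not Nest’s actual logic:

    # A minimal sketch, with invented weights -- not Nest's real algorithm.
    # The thermostat blends what you asked for with what it infers you want,
    # plus a utility-sponsored offset.

    def effective_setpoint(user_setpoint_f, inferred_preference_f,
                           utility_offset_f=-0.5, user_weight=0.3):
        """Blend the user's setpoint with what the algorithm thinks they want."""
        blended = (user_weight * user_setpoint_f
                   + (1 - user_weight) * inferred_preference_f)
        return blended + utility_offset_f   # the half degree the utility paid for

    print(effective_setpoint(70.0, 68.0))   # you asked for 70, you get 68.1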
How many people think that’s not fair, don’t touch the thermostat
in that circumstance? One, two, three. [HUMMING] Oh, anger at saving energy. How many people say, bless
you, that’s a great idea. One, two, three. [HUMMING] All right. A few less than that. All right. We’ve done a bunch of
votes and case studies. Let’s now try to sort
them out along a spectrum. So here we go. I’ve made a spectrum from
leave the algorithm alone over to no, no. What just happened is not good. We should adjust or penalize
the algorithm as a result. And here’s where I think
we ended up in our voting. For the search on Jew
within Google, which just led an accusation,
that’s where people were most supportive
of Google’s action in, in fact, doing nothing. Leave the algorithm alone. For vaccinate my child,
which have results that include
inaccurate stuff, there was a little more enthusiasm
for it but not much. For the mugshots
being de-emphisized, I think I heard more support
for Google’s ultimate decision there. Probably more
after you found out that they made that decision. And for Ferguson or another
city having its updates about violence being
demphisized– whoa. Do not want to
see Facebook, even in the name of public
safety, making changes. And finally, for
Facebook putting a thumb on the scale on election,
that was the third rail. That was where people were
like, that’s beyond the pale. So how do we make sense
of our range disclosed as a group of feelings about
this other than to say, I don’t know? We just seem to feel different
ways about different things. Well, here’s a couple
ways of thinking about it. One is offered up
by Randall Munroe. Randall Munroe is the
creator of the XKCD comic, and Randall has lots of
time and lots of his comics are about taxonimization and
categorization and stuff. So for example, he likes
to do comics that draw out how things work, and he’s also
very sensitive to the role that technology
can play in things. So he shows how the actual
nuclear chain of command may work. And in that instance, and this
is now going beyond the comic, but Randall has suggested
that maybe the technologies that we’re talking about,
across these examples, some we think of as tools and
some we think of as friends. And the things we
think of as tools we may expect to
work a certain way, and if they don’t,
we react in a way different from how we
react if our friends don’t act a certain way. And these technologies are
evolving under our noses, sometimes pushing the line
between tool and friend. So maybe this distinction
between tool and friend or tool and guide is
something worth exploring. So here’s one hypothesis. That we start and say,
if you’ve got a tool, maybe that’s where we’re
least interested in changing the way the tool works. For instance, in the US
nuclear command and control, the job of the
engineer is to build a tool that does one thing
and then be out of the picture and that’s it. That’s the tool. It’s deterministic. And so we are least
interested maybe in changing when we
see it as a mere tool and it operates according
to certain principles. We don’t want to fuzz it
in specific circumstances. But as something moves
more and more to a friend, and maybe the Facebook feed is
more and more like your home. It’s like a place you visit
online that’s your place, and it’s getting filled
by Facebook acting in a capacity of trying to
give you stuff you really want for which there’s no
baseline feed, a generic feed, against which yours is compared. Your feed is
personalized to you. Facebook perhaps is acting
as a friend in that instance, and that’s why we would be least
interested even when there’s a clear public safety mandate
hypothesized, not in having anything going on
underneath that’s different from somehow
a general rule, which I’m calling the roulette wheel. An algorithm so complicated
that even the operator, the designer of the algorithm,
doesn’t know what’s going on. So that’s one hypothesis. Now of course, there was
disagreement in the hums as we went along. I don’t want to
assume unanimity. If you find yourself disagreeing
with where I placed a given example along the spectrum,
one account for that might be what I’m
calling toxicity. There’s a sense that
some information or result is sufficiently
toxic, however you want to define
that– wrong, evil– that this is not about
a user getting something from a friend that
is relevant to what the user searches all the time. It’s about no, there’s some
stuff that shouldn’t exist. So for example, if
the Google algorithm were turning up child
abuse images in response to searches that you
might infer were looking for exactly that,
we would probably, I would assume, not have
much objection to Google adjusting the algorithm
to lose those images. And in fact, possibly to
make reports for those who provide the images online. That’s an example to
me, if that’s how you’re feeling, of toxicity trumping. And for those who think
that maybe this should be a website that’s a hate site,
in Europe it might be quite trivial to say, that should
be pushed down in results or omitted entirely. Website stormfront.org
is a neo-Nazi site. It does not appear in google.de. It has not appeared in
google.de for at least a decade under German law as
Google understands it because it’s just
deemed toxic in Germany, and there’s not a First
Amendment to stop it. I guess my point here is
that that reaction is one not specific to the technology. That is a classic
under-the-law reaction. Here’s some stuff in society
that we’re not going to allow. Our government
decides what it is. It gets tested against
some freedom of speech test perhaps in the courts, and if
it survives that’s how it goes. And then those who are
trafficking in the speech have to abide by the ruling. That’s the toxicity trump. Now other ways of
thinking through, besides tool and friend,
what can we think of? And for this, I want to just
show some other thinking going on by others. As I said, from other
fields, from other quarters. One such person
is Frank Pasquale, here with Oren Bracha,
in I think 2008, writing this article, Federal Search Commission?
Access, Fairness, and Accountability in the Law of Search. Now this article hews to
Betteridge’s Law of Headlines, which is to say, if there’s
a question in the headline the answer is always no. So he does not call for a
federal search commission in this 63 page article. In fact, it spends most of
the article just trying to lay down track
to say, Houston, we have a problem here. You readers are used to saying
leave search engines alone. They’re just doing their thing. Don’t interfere. But there’s a bunch of stuff
on the horizon predicting some of the case studies
and hypotheticals that I’ve already mentioned. Unfortunately for solutions,
they don’t have a whole lot. They’re just wanting to
put the problem on the map. And this is the penultimate
paragraph of the article, and it basically
says, just don’t shoot me for saying maybe there
should be a form of regulation. What it should be I don’t know. TBD. And in fact, Frank has since
come out with this book, just out this month,
The Black Box Society. And we’ll be presenting on
it at the Berkman Center next Tuesday. This is your sponsored moment
of the lecture– you’re welcome to attend– in
which he’s talking more about really a much more
capacious view of the problems here. Frank is thinking a lot
about power dynamics, about colonialism, about
capitalism and such, but a very interesting view. Not as much driven by
the technology as what we’re talking about today. Christian Sandvig
is another scholar who’s been thinking a lot about
this, coming from a sociology and communications background. He also has wonderful
titles for his talks. But the thing I wanted to
draw out of Christian’s work is he’s come up
with another axis. Not tool or friend,
but he says, here are two questions to ask about
a technology with what we would call algorithmic consequences. First, is what it’s doing
predictable by the designers? Do they know what it’s going to do? And second, can users discover
what the designers can predict? And the choices that
he’s most interested in are when it is predictable but
not discoverable by the users. And what would an
example of that be not in the technology realm? Probably the
Harvard-Yale game of 2004 where at halftime, members
of the Harvard pep squad went up and down the
aisles in Soldier’s Field on the Harvard side
distributing colored cards so that at the right moment
they could be given a signal, like a North Korean rally,
hold up their colored cards and spell out a message to
the other side of the field. Which, at the signal,
turned out to be different than expected because it was
not the Harvard pep squad. It was the Yale pep
squad in disguise, and this was the resulting
message at the game. So this is a great example
of an algorithm that is entirely predictable by the
designer but not discoverable by the users until
it’s too late. So what would that
mean technologically? Well, I think for almost all of
the examples– five at least. I won’t go into the half yet. All of them are pretty much in
this category of predictable by the designer,
if the designer is intervening, but not so readily
discoverable by the user. You won’t know what
you don’t know. And that means solution wise,
if you think any of these would be a problem,
if you’re not in favor of this kind of
action, you might say, well, let’s just make
things discoverable. This naturally gravitates
towards a transparency rule of some kind. And sure enough, we’ve
had people, including Nicholas Diakopoulos, who
coined the phrase algorithmic accountability. It has alliteration
going for it. Nicholas has written a
wonderful paper with advice to journalists who are trying
to deal with issues like this as they write stories and
talks about how important it is to try to encourage
transparency among the likes of Facebook or Google. And when you don’t get it, how
to try to game the process so that you might be able to
reverse engineer what’s going on and find out
what those cards are going to say before you
actually hold them up. But, now, transparency
has its limits, and these are limits well known
to legal scholars and policy folks. There are all sorts of times
when disclosure isn’t enough. I will often ask a torts
class, if there’s rancid meat in the supermarket and
the supermarket knows it but the shopper
doesn’t– seems to meet these kind of criteria– is
transparency the way to go? Stick a sign and it’s
like, rancid meat half off? Or is it like, maybe it
shouldn’t be on the shelf at all? This is a test for your
free market libertarian kind of thing. But we tend to like
to live in a world where we don’t
always have to read the fine print not to fall ill. And similarly, there
might be challenges to transparency, particularly
when, on every search, telling you what you
don’t know is exactly the thing hidden. There’s so many
things you don’t know when there are 10 hits on
a million possibilities. Very hard to figure out what
a transparency rule would look like there. And that’s why I would
shift to other approaches. Complementary approaches. And one of them is inspired
by Jack Balkin– a conversation with him, actually, at a symposium
thrown by the Harvard Law Review here last
spring, is the idea of information fiduciaries. Information fiduciaries is a
concept not yet finding itself in law anywhere. This is just an idea
so far that says, we have relationships
in the real world that are doctor patient,
lawyer client, maybe financial advisor
and mark– I guess that’s what you’d call it. By the way, financial advisors,
you know there are two types? Fiduciary and non-fiduciary? And fiduciary just
means they have a duty to put your interests first. That’s all it means. And there are some financial
advisers who have that legal duty and others who don’t. And if you have one
that does, the idea is that adviser would not
recommend a jackalope ranch to you as an investment just
because the person is getting a commission on
shares of the stock, and the one without
the fiduciary duty would be allowed to do that. And it’s just transparent
in the sense of by the way, I’m not your fiduciary. Now buy the ranch. And it’s just weird to
go online and be like, which kind of advisor
is right for you? It’s like in what planet
is the second kind of advisor right for you? So if we have these kinds
of relationships backed up by a legal duty, differing
from one field to another, couldn’t we have something
similar for information fiduciaries, for
Google and Facebook, that have to put
your interests first? Or at least not
theirs ahead of yours? And what would that
mean on our spectrum? And maybe we should think less
about any specific case study and just more
about what it would mean as we would distribute
it between tool and friend. And here’s my thought. Have some kind of lighter
duty but more than no duty if you’re thinking
you’re just a tool, and if you’re a friend there’s
going to be a heavier duty because friendship carries
certain responsibilities with it. If you’re following
Dumbledore down a dark alley, you want to believe that he’s
kind of on your side, which most of the time he is. What would an example
be of a lighter duty? How about don’t be evil? That’s not a bad way of
thinking about a duty, and it’s, in fact,
one that Google itself, of course, embraced. It’s exactly why Google was
reluctant to change the output of a search on the word Jew. They have a certain pride
about the engine itself and what it means and
about its neutrality, and they’re going
to stick to it. So here’s another
example of don’t be evil. If we had a
lightweight information fiduciary on this
tool, the Kinect. Microsoft recently filed for
a patent on the Kinect, 2011. And here’s the figure. There are two things
worth noting here. The consumer detectors
in these examples. The Kinect is a
consumer detector. It detects consumers. And what that means is
that if you order up a pay-per-view movie and there
are 10 consumers in the room it should charge
more than if there’s only one consumer in the room. That is an example,
if not outright evil– this gets to a discussion
of price discrimination that is beyond the
scope of this lecture– but is something for
which, at this point, it’s not clear the Kinect
is working for you anymore. It is a tool that is not
only not being your friend, it’s got someone else’s
interests in mind. And that’s the kind of thing
that maybe could be precluded. Now, when we go over
to the heavier duty, what might that mean? And I guess I find
myself turning back to tort a little
bit and thinking of something that
resembles a recklessness standard of some kind. And for that I would say
it’s when it’s more a friend. So this isn’t just for
organic search or something. It’s when you’ve got
something more than just regular old search results. So this is the Google
Knowledge Graph project. You have been exposed to the
Knowledge Graph sometimes. This is a search for Avishai
Margalit, an Israeli professor emeritus in
philosophy, and here is just the regular results
for which Google basically takes no responsibility. Why should they? This is just a tool. The only thing I would
say here is don’t be evil. This is Knowledge Graph at work. This is Google
filling in what it knows, through
automated means still, about him, maybe
drawing from Wikipedia. Now here’s something notable. It says that he
died in 1962, which would mean there was a
really low bar for serving as the George F.
Kennan Professor at the Institute for
Advanced Study in Princeton. But in fact, Avishai
is very much alive, and he’s a puzzled man. He’s like, where do I
go to get my life back? And we don’t have much
of an answer for that. There’s nowhere
to go on Google– you can get updates about him. If he should come back to life,
you’ll be the first to know. And what happened was they drew
it from Wikipedia to begin with and Wikipedia had it wrong. And then Wikipedia got
fixed and got it right, but Google never bothered
to visit and draw in the new information.
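The underlying failure mode is ordinary cache staleness– copy a fact from a source once and never check back. Something like the sketch below, with invented names, since Google’s actual pipeline is not public:

    # A sketch of the failure mode described: a knowledge panel built by
    # copying a source once and never revalidating. All names are invented.

    import time

    class KnowledgePanel:
        def __init__(self, fetch_source, ttl_seconds=7 * 24 * 3600):
            self.fetch_source = fetch_source      # e.g., a Wikipedia lookup
            self.ttl = ttl_seconds
            self.cached, self.fetched_at = None, 0.0

        def facts(self):
            # The bug: without this staleness check, a wrong "died: 1962"
            # copied once gets served forever, even after the source is fixed.
            if self.cached is None or time.time() - self.fetched_at > self.ttl:
                self.cached = self.fetch_source()
                self.fetched_at = time.time()
            return self.cached

    panel = KnowledgePanel(lambda: {"name": "A. Margalit", "died": None})
    print(panel.facts())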
What responsibility, if any, would Google have over here–
not over here, friend not tool– to do something here? I think that’s a fair question
worth asking, and a question that doesn’t lead to madness. The way they’re trying
to impose some kind of bizarre standard of being
responsible for everything in the world. This division
helps us say, look, if you’re venturing
into this, if you’re going to play
Dumbledore, you gotta learn how to work a wand
a little better than you do right now, would
basically be the idea. And you can see the same
thing about appendicitis. You Google, what
is appendicitis? You’re just getting
whatever comes in on the sedimentary
wave system below. But if you’re looking here
through Knowledge Graph, this had better
represent appendicitis. It doesn’t have to
be an actual suffer. It could be a Corbis image. But if this has
nothing to do with it or if down here
it was like, this is known to result from
an imbalance of the four bodily humors, I would object
to this and say, come on Google. Go big or go home. This is not the way to be. But now, fiduciaries can’t be
the solution to everything, and for that, it’s worth noting
that Uber presents a really good example of the problem. Now Uber is easy to think of
as actually a great example for a fiduciary at first because
you order up a car service and it should come and you
see the drivers get rated. That helps you out. You can make a
choice, although you’d only know the driver’s rating
after the driver’s assigned to you. Whatever. But there was a moment where
it became known that in Soviet Russia, the car rates you. And in fact, through a
bug on the Uber site, you could find out
how Uber drivers were rating you as a
passenger because they did. And this spread on Twitter
in the summer of 2014, so I guess three things
happened in the summer 2014. This became known. And here’s somebody
saying, my Uber passenger rating is just a 3.9, so my life
as a C plus person continues. And my Uber rating is 4.8. I’m racking my brain to
figure out how I’ve been a less than perfect passenger. Mind you, this is out
of five, but whatever. We aspire to nothing short
of perfection in our lives as car passengers. And this is my favorite. 2.6. Guess it’s not cool to
always yell, hold it steady, and roll out of
a moving vehicle. We should say, by the
way, there are now services that can buy
that information from Uber and use it in jobs selection
and other sorts of things, but that’s a
different talk having to do with scaring you about
personal privacy, which can always do at any time. So this is an example,
though, of a two sided market. Uber is not just in the business
of helping passengers get cars. It’s in the business of
helping drivers get fares. So to whom do they owe the duty? Is it to the driver
to get a good ride, or is it to the passenger
to get a good driver? Both? Well, what happens
if they conflict? Thinking this through with
fiduciaries alone, I think, doesn’t exactly
solve the problem. And you can start
to see insidiousness creeping in either direction. Suppose on the
basis of the color of your skin you get a lower
rating, either as a driver or as a passenger. Is that fair? How would you know? How would Uber know? Is there anything that
could be done about that? And you can generalize
it away Uber even to job sites, which more and
more are taking on concierge like, friend like, non-tool like
roles in saying, you know what? We’re going to take in
2,000 or 20,000 applications for a barista at the Starbucks
down the street, but Starbucks, you don’t have to
go through them. We’re going to offer
you three, and they’ll be on the basis of
a complex algorithm that even we’re not sure
how it will turn out, but it will be
trying to figure out how to serve you and
get you the baristas you want through your
ratings of them after we fill earlier positions. That could allow latent
racism or other discrimination to creep in. How do we think about that? Well, in this case,
the law has had a lot to say about that
over the years. This is a 1936 map of
the city of Philadelphia. The original redlining
done without the aid of computational services. These are just the
bankers carving up the city as to where they would
and wouldn’t be making loans. And it’s this kind of
behavior through the decades that led to things like
the Community Reinvestment Act of 1977, which
said if you’re a bank you can’t do that. And in fact, you
have to look to see– we’re going to audit you every
so often and see where are you and aren’t you
making loans, and how is it matching to the kinds of
applications you’re getting. And as a society,
those behind this law have found it an
acceptable intrusion. I think it’s almost a version
of the toxicity point. This is a form of
toxicity– discrimination– and that’s going to be a trump. Maybe you’re a
tool, maybe you’re a friend, but toxicity trumps,
and that kind of discrimination is not acceptable. The only thing to
worry about here is discrimination
can be so widespread and across so many
services, this really works best as a stove pipe solution. You pick a narrow domain. One where you can
compare apples to apples, discern a disparate
impact, remove factors that might
account for it that don’t have to do with discrimination. And that means
housing, employment– very specific fields in
which that might happen. Either through mechanisms
like this or there’s some really interesting
technical work going on in the computer
science side of things where researchers like
Cynthia Dwork and others are looking for formulas
that could prepare data before it goes into
an algorithm, like that of monster.com, to
see to it that the data has been situated so
that like people will be treated in like ways. And I won’t attempt to summarize
how it works in this paper except to say it is
extremely complicated. But that’s the kind of work that
Cynthia and others are doing to say, could we
have algorithms that would try to get the stage
set so you don’t have to go in and try to audit things
later and be really intrusive but still try
to have some assurance that you won’t have the
kind of discrimination that we don’t like.
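To give a flavor of that work– and this is a drastically simplified toy, not Dwork’s actual construction– the core demand in the “Fairness Through Awareness” line of research is a Lipschitz condition: candidates who are similar on task-relevant features should receive similar scores. A checker for violations might look like this, with made-up features and scores:

    # A drastically simplified toy, not the paper's actual construction.
    # Idea: "like people treated in like ways" -- two candidates who are
    # similar on task-relevant features should get similar scores.

    from itertools import combinations

    def distance(a, b):
        # Task-relevant similarity between two candidates (toy metric).
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    def lipschitz_violations(candidates, scores):
        """Flag pairs scored more differently than their similarity allows."""
        violations = []
        for i, j in combinations(range(len(candidates)), 2):
            if abs(scores[i] - scores[j]) > distance(candidates[i], candidates[j]):
                violations.append((i, j))
        return violations

    # Candidates as (experience, skills-test) features scaled to [0, 1]:
    candidates = [(0.90, 0.80), (0.88, 0.82), (0.20, 0.30)]
    scores = [0.95, 0.40, 0.35]   # candidate 1 scored far below near-twin 0
    print(lipschitz_violations(candidates, scores))  # -> [(0, 1)]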
That’s a tour of predictable by designers and discoverable by users. But really, the last
riff that we were just on gives a hint of
the harder problem. What happens when
stuff isn’t even predictable by the designer? The discrimination
is not intentional. It’s just an emergent
characteristic of the system. And for that we have
a real challenge. And there are lots
of systems that are emergent in their properties. It is said– it might
be apocryphal, but not so much so that it
stopped Salon from writing this article about it–
that House of Cards was not like they sat around
the writers room and were like, you know what we need? Is a house of cards. But instead, they went
through all the data of people who like this, like that, and
the machine turned the crank and it was like,
Kevin Spacey now. You’re the machine. And Kevin are you free? We need you to sit in
the Lincoln Memorial. We don’t have a plot yet. That’s the kind of
thing that is not predictable by the
designer, certainly not discoverable by the user, and
we’re just like, I don’t know. I am addicted too. It’s just the way it is. And it’s the stuff
that gives you ads like these from
lowermybills.com where you’re just like,
why is that the ad? How is that possibly
an appealing ad to click on to get yourself
some car insurance? And the answer from LowerMyBills
was like, we don’t know. We just took some stuff
and kept putting it on, and you know what? People click on this stuff. You’re just like, well, OK. Now if you feel that your
life isn’t Dada enough, this is probably a good
trend, but for the rest of us, you start to worry. And I don’t know if the
problem is more that it works or that it doesn’t work,
as in this instance where, if you were shopping
for The Official LEGO Creator Activity Book, here’s
the perfect partner. American Jihad– The Terrorists
Living Among Us Today. It’s like, I’m going to back
slowly away from that book. That’s the kind of thing
that just– I don’t know. And there’s so many
correlations with big data that are entirely spurious,
and yet they’re very tight. And how are you going
to know when it’s right and when it’s not? I should say, by the way,
this correlation, 0.99376, and many others like
it are offered up on Tyler Vigen’s website, HLS
class of ’16, so go Tyler. I don’t know if
you’re here today. You’re probably correlating
things as we speak, so very good. And this is a great example
of not just an algorithm going awry, but
what happens when algorithms start to encounter
other algorithms out in the wild? This is a wonderful example from
Amazon where the normal market price of this book, The
Making of a Fly, is $28.95, but there was a
strange period of time when it turned out it was
$1.7 million at its cheapest. It’s like, why is this
book so expensive? Here’s the next seller
up, two point– well, this is looking
like a better price. So what’s going on with this? Well, we did some reverse
engineering of the algorithm as a community. This is great, from
Michael Eisen’s blog. And he noticed that the
two sellers were actually using each other’s price
point to determine their next. So whatever profnath charged,
bordeebook would charge 1.27 times that. And the next day,
profnath would be, whatever bordeebook’s charging? 0.99 of that and
I’ll undercut him. Now, why is bordeebook
like, I’m going to charge some more that
will differentiate me? Because he doesn’t
have the book. He’s waiting for somebody stupid
enough to click on him anyway, at which point he will
buy the book from profnath and pocket the 0.27 markup as profit. So it’s actually so
crazy it just might work, but it’s also just
entirely crazy and we end up with
$2 million books.
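If you want to watch the arms race run, here is a minimal simulation using the rounded multipliers described above (the constants in Eisen’s write-up differ slightly, and the starting prices here are illustrative):

```python
# Each day the two bots reprice off each other's last posted price.
profnath = 28.95    # starts near the book's normal market price
bordeebook = 35.00  # the rival, who doesn't even have the book

day = 0
while bordeebook < 2_000_000:
    day += 1
    bordeebook = round(profnath * 1.27, 2)  # always price above the rival
    profnath = round(bordeebook * 0.99, 2)  # always just undercut

print(f"day {day}: profnath ${profnath:,.2f}, bordeebook ${bordeebook:,.2f}")
```

Compounding at roughly 26% a day, it takes only about seven weeks to turn a $29 book into a $2 million one.

I can’t resist but share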
one more example of this, and that is a real shirt
available on amazon.com from the aptly named
store Solid Gold Bomb. And it says, keep
calm and rape a lot. And so somebody noticed this on
Amazon and was like, what now? And wrote to Amazon
how dare you. Amazon’s like, we’re
just Google for stuff, we don’t know what
we’re selling, and they wrote to
Solid Gold Bomb. And here’s the explanation–
it rings true– that Solid Gold Bomb offers. They said, well, we
actually have an inventory of 1.4 million shirts with
different things on them, and when we say inventory
we mean nothing. We mean that we only
generate by algorithm words on a shirt that
start with keep calm and then wait to see if
anybody buys any of them. And if they do,
we’ll go to CafePress and have the shirt made
and send it to them.
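A minimal sketch of how such a catalog plausibly gets built; the word lists here are hypothetical stand-ins, and the real generator reportedly drew on an unvetted dictionary of verbs, which is exactly how the offensive combinations slipped through:

```python
# Cross a verb list with a modifier list, list every combination for sale,
# and print nothing until somebody actually orders.
from itertools import product

verbs = ["drink", "dance", "code", "nap"]       # imagine thousands, unreviewed
modifiers = ["a lot", "on", "quietly", "first"]

listings = [f"KEEP CALM AND {v.upper()} {m.upper()}"
            for v, m in product(verbs, modifiers)]

print(len(listings), "shirts listed; zero exist until somebody clicks buy")
print(listings[:3])
```

This is an example, again,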
of an algorithm that just operates on
sheer volume and lets the search engines sort it out. Undiscoverable, really,
even by the makers, as to what they are, in
fact, doing or offering up to the public. And don’t think academia
is immune either. This recent scandal
is quite notable. When 120 gibberish
papers appeared, not just randomly online,
but in Springer and IEEE. These are journals, I guarantee
you, over at the library, we pay your tuition money to
subscribe to these journals. And it does make me think
we should get our money back because we have enough
of our own gibberish. We do not need to import it. And when I say gibberish
let me be clear. These are words that have
no meaning whatsoever. This is not like a
poorly written paper. This is a paper that
a human mind never experienced or wrote. In fact, they were written by
this MIT gem called SCIgen, the automatic computer science
paper generator, so I gave it a whirl, and
here’s BIELID– A Methodology for the
Improvement of Rasterization. And it’s like a full paper
I could submit to a journal.
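SCIgen’s authors describe it as generating papers from a hand-written grammar. Here is a toy, template-level version of that idea, with word banks invented for illustration; the real grammar is vastly larger and produces whole papers, figures and citations included:

```python
import random

# Tiny invented word banks standing in for a real generative grammar.
methods = ["BIELID", "Scatter/Gather I/O", "Decoupled Epistemologies"]
goals = ["Improvement", "Refinement", "Emulation"]
topics = ["Rasterization", "B-Trees", "the Turing Machine"]

def paper_title():
    return (f"{random.choice(methods)}: A Methodology for the "
            f"{random.choice(goals)} of {random.choice(topics)}")

print(paper_title())
# e.g. "BIELID: A Methodology for the Improvement of Rasterization"
```

And it felt so wrong to even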
generate this on the screen. It felt like thought crime. I intentionally
misspelled the names so that there couldn’t be some
scandal six months from now about like, that’s not– or
that it would become my most famous and well cited paper. So this is a problem
within academia. That somehow it would make
sense for people to do that, and for journals
to just publish it. Hello? And how did Springer
and IEEE react? Well, rest assured
they have now, after an intensive
collaboration, come up with a SCIgen
paper detector. So you can rest assured that
another machine has tried to detect the first
machine to put the nonsense paper into the
annals of human knowledge. So one last point on the
academic kind of oddity front, which is in 1996 now,
quite a long time ago, Julie Cohen, a wonderful
scholar of privacy law and personhood at
Georgetown Law, wrote this paper about a
right to read anonymously. She was worried that a
right that we just took for granted– you could open a
book, read it, close the book and get rid of it,
and nobody would know you had read the book
or how far you had read in or what you thought about it–
would be eroded by ebooks. And that suddenly in Soviet
Russia the book reads you. And now years later it’s
like, well, of course it does. In fact, it’s a helpful
way for the teacher to check in on the student and
see how the student is doing. But here, I’m not as concerned
about the student’s right to read anonymously. I am concerned
about that, but I’m thinking more about what this
kind of iterative A/B testing will do to us as scholars. We’re going to write stuff, and
I cannot tell you that I could resist the temptation– if I
find out that it’s the fourth paragraph of chapter three
of The Future of the Internet where I lose them, that’s
where they all stop reading, I’m going to put a picture of a
cat right under that paragraph and be like, there’s more
where this came from.
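The mechanics are hypothetical only in the details: a minimal sketch, with invented reader logs, of the telemetry that would tell an author exactly where to put the cat:

```python
# Last paragraph reached by each reader of chapter three (invented data).
from collections import Counter

last_paragraph = [4, 4, 4, 3, 4, 7, 4, 12, 4, 4]

drop_offs = Counter(last_paragraph)
paragraph, readers = drop_offs.most_common(1)[0]
print(f"{readers} of {len(last_paragraph)} readers stop at paragraph {paragraph}")
# -> paragraph 4 is where you lose them; insert cat photo accordingly.
```

And these are ways of adapting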
to a market demand, which scholarship isn’t always supposed to serve. There are hard truths that
maybe 90% of the people aren’t ready to
hear yet and they’re going to throw that book away. But the 10% that tough
it out and keep reading? Those are the people you’re
actually trying to reach. And these are the kinds of
algorithms affected by humans that we are going to face
as academics asking us about our own identities. Now here’s just a snapshot
from earlier today of Google searches. 2.7 billion searches
so far on Google today, adding up to about
1.2 trillion searches a year. This is, remember, double X: not
really predictable by Google, not discoverable by the user,
what goes in or out of it. How can you impose
any standard on Google for what searches say
when we’re talking about this volume of stuff? I don’t think that you can. So here’s a last crack
at another approach that maybe could help. And the idea is,
maybe there should be more than one search engine. I think we like to sort of
chortle a little at Bing because Microsoft had it coming. After all, 20 years ago
they did some bad stuff. But there’s a thought
that says, you know, there should be more than just
one place for your searches, and there should be more
than just one social network. This is not what should happen
when Facebook goes down. It should not be national news. Facebook is down,
but there’s no way for you to know because you
don’t watch the news anymore. You’re just hitting
reload on Facebook. We need competition. And there were days early on
when there was competition. So people remember Dogpile? Some? All right. Those were the days. Dogpile was the search
engine of search engines, and it searched multiple
search engines at once.
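A minimal sketch of that metasearch pattern; the engine names and the canned fetch function are hypothetical placeholders, not real APIs:

```python
# Fan one query out to several engines and keep the answers side by side,
# so any one engine's quirks become visible by comparison.
from concurrent.futures import ThreadPoolExecutor

ENGINES = {
    "engine_a": "https://search-a.example/api",
    "engine_b": "https://search-b.example/api",
    "engine_c": "https://search-c.example/api",
}

def fetch_results(url, query):
    """Stand-in for an HTTP call; returns a canned ranked list here."""
    return [f"{url}?q={query}&rank={i}" for i in range(1, 4)]

def metasearch(query):
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fetch_results, url, query)
                   for name, url in ENGINES.items()}
    # Results stay labeled by engine, so disagreements are easy to spot.
    return {name: f.result() for name, f in futures.items()}

for engine, hits in metasearch("used law books").items():
    print(engine, hits[0])
```

And you know, if one is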
putting a thumb on the scale– or forget the
thumb on the scale. We’re talking about
x and x, so it’s not even predictable to the
engine where it’s screwing up. Have multiple engines
with competing engineers that end up in different
cul-de-sacs with their A/B testing. And the users can then see,
from those different engines, differences in
what they’re doing. And there was a time when there
was more than one auction site. There was Yahoo
Auctions, for example, and this site,
Bidder’s Edge, would let you search all of them
simultaneously and thereby get the best deal on the
thing you’re looking for. What stopped Bidder’s Edge? A lawsuit. A lawsuit under the
idea of trespass to the chattel of eBay. That the hamsters on
the eBay server wheels were running too fast serving
up pages for Bidder’s Edge to scrape and therefore,
Bidder’s Edge had to shut down. And that accelerated
the primacy of just one natural monopoly through network
effects for your auctions. If there were multiple sites
searchable through a dashboard, that could be much more helpful. Or just think about this. Right now if I tell Siri I
need a ride home– all right, Siri hasn’t gotten
around to that yet. Siri is just like,
I feel you, but– [LAUGHTER] –should it just be, fine,
Uber is what you’re getting? Or should it be like, all
right, here are five services. You can compare the prices. If you’re a driver
for Uber and you don’t feel you’re
getting a fair shake, should you have to
leave Uber or should you be able to be a driver on your
phone for multiple services and be like, I’m ready to take
a call from any of the five pipelines that come in. I know I can do
only one at a time, but I’ll just do the
one that comes next. That kind of competition
is what we stand for. It’s why we trust the market. It’s why we feel comfortable
being chary of regulation. But the reality is shaping up
not to have that competition. And to imagine the law assisting,
as it did in the Bidder’s Edge case, in not having that
kind of competition strikes me as very wrongheaded
and something that would be an easy, easy fix to make. Now there’s one
more thing I want to say on that front, which
goes more fundamentally to the nature of competition. Because it’s not just one
company against another. It’s the idea of collective
hallucination, open facilities, versus the more
traditional proprietary and perfectly acceptable
so-called closed ones. And here’s an example
to think about that. OpenTable is something
probably many or all of us have used at one
time or another. Many restaurants advertise
their tables through OpenTable. And it turns out that
restaurants hate OpenTable because now that
they’re signed up and they need the stream
of reservations coming in, because it’s pretty much
the eBay of restaurant reservations, they’re
finding the prices to be so high they’re
making almost nothing off the tables from OpenTable. Did there need to be an
OpenTable and the only hope being that, well, let’s
have three of them and hope that they compete? I think it could
have been different, and for that it’s worth
looking at an idea that has been percolating
for years, really, since the advent of the web. And that’s the idea
of the Semantic Web. And I’m now at the
point in the talk where I worry about
an Annie Hall moment where Marshall McLuhan comes
on as like, you know nothing of my work, because I
see that Tim Berners-Lee, the inventor of the web, is here
so I’m going to tread carefully. But Tim didn’t just
invent the web. It’s like, in this
season, talk about dayenu. It’s like, no. He did more. He actually went so far as
to conceive of something called the semantic web, which
as best I can understand it, treading carefully,
the semantic web says, what if people putting stuff
online on web pages, setting stuff out for the world to
see, were a little bit more structured about their data? They made it easier
for a crawler– a machine crawler, not a human
so much– to crawl that data and get a sense. I see, this is a train timetable. Oh, I see, this is a list of tables
available at a restaurant and the times at which
they’re available. If you could do
that as a restaurant rather than the traditional
approach of a flash site that nobody can use, you
could see anybody writing an easy crawler to offer
to users that would say, here are the tables
available at restaurants. If you’d like to
reserve one, we’ll shoot a note through to
the respective website of the restaurant. No intermediary of
any particular kind. Just a web of open,
structured data.
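What “a little bit more structured” can mean in practice: a minimal sketch in which a restaurant publishes its open tables as machine-readable data that anyone’s crawler can parse, no intermediary required. The shape is loosely modeled on schema.org-style markup; the specific fields here are illustrative, not a real standard:

```python
import json

# What the restaurant publishes alongside its human-readable page.
page_data = json.dumps({
    "@type": "Restaurant",              # illustrative typing
    "name": "Chez Exemple",
    "reserveUrl": "https://chez-exemple.example/reserve",
    "openTables": [
        {"time": "2015-04-02T19:00", "seats": 2},
        {"time": "2015-04-02T20:30", "seats": 4},
    ],
})

# A third party's crawler can now answer 'who has a table tonight?'
listing = json.loads(page_data)
for table in listing["openTables"]:
    print(f'{listing["name"]}: {table["seats"]} seats at {table["time"]}')
```

That’s the kind of promise, I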
think, that the semantic web could and still does
offer for problems of the sort that
otherwise we’re looking at almost too narrow a
conception of competition to solve. And that’s an idea
that quite naturally is coming out of academia. That’s coming from
a different quarter. Academia comes up with solutions
to problems for which there is not an obvious market solution. Because if there is, then the
market can take care of it. You don’t need
academia unless you see a problem so big
the market can’t raise the money to figure it out. Or where you’re
creating or ideating a common good that if
people can connect with and say, yes, I’ll
be a part of that and get some momentum,
possibly without only the Weberian animal
spirits urging them on, you then get something that the
commercial sector picks up on. That’s the story of the internet
over CompuServe or Prodigy. It’s the story of the web over
the stovepiped alternatives. And this is not something
that has an ending. It’s a constant kind of thing. It’s what OpenTable could be. It’s the story that Yochai
tells in his Wealth of Networks. So it’s academia that
I think has that role, and that’s been recognized. Here are two authors saying that
search engines use advertising and that’s a problem. That’s a source of
bias, and there’s not an easy way for the market
to strip that bias out all the time. And that’s why they
conclude, “we believe the issue of advertising causes
enough mixed incentives that it is crucial to have
a competitive search engine that is transparent
and in the academic realm.” Who wrote these words? Lawrence Page and Sergey
Brin in 1998 in their paper introducing Google to the world. That’s the spirit. I gotta say, I completely
agree with Larry and Sergey. [LAUGHTER] That’s what we’re looking for,
and it’s up to us to step out. Now, Google has still
some of these feelings. This is, again,
why they have pride about just changing search,
and there’s no intimation yet that they’re changing organic
search on the basis of somebody writing them a check. But Google is hedging its bets,
you might say, a little bit. Not that long ago, Eugene
Volokh, law professor at UCLA, announced that he had
written a white paper. First Amendment Protection
for Search Engine Results. You can see where this is going. You cannot reach in and demand any
change to search engine results because it would violate
the First Amendment. By the way, I wrote the
paper as an advocate and not as a disinterested academic, but
you might find it interesting nonetheless. I.e., I may not agree with
the paper I just wrote. [LAUGHTER] For which I so much want to
ask Eugene, as an academic, what do you think of the
paper you just wrote? He’s like, I can’t answer that. I cannot tell you. And it’s funny because if you
actually go to the paper– here’s the first
page of the paper. This White Paper was
commissioned by Google but the views within it should
not be ascribed to Google. Which I’m like, now
I’m really puzzled. Who wrote this paper? Whose view does this
paper represent? And the answer is no one’s. It’s not Eugene Volokh’s view. It’s not Google’s view. This is just a view
that arrived from Mars that the First Amendment should
protect search engine results. Which, of course, the paper
is then credulously reported as Professor Volokh says
that the First Amendment protects search engine results. So there’s an intertwining
between academia and dotcom that’s a line we
need to navigate. This is the final set of
remarks I want to give, which is– of course, I
couldn’t give this talk without talking
about this study. This was a study in the
Proceedings of the National Academy of Sciences, one of the
most distinguished journals, and it’s people from the core
data science team of Facebook and some people from the
Communication Information Science departments at Cornell. And they just blew through
Betteridge’s Law of headlines. They’re like,
experimental evidence of massive scale
emotional contagion through social networks. This is like, the AI threat
is upon– no, no, no. Here’s what they mean. They meant that they found,
in cooperation with Facebook, manipulating
people’s news feeds away from what they were otherwise
destined to be, that they could take
some feeds, check the things that were
about to go into them, and evaluate them automatically
for whether they were happy or sad. And for some people they
only gave them happy things, and for other people they just
gave them more sad things.
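A minimal sketch of the mechanics as described, with a crude word-count scorer standing in for the study’s sentiment classifier (reportedly built on standard sentiment word lists); every name here is hypothetical scaffolding:

```python
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def valence(post):
    words = set(post.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def filtered_feed(candidate_posts, condition):
    """condition is 'reduce_negative' or 'reduce_positive'."""
    if condition == "reduce_negative":
        return [p for p in candidate_posts if valence(p) >= 0]
    return [p for p in candidate_posts if valence(p) <= 0]

feed = ["What a wonderful day", "Feeling sad and awful", "Lunch was fine"]
print(filtered_feed(feed, "reduce_negative"))
# -> only the neutral-to-happy posts survive for this user.
```

And they found that people that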
had happy things in their feeds tended to share happy stuff in
turn, and people with sad stuff shared sad stuff. This just in. That was what they found. This led to an uproar. There was something
about this that had people feeling like Gandalf
had just turned and strangled Frodo. It was just like,
this is not cool. But the firestorm I
want to talk about here was the reaction
of many academics was– this should have gone
through an institutional review board. This should have
gone through an IRB, and if it wasn’t
going to go to an IRB, academia should have had
nothing to do with it. It just should have been
published by the Facebook people but keep academia pure. We don’t want to be a party
to this kind of– they thought it was like the Stanford
Prison Experiment or something, and that’s why we
can’t have nice things. If we could just undo Milgram
and Stanford– anyway, don’t get me started. So looking at this, my
reaction is the opposite. It’s like, be
careful what you ask for because who were
the critical parties to this experiment? Facebook. They didn’t need the academics. They were just getting
some extra cred in academia by publishing for the
world their findings. If they don’t publish
for the world, in the words of the
common rule– if they don’t contribute to generalizable
knowledge, no IRB would apply. So Facebook is
perfectly entitled to do all the
experimentation they want, subject only to the hypothetical
restrictions that we’ve brainstormed about today. As long as they’re not publishing,
they’re good to go. That strikes me as a
terrible set of incentives. And the fact is, what
ended up happening with PNAS is they did what
Google did with its explanation. They published an editorial
expression of concern. Disturbed by this paper? We are too as the
publishers, but we’re not going to retract it. We’re just going
to tell you that we feel very uncomfortable about
what we did last summer. And that’s all there is to it. But this world of
data and algorithms is not a world
in which there is a natural place for academia. It’s one we have to fight for. This is the particle
accelerator at CERN. Also, the original physical
home of the World Wide Web. This particle accelerator is the
kind of public good that it had to be a collaboration
between .gov and .edu, metaphorically speaking,
to generate it. And once that was
done, the results go public because it’s a
public good paid for by taxpayers and foundations. It’s in the realm
of public knowledge. If it somehow was
sensible for dotcoms to build these
things– it’s like, Google just smashed
some atoms, and we just want to tell you the results
are very interesting. A trade secret, but
might be a Higgs boson. For us to know and you to find out. That’s the kind of thing that
we wouldn’t accept in physics. I don’t think we would. At least we would want
to think about it. And this is what our
information landscape is doing. It’s shifting, and when it
shifts you run into trouble because the values of academia
cannot be guaranteed to come along with it. That’s what Sergey and Larry
were worried about in 1998. And sure enough, this book
is not just a product. This book is War and Peace. This is an expression
of humanism. This is humanities– one
of its greatest works to hear some tell it. I started reading War and Peace
on the Nook not that long ago and came upon this
bizarre phrase in it. That a vivid glow had
Nooked in her face, and the flame of
the sulphur Nooked by– what’s going on here? So I searched for the word Nook
in the book and it’s all over. I mean, War and Peace is long
to be sure, but what’s going on? It turns out every place
the word Kindle appeared in War and Peace
has been replaced by Nook in an accidental
case of the worst product placement ever.
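The likely failure mode is a one-line global substitution run over the whole text without word boundaries. The ebook’s actual tooling is unknown; this is a reconstruction of the bug:

```python
import re

text = "A vivid glow had kindled in her face."

# What the conversion appears to have done: replace every occurrence,
# even inside other words.
print(text.replace("kindle", "Nook"))
# -> "A vivid glow had Nookd in her face."

# What it should have done, if done at all: respect word boundaries.
print(re.sub(r"\bkindle\b", "Nook", text))
# -> unchanged, because 'kindled' is a different word.
```

Now, if you’re Barnes and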
Noble you’re like, whoops. Our bad. Would you like your
$0.99– nah, forget it. If you are the Harvard
Law School Library and you’ve done
this– I don’t know. I would like to think the
librarians in the room would be a little bit
more than like, our bad. That would be a problem, right? These are the– seriously,
when I was the director, I was like, on it. So that’s where
academia comes in. And the only thing, I think, for us as
academics to realize, whether we’re thinking about
our role in the production of knowledge or
in whether or not it should be pro-vax or
anti-vax going in a knowledge base, maybe not organic
search in Google– the only thing we
should be thinking of is to be careful not to
assume that we are oracular. Because one of the best gifts
of the past 20 years has been the realization that no one
needs to be the Oracle anymore, able only on the basis of
asserted authority to say, this is true and this isn’t. We use proxies all the time. We desperately need them
because we can’t personally research everything. But those proxies can have
fights with other proxies. What they do can
be made public so that Ralph Nader has
the time to look at it and tell us what he thinks. We can come up with
ways to do that. And in fact, I think it is
to academia’s enduring shame that we’ve remained
at such a distance from Wikipedia
for so many years. In fact, that we didn’t
create it to begin with. That it took somebody else off
of the proceeds of a search engine to create it
because he thought it would be a good thing
for the world, rather than just another dotcom idea. And he didn’t do it
from academic quarters. It’s time to look at
something like Wikipedia and see that the
process that happens in the public eye as truth
is tangled about and asserted and re-fought about
every day– that’s really what is supposed to be
happening in academia as well. And if we can worry a
little bit less about volume and the number of papers
we’re cranking out and the citations that
they’re racking up as robots are
writing the papers, we can focus more on the
actual kinds of conversations, the dialogues, that I feel
personally so amazingly blessed to have had as a part of my
education over the past, wow, like 10-15 years on and off
at this amazing institution. And I just, again,
want to reiterate how privileged
and blessed I feel and how grateful I am for
the many people in this room with whom I’ve had
conversations over the years and who have supported me. And to say I see Harvard
as a great columned place, not identical with other
great columned places but in complementarity
with them, driving forward a
society in which we can enjoy the kind of
fruits of the technology that we are sowing in a
way that is quite safe. And that gets us just
back to the beginning thinking about AI. Nick Bostrom in his book
on superintelligence likens the development of
AI to drawing from a bag of marbles in which, if you pull out a
white marble it’s a good thing, if you pull out a different
color marble it’s a bad thing, and he’s just like, one day
we pull out a bad marble and that’s it. And my book is over. And it may be the case,
but I like the fact that we can see inside
the bag and that we’re going to be trying to do it
through so many methodologies and have people whose
full-time occupation is peering inside
the bag and doing it for the intrinsic joy of it. A joy that, I again,
feel so lucky to have experienced on many occasions. Thank you very much. [APPLAUSE] The Dean says there’s
time for questions, and of course, if there
isn’t time for questions you’re free to leave. The hostage crisis is over. But questions? I assume there’s a
microphone or something. Ah, yes, Patricia has a mic. [INAUDIBLE] AUDIENCE: Will you be
taking your dad back to the library today? JONATHAN ZITTRAIN:
I wish I could. It will have to be my
sister and her husband and my niece who get the tour,
and maybe Suzanne or others from the library can join us. And the lights are
electric, so even if it’s shut down we can do it quickly. Thank you. Feel free, Patricia, just
to rove with the mic. AUDIENCE: This is
just a quick comment, but when you had the example
about Google and the Jews, I think the answer might have
been very different if it had been Google does nothing
as opposed to Google throws that page up
and puts it at the top. JONATHAN ZITTRAIN:
Say that again. Oh, I see. Yes. That Google chose to buy its own
ad is utterly uncontroversial. Surely it can buy– AUDIENCE: But if they
had not done that– JONATHAN ZITTRAIN: If they
had not done that– AUDIENCE: –the answers in the
room might have been different? JONATHAN ZITTRAIN: Surely, yes. I think you’re right. I think people might have
said, at least, Google, you did that much. And it also might have been at
a time when people– I mean, a lot of people use
Google and a lot of people don’t exactly know whether
Google is supposed to be their tool or their friend. If I’m searching Google for
something, like vaccination, it might not be for a
medical dictionary answer to a question but to see, what’s
going on out there on the web? How much activity is there
around anti-vax stuff for which unadulterated
results would be most helpful. And you’re right that
an explanation that just says, look, this is what
Google is, don’t shoot us, might represent
something helpful there. I don’t know. I think we’re agreeing. AUDIENCE: I think so. JONATHAN ZITTRAIN: Yeah. Yeah, very good. AUDIENCE: So I have a question. JONATHAN ZITTRAIN: Yes. AUDIENCE: You know that I’m
curious about the cartoon on your poster– JONATHAN ZITTRAIN: Oh, yes. AUDIENCE: –so I wonder, as
you were giving the talk, it seemed to me that
it was both about, we don’t know what
our future is, but it also was a little
bit about the difference between the kind of algorithm
where the designers don’t know what they’re
giving versus the ones where they know what
the user doesn’t know. So if you could talk about your
cartoon that would be great. JONATHAN ZITTRAIN:
Yeah, so this cartoon is by John O’Brien of The
New Yorker, who kindly told me to consult The New
Yorker about licensing it, which we did in
the dean’s office, which fronted the $250 needed
to put it on the poster and on the web. But what really struck me
about it was a couple things. First, you’re right that
this is a great example of predictable by the designer
and intentionally, of course, not known and discoverable
by the user until it’s too late in that sort of way. And what I love is, of course,
the juxtaposition of a new car, a Maui vacation, and death. And the idea that, oh,
yes, that’s number three, reflects to me the
ways in which we’re using technologies
of general purpose– Google, Twitter,
Facebook of 2015– for such a ridiculous
range of stuff. If I were having heart
palpitations right now, the first thing I might do is
call 911, but other than that, be Googling heart palpitations
should I be worried? At the same time, I might
be Googling Maui vacation or something– the sublime to
the ridiculous to the utterly grave. The ways in which we’re asking
so much at once is, to me, evocative here. And also, the way that it’s
treated as all fun and games until someone loses an eye. And the reaction
among many of us, I think I might even put
myself in that category, at the time that
Frank Pasquale wrote his first federal search
commission paper was like, come on. It’s Google. What are you going to do? And it’s kind of that idea of
the festive atmosphere that surrounds our joy when
we use new technologies, and I wouldn’t want that to
cloud the very real pedestals on which it is sitting now that
Walter Cronkite used to sit on. And I’m the last person
to say Walter Cronkite was the be all and end
all, but he had a different conception of what
he was doing than Google did. Yes, Lucien. [INAUDIBLE]? Oh, sorry. AUDIENCE: Thank you. You’ve showed us a lot of
examples of people trying to think about these issues, and
surely you dedicate your life to do that as well. How worried are you, if at
all, about not enough people worrying about those issues? JONATHAN ZITTRAIN: What’s
the right level of people worrying about the issues? I think we want enough healthy
skepticism and non-complacency that if there are a small
number of people thinking through issues, vetting them
through with representative groups and such– we live in
a specialized society and one of expertise– that there
could be enough pressure to improve things
that where you need a political movement, where
you need a court ready to say something and not fear that
it’s one decision that’s going to bring the
house down, or you need a legislature
willing to do something, that they feel like
the public is tuned in enough to have their back. So that’s why, in putting
together this talk, I wanted to give
a lot of attention to solutions rather than what– I think of some of the
talks I’ve given in the past and it’s really easy
to get people worried and to just be like anxiety,
anxiety, look at this. And if we look at a lot of
talks out there about privacy and such, that’s what they are. And it’s like, well,
OK, I’m worried now. Are you happy? No, I’m worried too. So I don’t know how
many people need to be worried but skeptical,
healthfully critical. And of course, one of the things
left out among the solutions was the user education thing to
the extent that in our schools, day by day, where we’re
warehousing kids for years at a time just trying to
keep them from getting hit by cars or caught up in tort
cases about this, to have them in the
schools doing things like editing Wikipedia
and justifying their edits and pushing back when somebody
pushes on them– that’s the kind of thing that
would lead to a population of enough general literacy. That it’s not just
scare them and then tell them to place a call
to a representative, because that gets
automated too, but rather to be a little bit more
thoughtful in their margins and collectively demanding
about what the market produces. Yes? AUDIENCE: I see why you
are drawn to competition, and I think that competition
can help with some of the problems you discussed. JONATHAN ZITTRAIN: Yes. AUDIENCE: But with
some of the problems, I think competition would
make things worse, not better. And if you think about the
examples you had about hoping that Google might downplay
or eliminate, say, anti-Semitic sites or
anti-vaccination or mugshots, the reason why you can hope
that Google will do that is if Google can do it without
being threatened with losing substantial market share,
because it’s the default of monopolies– JONATHAN ZITTRAIN: Yes. AUDIENCE: In a world
in which there was substantial
competition, if Google were to try to do it then there
would be another service that would say, we’ll just give
you the sites as they come. JONATHAN ZITTRAIN: Yes. AUDIENCE: And all the people
who don’t want paternalism. JONATHAN ZITTRAIN: Yes AUDIENCE: –and they do want
to get anti-Semitic sites and they do want to get the
mugshots would be going there, and that would be a perverse
effect of competition. JONATHAN ZITTRAIN: So
that’s a great point, and the way I would
try to rephrase it without doing injustice to
it in the terms of the talk would be to say,
if to the extent you are concerned
about toxicity, an absence of competition
provides a one stop shop to administer your anti-toxin. If you think that cars can have
certain dangers– AI driven cars, for example– but
there’s only one maker of them, all right. You go sit down with that one
maker and you work it through. And that’s the story of if you
want to stop criminals and do it through lawful interception of
telephonic communications, when it’s just AT&T, it’s
much easier to affect when it’s VoIP and
all these other things that we would normally think of
as the fruits of competition. So that then just is
either a to-be-sure that if you’re worried
about toxicity the way I’ve defined it, having
only one place to go to do it makes your job easier. But of course, that
has to get weighed against all of the other
benefits that come. And the anti-toxicity thing,
I would be careful not to generalize about. I think many here might be
skeptical of most of the things we would decide as toxic,
and others, of course, I can find, we can find, something
that everybody thinks is. And the fact that we have a
distributed web and BitTorrent and all of these other
things that roughly fall under the rubric
of distributed, if not competition, makes
effective regulation more difficult.
And yes, that does provide a regulability problem. Yep. MARTHA MINOW: There’s one more
thing we need to do, however– JONATHAN ZITTRAIN: OK. MARTHA MINOW: –and that
is come get your chair. JONATHAN ZITTRAIN: [LAUGHS] Thank you. [APPLAUSE]
