Towards an Innovative Web

REIHAN SALAM: OK. Urs Gasser. URS GASSER: So first of all, let me clarify that neither Vint Cerf nor I am a Boston city official, despite the ties. [LAUGHTER] VINT CERF: I’m sorry. It’s all the clothes I’ve got. It’s just the way it is. URS GASSER: I’m delighted
to be here. Thank you so much for
the invitation. It’s a fantastic,
very rich day. What do we need for
a responsible and innovative web? Cover that in half an hour? Well, we’ll see. I would like to share three quick observations, hopefully building on some of the points that we touched upon throughout the day, and those
are with an eye towards policymaking. My first point: as users, parents, teachers, corporate decision-makers, and policymakers, we need a balanced view of both the opportunities and the challenges associated with
digital technologies, especially when it comes
to youth online. Today’s discussion was fantastic
and a very balanced exploration of both risks as
well as the tremendous promises of the internet. In my conversations with parents and teachers, and sometimes with policymakers, the debate is much more one-sided. To make things worse, it’s often
fueled and driven by fear and headlines rather
than by empirical evidence and data. For the future of the web, and for responsible and innovative policymaking around it, we really need to emphasize this point: it is important to craft policies that are driven by data. Amanda Lenhart was on this
panel earlier today. She and her team are doing
tremendous work, for instance, when it comes to youth
policymaking, where the data they gather can really
inform us. Of course, sometimes you have
to act under conditions of uncertainty. We also acknowledged today that there are certain things we don’t know, and we still have to
make decisions, whether as consumers or as policymakers. In such situations, we really need to become more innovative in how we incorporate mechanisms of learning into our decisions, our organizations, and our policies. We often don’t get it right on the first try. How do we ensure learning? The second point I would like to make is more about the approaches, or the
solution space. We’ve identified a long list
of challenges and potential problems and risks earlier
today, ranging from privacy, security, and safety issues to concerns about information quality and gaps in skills and literacy. I think it’s a fair summary to say that one consensus that emerged today is that there is certainly no silver-bullet solution to any of these hugely complex phenomena. As we move forward, I would argue that much of the success of the tools we explored a little bit today, whether educational interventions, laws and regulation, reputation systems (you will talk more about that), or other institutions we use to deal with some of these challenges, depends on better understanding the tools themselves and how they play out in the digital environment. We talked a little bit about
the question of how certain solutions scale, especially
when dealing with huge amounts of data. The example of YouTube this morning was very, very illustrative: at some point, that volume of video needs to be reviewed by human beings. The algorithms are good at flagging and prioritizing but not at addressing the core issue. There you see this kind of
bottleneck situation. Moreover, still in this category
of approaches, we need not only to understand
which tool in the toolbox is useful to address which sort
of problem that you face online, but also to better
understand the interplay among the different instruments that
we have available and that we talked about today. In the same cluster, just
two quick notes. One, on education. I’m thrilled that we spent so
much time and energy today in discussing the promise of
education as a potential strategy to cope with some
of the thorny challenges. As I said, we didn’t only
talk about challenges but also the promises. Education is really important. At the Berkman Center, for instance, we currently have an experiment under way. That’s what I like about education: you can also run experiments and learn from them. In this experiment, which we call the Youth and Media Lab, we bring together high school students and college students with experts in the field of internet policy and create a research and development lab for policymaking as well as for critical analysis and tool development. We’ve heard many other examples
from Esther and Nancy on the power of education. To see it as a source of innovation is very exciting. The second quick remark in this
category is about law. Law, of course, is
also a tool. David Drummond earlier this
morning started with a cautionary note. Don’t jump immediately to law
and think law will solve all the problems, such
as cyberbullying. Of course the world is
much more complex. But one point seems important. We should not mischaracterize
law. Law is not only a constraint
on behavior. Law can also be an enabler. Many of the tools and platforms
and applications we’ve seen today have been
enabled by laws, some of them internet-specific laws,
fostering innovation and protecting innovation. That’s another core pillar, I
think, as we move forward, to think carefully about such laws
as an infrastructure for a responsible and innovative
web in the future. Last point quickly– and that’s the third statement
or observation– we should be increasing interoperability across all layers. Of course interoperability is a
concept many of you in this room here are deeply familiar
with at the technological and data layers. Many of the opportunities characterized today, and indeed the web itself as we presented it, are all about interoperability. The social web is about the flow
of data across systems and the meaningful exchange
of information. Not only is that level of interoperability important as we move forward; I would argue that organizational interoperability matters just as much:
the working together of different
stakeholders to address some of the challenges and harness
the opportunities. Parents, teachers, schools,
private companies such as Google and many others need to
work hand-in-hand to create interfaces of collaboration
among institutions. Similarly, at the top layer,
we also need to think about legal and policy
interoperability. We talked a little bit about legislation in the context of cyberbullying, where you have different state laws that are not interoperable, putting companies such as Facebook in a tricky position if they want to comply with all the laws. The international dimension, as was mentioned, should be acknowledged too: privacy laws create all sorts of interoperability problems for global internet companies. Those are my three observations. REIHAN SALAM: Thanks
very much, Urs. And Daniel Kent of Net
Literacy, would you like to go next? DANIEL KENT: Sure. I’ve prepared some remarks just so that I’m cognizant of the five-minute limit. REIHAN SALAM: Terrific. DANIEL KENT: I’m honored to be
here with two of the most forward-thinking internet
visionaries. I’d like to add my
congratulations to Google for convening this discussion. While these gentlemen were busy
inventing and crafting the infrastructure and shaping
the policy that would come to define the internet, I was
growing up as an end user. I’m a little bit embarrassed to
say that it took me until the fourth grade to open up my
beige box and upgrade my RAM to what I think was 256. That issue of confidence is
still a large barrier to individuals around the world
in adopting broadband and computers and access devices and
learning how to navigate through the net. As a digital literacy
practitioner, it was especially poignant when senior
citizens told me that they felt that they were
too old to learn. Sort of switching gears,
organizations that provide digital inclusion data like
Pew and the International Telecommunication Union have
highlighted the serious barriers that remain to computer and broadband use. Today those who are offline
are being left behind. Tomorrow those without access to computers and the internet will be, in a very real sense, disabled: unable to reach the increasing number of services that are migrating to the net and unavailable offline. Those who are disconnected in
North America and abroad will become poorer, detached from an
increasing torrent of rich information, and less able to
compete in school, in the workforce, and in life. As we progress into the 21st
century, those that are not net savvy will be increasingly
relegated to an underclass that may be subject to the ugliness of a new form of discrimination. Four years ago, I was a teenager,
and I can tell you from very recent experience that
to my generation, Web 2.0 and social media are a double-edged sword. As Dora said earlier this
morning, while it has empowered us, it has also
created new responsibilities. It’s social to tweet and plus 1
or like pages and posts and photos, but unfortunately,
members of the digital generation sometimes make
poor decisions and post inappropriate pictures
or comments. Five years ago, few fully realized the future impact of their net reputations. But today, scholarship programs,
colleges, and employers are regularly
googling applicants. It’s been recently reported in
the news that some employers are now asking their employees to turn over their Facebook logins and passwords. Setting aside the privacy and other issues that raises, it’s clear that what we’re doing online today will impact our future well-being. Users need to become more aware
and better educated about their net reputation. I commend Google for creating
Me on the Web, a web application that integrates
with your current Google account. It helps users manage and
control what others can find about them on the web and know what is published about them online: not only what they publish themselves but also what others publish about them. Google’s and the Berkman Center’s initiatives to inform educators, innovators, policymakers, and end users are truly the way to build the character of our community of digital citizens. As we discuss what’s on the
horizon on this panel, I’ll try to have my comments reflect
a digital inclusion practitioner’s perspective. I’m mindful that without
digital inclusion and literacy, some of us will not
enjoy the benefits of a responsible and innovative
web. REIHAN SALAM: Thanks
very much, Daniel. And Mr. Cerf, with the home team, would you like to share your thoughts with us? VINT CERF: I didn’t know we
were in a competition. This was to be a cooperative
view of what’s actually going on. Let me start out by making
a couple of observations. I want to focus on responsibility for just a moment. Among the various things we
enjoy on the internet, at least many of us, is the freedom
to speak and to hear and to interact and
to share and to collaborate and cooperate. But I think there’s a freedom
that is often not articulated, and that’s freedom from harm. We are not free from harm. We know that the internet is a
place where harms can occur. It’s a very complicated
environment because the perpetrator of a harm may be
in one jurisdiction or one country and the victim
might be in another. This gets to your interoperability question: across jurisdictional boundaries, it’s not clear that, let’s say, reciprocity is available. We have a global problem when
it comes to improving safety on the internet. I’ve often concluded that there
are only three kinds of things that you can do
in order to cope with the safety problem. One, you can create technical
means to prevent the harm from happening: smarter operating
systems, paranoid browsers that don’t let themselves
get infected, and so on. But frequently the technical
solution doesn’t work here. It’s not adequate. The next thing you do is you
try to figure out how to detect that something
bad has happened and figure out who did it. This is sort of detection
and punishment. That’s what we do when it
comes to drunk driving. We can’t stop you, but we pass
laws saying if we catch you, there will be consequences. That’s the second kind of
response that you can have. The third one is moral suasion. I don’t mean to suggest that’s
a weak response. In some cases, saying don’t do
that, it’s bad, it’s harmful, it hurts, what if they
did it to you is a pretty good argument. Those are three elements that
I think are very important. I think also what you’ve been
hearing today on the part that I was able to participate in
underscores another really important point: the power and
importance of infrastructure. That’s really what a lot
of this is about. The internet and the World Wide
Web and the apps that sit on top of mobiles are a
consequence of enabling infrastructure. I would not ever want to
discount the power that that kind of enabling infrastructure
has. I want to come back to the
responsibility question, though, because regardless of
those three mechanisms that I mentioned, taking responsibility
for the way we behave on the network is
a very important thing. What we don’t have, though, on
a global scale is a sense of what the social norm should be
in this online environment. It’s come up several times in
the earlier conversations. I don’t think that we
can make this up. I think we are literally going
to end up having to live through this sort of thing. Let me give you a
trivial example. When we’re walking towards each
other down the street, it’s generally common that we
try to figure out how not to run into each other. It’s just socially more
convenient that way than smashing into each other and
having an argument over who gets out of the way first. That’s a trivial example of a
social norm, but it’s the kind of thing that we need to
find in this cyberspace environment. The people who are engaged are the ones who are going to have to figure out what that is. Parry Aftab has the Teenangels activity that’s been going on for 14 years now. These are kids trying to help other kids figure out what’s the right kind of behavior, what makes sense to them. Those kids are going to be a lot more credible offering opinions and suggesting ideas than any adult or any other authority figure is likely to be. Let me give you another example
of participatory involvement in trying to create
a safer environment. Google and Paypal and Qualys
and other several companies have gotten together to form
something called StopBadware, which was originally started in
the Berkman Center and then was exported out as
a not-for-profit. What it does is respond to the
problem of infected websites. When Google indexes the World Wide Web, we have a crawler that tries to look at every single web page and makes notes about the ones it thinks might be infected. We download the web pages, and a piece of software looks at them. If it thinks there is a piece of malware on the website, it makes a little note saying this site might be dangerous. When somebody does a Google search and one of the results is a link to one of those marked sites, if you click on the link, before we let you go there, we pop up this interstitial web page. It’s bright red. It says: don’t go there; there might be malware on the site.
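[Editor’s note: the crawl-and-warn flow Cerf describes can be pictured with a short sketch. This is a minimal illustration under assumed names and signatures; it is not Google’s actual pipeline or the real Safe Browsing service.]

```python
# Minimal sketch of the flag-and-warn flow: a crawl-time pass notes
# suspicious pages, and a search-time pass interposes a warning.
# The signature list, function names, and data structures are hypothetical.

SUSPICIOUS_SIGNATURES = [
    "eval(unescape(",                          # a classic obfuscated-payload pattern
    "<iframe src=\"http://badware.example/",   # a hidden drive-by iframe
]

def scan_page(url: str, html: str, flagged: set[str]) -> None:
    """Crawl-time: make a little note that this site might be dangerous."""
    if any(sig in html for sig in SUSPICIOUS_SIGNATURES):
        flagged.add(url)

def follow_result(url: str, flagged: set[str]) -> str:
    """Search-time: show an interstitial before letting the click through."""
    if url in flagged:
        return f"WARNING: {url} may host malware. Continue at your own risk."
    return f"Proceeding to {url}"

flagged_sites: set[str] = set()
scan_page("http://example.test", "<html>eval(unescape('%65'))</html>", flagged_sites)
print(follow_result("http://example.test", flagged_sites))  # prints the warning
```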
If you happen to be the site owner, somebody comes to you and says: ha, Google says you’re infected. Of course, most of these
people don’t know that. They think they’ve
been insulted. They make all kinds of noise and
say, I never did anything. Of course, they probably
didn’t. They probably had
a website that wasn’t adequately secured. Somebody else infected
it for them. We send them to StopBadware. StopBadware goes through the
website with a fine-toothed comb and helps them
clean it out. That’s an example of a kind of
responsible behavior in this environment, which we know
is not purely safe, not absolutely secure. This is a continuous process. It’s not something
you just do once. It’s like brushing your teeth. If you brush your teeth once
in your life, it doesn’t do you any good. You have to brush
them every day. It’s like cyber hygiene,
same thing. I realize I don’t want to
go on and on and on. Let me pick one other point,
then get into some discussion. Again, this is about responsibility. Personal responsibility, in my view, includes critical thinking. What in the heck am I
talking about? How many times have you gotten a message from somebody that says the post office is going to start charging a penny for every email? Go look it up on Snopes first.
Take some responsibility before you propagate nonsense. It’s true that there’s
misinformation on the net, and we’re never going to
get rid of that. But one thing we can do is be
a little bit more thoughtful about deciding whether we
should believe this information or not. That’s called critical
thinking. One of the best lessons that we
can teach our kids is how to think critically about the
information that they get. They’re not going to get misinformation just on the net. They’re going to get it from movies, from television, from their friends; even their parents might be misinformed. There are a variety of sources
of misinformation. Kids need to learn how to sort
that out as best they can. If we teach them how to do that,
then they will be able to defend against misinformation
in all of its forms, not just on
the internet. Mr. Chairman, I have one other
anecdote to tell you. This is Kissinger’s complaint. I had lunch with
Henry Kissinger several years ago now. The first thing he said was
he hated the internet. I thought, well, that’s a great
way to start out lunch. The reason he said he didn’t
like it was that people had become satisfied with too
little information. A snippet will do. You understand that
Dr. Kissinger writes 700-page books. He wants people to read all
700 pages of the books. He doesn’t want them to be
satisfied with a paragraph. So he was unhappy about that. On the other hand, the thing
that he liked was Google News, because he could get a sense for
what everybody was talking about in a very rapid way. But he did point out that
he’s very upset that his grandchildren do not know how
to read cursive writing because the only thing they
ever see is print. All those letters that he has
from famous people in history, the kids, his grandchildren,
can’t read them, because they don’t read cursive. I told him I didn’t know
what to do about that. That’s his problem. Mr. Chairman, I turn this over
to you, and thank you for the opportunity to– REIHAN SALAM: Thank you. VINT CERF: –hold
forth this way. REIHAN SALAM: Urs, I wonder,
when Vint was talking about freedom from harm, the third
element he mentioned was this idea of moral assuasion. You mention very briefly during
your remarks reputation systems. I wonder if you have
any thoughts about reputation systems and whether we might
see the emergence of global reputation systems that could
help us maintain a more responsible web and also serve to help users navigate the web in a different kind of way? URS GASSER: Great question. I truly believe that a
reputation system as a governance mechanism has
tremendous potential. But at the same time at least
from what I’ve seen, which is of course only part of the
discussion, it’s extremely hard to think about the design
of such systems because you run into all sorts of issues. VINT CERF: Well, what
if somebody lies? I mean, what if you tell everybody that your competitor makes bad material and it isn’t true? Now we’re back to critical thinking. URS GASSER: I’ve seen it with dentists and so forth, and in the professional space with teachers as well: what are the appeal mechanisms, the correction mechanisms? But perhaps more importantly, thinking again about the global context– and Jill could obviously comment on that much better– for reputation, you need some sort of
identification. But it’s not always desirable to have full identification on the web, where every action, every expression, every comment can be attributed to a particular speaker. There are mechanisms in computer
science where you can have persistent identities and build reputation systems without losing anonymity. That’s a hugely complex design challenge, though. VINT CERF: Can we keep going
on this just a little bit? First of all, I think anonymity is so very important. We all understand there are situations where non-anonymous speech can be fatal. So we need to preserve in this
environment the ability to speak pseudonymously. Sometimes it’s important to
be able to speak in an authoritative way with your
name and make sure other people can believe that it’s
you and not somebody else pretending to be you. On this side of reputation
systems, though, it is possible in this space to have
brands that do not necessarily identify an individual but which
by their repeated value, their content will establish
brand and therefore establish reputation. There are people on the net who
make comments on blogs. I don’t know who they are. They have a handle. But if a handle’s comments tend to be thoughtful, then you want to give it more credibility. I am always nervous about the
mechanics of reputation systems for the reasons
that you described. Any engineer who’s trying to design something needs to think not only about how it is going to work, but also about how it is going to be abused, and how to deal with that.
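[Editor’s note: a small sketch may help picture the persistent-pseudonym idea Gasser raises, together with Cerf’s question about abuse. The key derivation and vote weighting below are hypothetical design choices, not any deployed system.]

```python
# Sketch: reputation accrues to a pseudonym derived from a key the speaker
# controls, so the identity is persistent but not tied to a real name.
import hashlib
from collections import defaultdict

def pseudonym(public_key: bytes) -> str:
    """A stable handle derived from a public key; reveals nothing about the holder."""
    return hashlib.sha256(public_key).hexdigest()[:16]

class ReputationLedger:
    def __init__(self) -> None:
        self.scores: defaultdict[str, float] = defaultdict(float)

    def rate(self, rater_key: bytes, target_key: bytes, vote: int) -> None:
        rater, target = pseudonym(rater_key), pseudonym(target_key)
        if rater == target:
            return  # crude abuse guard: no rating yourself
        # Weight votes by the rater's own standing, so freshly minted
        # sock-puppet identities carry little influence (one answer to
        # Cerf's "how is it going to be abused?" question).
        weight = 1.0 + max(self.scores[rater], 0.0)
        self.scores[target] += vote * weight / (1.0 + weight)

ledger = ReputationLedger()
ledger.rate(b"alice-key", b"bob-key", +1)
print(ledger.scores[pseudonym(b"bob-key")])  # 0.5: one upvote from a new rater
```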
REIHAN SALAM: One thing that strikes me, and this is a theme
of our panels so far, is this idea that many of the
problems that arise in the real world arise in
the web as well. For example, when we’re talking
about reputation systems, credit scores don’t
work terribly well. But it’s a fairly lawless thing
where you have several companies that provide them. It’s something that’s
emerged over time. It’s kludgy. So when you think rigorously
about it, people are providing this public good, but in
a for-profit context. It’s just out there, and we use
it in an imperfect way. That actually brings
me to a remark that Vint had made earlier. You talked about much of what
Google does in the context of identifying sites
with malware. That work could be understood
as a kind of provision of a public good. Yet, of course, Google is
an organization that is constrained. It’s also an organization that
has been very much shaped by the recent retreat of many
websites from the open web to the closed web. And I wonder how you think about
that evolution and the implications of that
for a generative and responsible web? If you’re a sheriff of this
terrain that is shrinking, as more and more of the web is
closed, that presumably introduces all kinds
of dilemmas. VINT CERF: Can I make
two observations? First of all, we’ve had
this happen before. AOL, for example, started out as
a very much closed system. It was forced by the users
to go outward. My honest belief here is that
the pendulum may be swinging. If the users find that a closed environment gives them a poorer-quality experience– limits on the amount of information available, more difficulty in finding things that are relevant, less ability to see across a broad spectrum of content– then they’re going to push back. That’s one possibility. The second thing is that, to
bring back the responsibility point, if companies like Google can’t see those things, and therefore can’t help protect people against them, that creates an opportunity for someone to design systems in the browser that are a lot more paranoid about what they download. The browsers will still see the content that’s inside that wall. They then become the operative opportunity for detecting that there’s something wrong.
than we have today. That’s why we’ve done Chrome. REIHAN SALAM: Urs, I wonder,
because you’re at this intersection of the academic
world and also working on public policy questions, I want
to press a little bit further on that idea of the
public goods provider. When that public goods provider
is a for-profit firm that faces its own larger
strategic context, what do you think about what other civil
society actors can do to help buttress that public
goods role? Or perhaps see to it that that
role is carried out when this firm or that firm fades
from the scene? URS GASSER: It goes a little bit back to my interoperability point. I truly believe that some of the challenges we identified and addressed, as well as the possibilities, can only be dealt with successfully if we work together. Of course, that working together includes developing a shared strategy and having public policy discussions about checks and balances. I share your concern; of course it’s troublesome. Vint made an excellent point about how important infrastructure is, and core infrastructure is not only the fiber optics or the cables in the ground but also search, platforms, and communication. If that core infrastructure
is in the hands of private companies, we certainly have
an institutional challenge. We need to have a conversation as a society: how do we build in checks and balances? How do we reallocate mechanisms
of control and supervision? These are tricky questions,
and we don’t have good answers yet. REIHAN SALAM: Daniel, I wonder
given that your mission is to increase net literacy and to
bring people and digital inclusion, it occurs to me that
there is a sense in which having a more cognizable,
accessible space that is moving away from an anarchic web
to a web that consists of walled gardens, might actually
be advantageous from your perspective. Does that make sense to you? DANIEL KENT: No,
it really does. One thing to take in
consideration is it’s probably going to be a balance
going forward. One thing that we do
is use a lot of education for the end user. In that respect, Net Literacy is a student-volunteer, completely student-organized and student-run nonprofit. What we do is teach individuals how to be safe online and how to navigate the web in a responsible manner. Working on both fronts is really
the key to addressing the situation. There are so many
stakeholders. There probably won’t be one true solution. REIHAN SALAM: Vint, taking off from Daniel’s observation: there is this great anxiety that the so-called appliancization of the web is changing the way users think about how they use the web. You expressed this confidence,
it seemed to me, that eventually users might decide,
wait a second, we don’t like these walled gardens. We prefer a more open
environment. Do you fear that the way we use
the web is shaping the way we think about it? What do we imagine to
be the generative potential of the web? Do you worry that a new
generation of web users who are being weaned in the context
of these walled gardens might not demand
a more open web? That in fact, they might
actually be trained to stay between the lines, as it were? VINT CERF: Possible. Let me suggest a couple
of metaphors here that might be useful. First of all, you hear about
the World Wild Web occasionally. People think about the settling of the West. First the explorers come, and then the pioneers. And they have to shoot the Indians or whatever it is in order to survive. There’s all this conflict
and lawlessness and everything else. When you finally settle,
you find people demanding more safety. That’s what governments
are about. Governments, in part, are
supposed to be there to act in the public interest
to provide a safer environment for everyone. My belief here is that as we
settle this environment, we’ll be looking for more certainty
and predictability about what things can happen to us, and
what we can do about things that we don’t want
to have happen. It’s conceivable that there will
be some sense of safety arising out of these walled
environments. But I have to tell you that when
the World Wide Web first showed up, there were two really
important lessons that are relevant to something you
said earlier, and something that I think is important
to all of us. The first one is the enormous
amount of content that flowed onto the net. People had a way of expressing
themselves. They wanted to share
what they knew. This was not driven by a desire
for remuneration. It was driven by a desire that
someone find what you knew to be interesting and useful. The second thing that’s very important, which goes to your point about how we learn to be safer on the net, is this: do you remember that when web pages first showed up, the browsers allowed you then, and still allow you, to see how that web page was constructed? Show source, or view source. The webmasters taught
each other. You were allowed to copy some
other webmaster’s technique. You could try out new ideas
and other people could learn from that. What I hope, and I don’t know
how to do this yet, but what I hope is that we have a similar
way of making transparent what people choose to do to make
themselves feel safer and be safer in the internet
environment. If we can find a way of making
the viral learning happen, that would be a good thing. To come back to the primary
question, I believe that there is so much pent-up desire to
share information with other people that there will
be resistance to staying inside the walls. The 60 hours of video going into YouTube every minute– I don’t know what the number is now, but that’s what I’ve heard recently– is evidence of this great continuing desire to share what you know and share what you have. I don’t see walled gardens supplying you
necessarily with that capability. REIHAN SALAM: Thanks so much, you guys. I’m afraid we’re already over time. I wonder whether perhaps we could
address a couple of questions before we wrap up? VINT CERF: Did you say
wrap up or crap out? REIHAN SALAM: Wrap up. VINT CERF: Wrap up. REIHAN SALAM: Same thing. AUDIENCE: Good afternoon. My name is William Clements
and thank you for the robust panel. Given that we see that the
internet is just a reflection of the people who use it, what
are the best practices or mechanisms that can be
leveraged to increase user-level accountability and
responsibility without sacrificing openness
and creativity? What are some of the best practices for that? What do we do now to implement that vision? What is the vision of the ideal
web in which we can be innovative yet protected? REIHAN SALAM: That is an
extraordinarily broad question you’ve asked, and I’m
glad you asked it. Daniel, you want to take
a first crack at it? DANIEL KENT: Sure. It’s about empowering the users: through education and through a bottom-up approach, users will be able to grasp that they’re not just one individual in a morass of everything that’s going on in the net. They’re a citizen in a community. They have a responsibility to do unto others as they’d want others to do unto them. REIHAN SALAM: Anyone else? URS GASSER: I would share that
view and argue it’s a concentric circle model,
potentially. It starts with every single individual user, with this notion of self-responsibility and being a good citizen on the web. Then the social norm dynamics kick in: in the family context, with your friends, with your online and offline friends. It goes on and on. At the outer circles, you have companies such as Facebook or Google, but ultimately also policymakers and lawmakers. It’s again a shared
responsibility. You start with the individual, and the individual user has a role to play here. REIHAN SALAM: We have
time for one more– VINT CERF: Can I get
in on that one? I may wipe out the next
question as a result. I want to give an engineer’s
response to that question. I’m talking to the engineers: if there are any engineers in the audience, or who see this video later, listen. You have a responsibility.
need to think about how they’re going to be abused. You have to think about how to
build in better protection. You need to give tools to
people who want to be responsible but don’t
necessarily have any time or ability to go down
into the details. When you put automatic braking
systems into the car, you do that to help people be more
responsible because they don’t necessarily know how to go
pump the brakes whenever they’re skidding on ice. The whole point here is that
we don’t give people enough tools to be responsible
citizens. This business about changing
your password every 30 seconds and keeping a list of 700 of
them so none of them are the same, that’s silly. We really need to do a better
job of providing tools that let people be responsible
without so much crazy effort because they can’t. It’s not possible. The point here is that when a
civil engineer gets a license as a civil engineer and builds
a bridge and the bridge collapses, if it’s demonstrable that the civil engineer’s design was faulty, as opposed to the implementation or the materials, he bears liability. I’m sure I’ve just scared
every programmer in the country by suggesting that
there’s liability for bugs in the program. But we should be thinking along those lines, not necessarily imposing legal liability. Bugs in programs are going
to be with us till the cows come home. Being more thoughtful and
feeling responsibility for dealing with that problem
is absolutely essential. REIHAN SALAM: We have time
for one concise question. URS GASSER: We have
one question. VINT CERF: There’s one way
over there, in the dark. Would you like to come
and be visible so the camera can catch you? AUDIENCE: Should
I come forward? REIHAN SALAM: Please do. VINT CERF: Enter
into the light. AUDIENCE: So there’s been a
craving for fact checking on the internet. A lot of people see things which
are obviously wrong. Then they say, this is not fact. We need to make sure every blogger writes actual facts. You can imagine situations in the future where companies like Google, as you type your blog post, will suggest facts to you. This is what Google is trying to do, trying to move towards helping the user. My question is, who decides
what fact is? Do you let these companies
decide what fact is? Or do you encourage users to write things that are not factual but might be imaginative or creative, which will expand users’ understanding? REIHAN SALAM: It occurs to
me there are some nested assumptions in that question. Urs, would you like to
field that question? URS GASSER: I think it’s– REIHAN SALAM: Or, Vint? VINT CERF: I would actually
like some help, because in spite of the fact that the questioner came into the light, I couldn’t hear all of it. I can’t lip-read him
from the dark. You need to help me understand
what the question was. REIHAN SALAM: I believe that the question was about the notion that we ought to encourage facts rather than the proliferation of false statements, and that we might use technology
somehow to encourage the propagation of
information that is true rather than false. VINT CERF: I wish that I had
Harry Potter’s magic wand to make that happen. I don’t think I can. In fact, if anything, responsible journalism is evaporating on us. Instead of reportage and fact, we’re getting opinion. The op-ed page has bled off into the news page. If I’ve offended any newspaper
journalists here, it was on purpose. I actually don’t know of
a technology that will necessarily do anything. But the reputation system notion
that came up here feels like that might be
a place there. Lots of eyes looking on
something can react. In the earlier comments that
Tiffany made about the Kony situation and the response of
all those eyes looking at that story and saying, wait a minute,
that might turn out to be the path to which we can deal
with the problem you’re describing. It’s very clear that
it’s a big problem. We have to do something. Critical thinking
is part of it. Reacting if you know better,
is important to share. REIHAN SALAM: One thing I’ll say
to push back against that slightly is that another
way to think about it is tension blindness. There’s this notion that
you could say that x or y isn’t true. Another way of saying it is
that I’m looking at a different angle on
the situation. Because I’m focusing on this
particular aspect of a given large, complex, unruly
phenomenon, it’s necessarily going to be in tension
with someone else’s characterization of it. One thing that worries me is
that when you have the idea of a technological solution, the
underlying premise is that there is some true
interpretation. The trouble is that when you have a much more small-d democratic medium, you have far more people able to offer their narratives concerning some situation. Then necessarily it’s going
to look as though there’s more chaos. There are more things that are
said that are not true. But in fact another way of looking at it is that we have the wherewithal, the ability, to have more angles on a given situation. That is going to look chaotic. You do need to be a responsible
user to be able to navigate that world. But I wouldn’t say that we’re
necessarily seeing a diminution of what is
true in terms of our information diets. VINT CERF: You’re channeling
Esther Dyson. Esther has this wonderful
quote. She says, the antidote for
bad information is more information, not censorship– not that I’m accusing
you of censorship. It’s a really solid
observation. We have the responsibility to
try to find out, as best we can, what the real answers are, or what the real truth is. If we don’t do that, then
we harm ourselves. REIHAN SALAM: Everyone, we have
a reception now at which you can pepper these guys with
even more questions. Thank you so much
for joining us. This was great fun for me. I hope it was fun for you as
well and informative, more to the point, and not
full of lies. VINT CERF: I hope not, anyway.
