connected to the Internet. To link to the network is not to commit your hard
disk to anyone's use. The physical layer remains controlled, even if the
code layer is free.
Here we see the source of the compromise that this chapter is all about.
For in an important sense, the cable network is simply asserting the same
rights with "its" equipment that I assert over my machine when connected
to the Internet. My machine is mine; I'm not required to make it open to
the world. To the extent I leave it open, good for the world. But nothing
compels me to keep it open.
Leaving the ends free to choose, then, creates an opportunity for them to
choose _control_ where the norm of the Internet has been _freedom._ And con-
trol will be exercised when control is in the interest of the ends. When it
benefits the ends to restrict access, when it benefits the ends to discriminate,
then the ends will restrict and discriminate _regardless of the effect on others._
Here, then, we have the beginnings of a classic "tragedy of the com-
mons."[10-46] For if keeping the network as a commons provides a benefit to all,
yet closing individual links in the network provides a benefit to individuals,
then by the logic that Garrett Hardin describes in Chapter 2 above, we
should expect the network "naturally" to slide from dot.commons to
dot.control. We should expect these private incentives for control to dis-
place the public benefit of neutrality.[10-47]
The closing of the network by the cable companies at the code layer is
one example of this slide. Given the choice, DSL providers would do the
same. Wireless providers are implementing essentially the
same sort of control. AOL Time Warner is insisting that code using its net-
work be code that it controls.
In all these cases, the pressure to exert control is strong; each step makes
sense for each company. The effect on innovation is nowhere reckoned.
The value of the innovation commons that dot.commons produces is whit-
tled away as the dot.coms rebuild the assumptions of the original Net.
Consider another example of this tragedy in play:
The World Wide Web is crawling with spiders. These spiders capture
content and carry it back to a home site. The most common kind of spider
is one that indexes the contents of a site. The spider will come to a Web
page, index the words on that Web page, and then follow the links on the
Web page to other sites. And by following this process as far as the links go,
these spiders index the Web.
This index, then, is what you use when you run a search on the Web.
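To make the mechanism concrete, here is a minimal sketch of such a spider,
written in Python. It is an illustration only, not any actual search
engine's crawler; the starting URL, the page limit, and the crude regular-
expression extraction of words and links are all assumptions made for this
example.

    import re
    import urllib.parse
    import urllib.request

    def crawl(start_url, max_pages=10):
        """Index the words on each page reached, following links breadth-first."""
        index = {}                       # word -> set of pages containing it
        queue = [start_url]
        seen = set()
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                with urllib.request.urlopen(url) as resp:
                    html = resp.read().decode("utf-8", errors="replace")
            except (OSError, ValueError):
                continue                 # skip pages that cannot be fetched
            # Index the words on the page (crudely: strip tags, keep letters).
            text = re.sub(r"<[^>]+>", " ", html)
            for word in re.findall(r"[A-Za-z]+", text):
                index.setdefault(word.lower(), set()).add(url)
            # Follow the links on the page to other pages.
            for href in re.findall(r'href=["\'](.*?)["\']', html):
                link = urllib.parse.urljoin(url, href)
                if urllib.parse.urlparse(link).scheme in ("http", "https"):
                    queue.append(link)
        return index

    index = crawl("https://example.com")      # hypothetical starting point
    print(sorted(index.get("example", ())))   # pages containing "example"

Real spiders add politeness rules (robots.txt, rate limits) and far more
robust parsing, but the loop sketched here, fetch a page, index its words,
follow its links, is the whole idea.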