
Preliminary Draft
Some Economics of the Internet
by
Jeffrey K. MacKie-Mason
Hal R. Varian
University of Michigan
November 1992
Current version: June 14, 1993
Abstract. This is a preliminary version of a paper prepared
for the Tenth Michigan Public Utility Conference at Western
Michigan University March 25--27, 1993. We describe the
history, technology and cost structure of the Internet. We
also describe a possible smart-market mechanism for pricing
traffic on the Internet.
Keywords. Networks, Internet, NREN.
Address. Hal R. Varian, Jeffrey K. MacKie-Mason, Depart-
ment of Economics, University of Michigan, Ann Arbor, MI
48109-1220. E-mail: jmm@umich.edu, halv@umich.edu.
Some Economics of the Internet
Jeffrey K. MacKie-Mason
Hal R. Varian
The High Performance Computing Act of 1991 established
the National Research and Education Network (NREN). The
NREN is sometimes thought of as the ``successor'' to the
NSFNET, the so-called backbone of the Internet, and is
hoped by some to serve as a model for a future National
Public Network. It is widely expected that substantial public
and private resources will be invested in the NREN and other
high performance networks during the next 5--10 years. In
this paper we outline the history of the Internet and describe
some of the technological and economic issues relating to it.
We conclude with a discussion of some pricing models for
congestion control on the Internet.
1. A Brief History of the Internet
In the late sixties the Advanced Research Projects Agency
(ARPA), a branch of the U.S. Defense Department,
developed the ARPAnet as a network linking universities
and high-tech defense department contractors. Access to the
ARPAnet was generally limited to computer scientists and
other technical users.
In the mid-eighties the NSF created six supercomputer
centers which it wanted to make widely available to re-
searchers. Initially, NSF relied on the ARPAnet, Bitnet and
_________________________________________
We wish to thank Guy Almes, Eric Aupperle, Paul Green, Mark
Knopper, Ken Latta, Dave McQueeny, Jeff Ogden, Chris Parkin and Scott
Shenker for helpful discussions, advice and data.
several direct university links for this purpose, but planned
from the beginning to develop a network connecting the
centers. The planners of this new network, the NSFNET,
designed it to provide connectivity for a wide variety of re-
search and educational uses, not just for the supercomputers.1
The NSFNET was conceived as a backbone connecting
together a group of regional networks. A university would
connect to its regional network, or possibly to a neighbor
university that had a path to the regional network. The
regional network hooked into a regional supercomputer. All
of the supercomputers were connected together by the high-
speed NSFNET backbone, and thus the whole network was
linked together.
This design was quite successful---so successful that
it soon became overloaded. In 1987 the NSF contracted
with Merit, the Michigan regional network, to upgrade
and manage the NSFNET. Merit, aided by MCI and IBM,
significantly enhanced the capabilities of the network. Since
1985, the Internet has grown from about 200 networks to well
over 11,000 and from 1,000 hosts to over a million. About
370,000 of these hosts are at educational sites, 300,000 are
commercial sites, and about 130,000 are government/military
sites. NSFNET traffic has grown from 85 million packets in
January 1988 to 26 billion packets in February 1993. This is
a three hundred-fold increase in only five years. The traffic
on the network is currently increasing at a rate of 11% a
_________________________________________
1 See Lynch (1993) for a brief but detailed history of the Internet.
month.2
The NSFNET was funded by public funds and targeted for
scientific and educational uses. NSF's Acceptable Use Policy
specifically excluded activities not in support of research
or education, and extensive use for private or personal
business. This policy raised a number of troublesome issues.
For example, should access be made available to commercial
entities that wanted to provide for-profit services to academic
institutions?
In September of 1990, Merit, IBM and MCI spun off a
new not-for-profit corporation, Advanced Network & Ser-
vices, Inc. (ANS). ANS received $10 million in initial
funding from IBM and MCI. One of the main reasons for
establishing ANS was to ``...provide an alternative network
that would allow commercial information suppliers to reach
the research and educational community without worrying
about the usage restrictions of the NSFNET.'' (Mandelbaum
and Mandelbaum (1992), p. 76). In November 1992, the re-
sponsibility for managing NSFNET Network Operations was
taken over by ANS. Merit, however, retains responsibility
for providing NSFNET backbone services.
In 1991 ANS created a for-profit subsidiary, ANS
CO+RE Systems, Inc., designed to handle commercial traffic
on ANSnet. It seems apparent that the institutional structure
is developing in a way that will provide wider access to
private and commercial interests. According to the Pro-
gram Plan for the NREN, ``The networks of Stages 2 and 3
will be implemented and operated so that they can become
_________________________________________
2 Current traffic statistics are available from Merit Network, Inc. They
can be accessed by computer by using ftp or Gopher to nic.merit.edu.
commercialized; industry will then be able to supplant the
government in supplying these network services.''
2. Internet Technology and Costs
The Internet is a network of networks that all use connec-
tionless packet-switching communications technology. Even
though much of the traffic moves across lines leased from
telephone common carriers, the technology is quite different
from the switched circuits used for voice telephony. A tele-
phone user dials a number and various switches then open a
line between the caller and the called number. This circuit
stays open and no other caller can share the line until the call
is terminated. A connectionless packet-switching network,
by contrast, uses statistical multiplexing to maximize use of
the communications lines.3 Each circuit is simultaneously
shared by numerous users, and no single open connection is
maintained for a particular communications session: part of
the data may go by one route while the rest may take a differ-
ent route. Because of the differences in technology, pricing
models appropriate for voice telephony will be inappropriate
for data networks.
Packet-switching technology has two major components:
packetization and dynamic routing. A data stream from a
computer is broken up into small chunks called ``packets.''
The IP (Internet protocol) specifies how to break up a
datastream into packets and reassemble it, and also provides
the necessary information for various computers on the
_________________________________________
3 ``Connection-oriented'' packet-switching networks also exist; X.25
and frame relay are two examples.
Internet (the routers) to move the packet to the next link on
the way to its final destination.
Packetization allows for the efficient use of expensive
communications lines. Consider a typical interactive terminal
session to a remote computer. Most of the time the user is
thinking. The network is needed only after a key is struck or
when a reply is returned. Holding an open connection would
waste most of the capacity of the network link. Instead, the
input line is collected until the return key is struck, and then
the line is put in a packet and sent across the network. The
rest of the time the network links are free to be used for
transporting packets from other users.
With dynamic routing a packet's path across the network
is determined anew for each packet transmitted. Because
multiple paths exist between most pairs of network nodes,
it is quite possible that different packets will take different
paths through the network.4
The postal service is a good metaphor for the technology
of the Internet (Krol (1992), pp. 20--23). A sender puts
a message into an envelope (packet), and that envelope is
routed through a series of postal stations, each determining
where to send the envelope on its next hop. No dedicated
pipeline is opened end-to-end, and thus there is no guarantee
that envelopes will arrive in the sequence they were sent, or
follow exactly the same route to get there.
_________________________________________
4 Dynamic routing contributes to the efficient use of the communications
lines, because routing can be adjusted to balance load across the network.
The other main justification for dynamic routing is network reliability, since
it gives each packet alternative routes to its destination should some links
fail. This was especially important to the military, which funded most of
the early TCP/IP research to improve the ARPANET.
So that packets can be identified and reassembled in the
correct order, TCP packets consist of a header followed by
data. The header contains the source and destination ports,
the sequence number of the packet, an acknowledgment flag,
and so on. The header comprises 20 (or more) bytes of the
packet.
Once a packet is built, TCP sends it to a router, a
computer that is in charge of sending packets on to their next
destination. At this point IP tacks on another header (20 or
more bytes) containing source and destination addresses and
other information needed for routing the packet. The router
then calculates the best next link for the packet to traverse
towards its destination, and sends it on. The best link
may change minute-by-minute, as the network configuration
changes.5 Routes can be recalculated immediately from the
routing table if a route fails. The routing table in a switch is
updated more or less continuously.
The data in a packet may be as much as 1,500 bytes, but
recently the average packet on the NSFNET has carried about
200 bytes of data (packet size has been steadily increasing). On
top of these 200 bytes the TCP/IP headers add about 40; thus
about 17% of the traffic carried on the Internet is simply
header information.
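This overhead arithmetic is easy to verify. The following
fragment (a Python sketch of our own, purely for illustration)
computes the header share for the packet sizes discussed in
this paper:

    # Share of each transmitted packet consumed by TCP/IP headers,
    # assuming the minimum 20-byte TCP and 20-byte IP headers.
    TCP_HEADER = 20  # bytes
    IP_HEADER = 20   # bytes

    def header_overhead(data_bytes):
        total = data_bytes + TCP_HEADER + IP_HEADER
        return (TCP_HEADER + IP_HEADER) / total

    print(header_overhead(200))    # ~0.17, today's average packet
    print(header_overhead(500))    # ~0.07, a large FTP packet
    print(header_overhead(1500))   # ~0.03, a maximum-size packet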
Over the past 5 years, the speed of the NSFNET backbone
has grown from 56 Kbps to 45 Mbps (``T-3'' service).6 These
_________________________________________
5 Routing is based on a dynamic knowledge of which links are up and
a static ``cost'' assigned to each link. Currently routing does not take
congestion into account. Routes can change when hosts are added or deleted
from the network (including failures), which happens often with about 1
million hosts and over 11,000 subnetworks.
6 In fact, although the communications lines can transport 45 Mbps, the
current network routers can support only 22.5 Mbps service. ``Kbps'' is
thousand (kilo) bits per second; ``Mbps'' is million (mega) bits per second.
lines can move data at a speed of 1,400 pages of text per
second; a 20-volume encyclopedia can be sent across the net
in half a minute. Many of the regional networks still provide
T1 (1.5 Mbps) service, but these, too, are being upgraded.
The transmission speed of the Internet is remarkably
high. We recently tested the transmission delay at various
times of day and night for sending a packet to Norway. Each
packet traversed 16 links, and thus the IP header had to be
read and modified 16 times, and 16 different routers had to
calculate the best next link for the transmission. Despite
the many hops and substantial packetization and routing
overhead, the longest delay on one representative weekday
was only 0.333 seconds (at 1:10 pm); the shortest delay was
0.174 seconds (at 5:13 pm).
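Measurements of this sort are easy to reproduce. The sketch
below is our own illustration; it assumes a Unix-style ping
command on the measuring host, and the host name is
hypothetical. It sends a single round-trip probe, much as our
hourly tests did:

    import re
    import subprocess

    def probe_delay(host):
        """Send one ICMP echo request and return the round-trip
        time in seconds, or None if the probe failed."""
        result = subprocess.run(["ping", "-c", "1", host],
                                capture_output=True, text=True)
        match = re.search(r"time=([\d.]+) ms", result.stdout)
        return float(match.group(1)) / 1000.0 if match else None

    # One probe to a hypothetical Norwegian host; run hourly to
    # reproduce the time-of-day pattern discussed in the text.
    delay = probe_delay("host.example.no")
    if delay is not None:
        print("round-trip delay: %.3f seconds" % delay)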
Current Backbone Network Costs
The postal service is a good metaphor for packet-switching
technology, but a bad metaphor for the cost structure of
Internet services. Most of the costs of providing the Internet
are more-or-less independent of the level of usage of the
network; i.e., most of the costs are fixed costs. If the network
is not saturated the incremental cost of sending additional
packets is essentially zero.
The NSF currently spends about $11.5 million per year
to operate the NSFNET and provides $7 million per year of
grants to help operate the regional networks.7 There is also
_________________________________________
7 The regional network providers generally set their charges to recover
the remainder of their costs, but there is also some subsidization from state
governments at the regional level.
an NSF grant program to help colleges and universities to
connect to the NSFNET. Using the conservative estimate of
1 million hosts and 10 million users, this implies that the
NSF subsidy of the Internet is less than $20 per year per host,
and less than $2 per year per user.
Total salaries and wages for NSFNET have increased by about
two-thirds (68% in nominal terms) over 1988--1991, a period in
which the number of packets delivered increased 128 times.8 It
is hard to calculate total costs
because of large in-kind contributions by IBM and MCI
during the initial years of the NSFNET project, but it appears
that total costs for the 128-fold increase in packets have
increased by a factor of about 3.2.
Two components dominate the costs of providing a
backbone network: communications lines and routers. Lease
payments for lines and routers accounted for nearly 80% of
the 1992 NSFNET costs. The only other significant cost is
for the Network Operations Center (NOC), which accounts
for roughly 7% of total cost.9 In our discussion we focus
only on the costs of lines and routers.
We have estimated costs for the network backbone as of
1992--93.10 A T-3 (45 Mbps) trunk line running 300 miles
between two metropolitan central stations can be leased for
_________________________________________
8 Since packet size has been slowly increasing, the amount of data
transported has increased even more.
9 A NOC monitors traffic flow at all nodes in the network and trou-
bleshoots problems.
10 We estimated costs for the network backbone only, defined to be links
between common carrier Points of Presence (POPs) and the routers that
manage those links. We did not estimate the costs for the feeder lines to
the mid-level or regional networks where the data packets usually enter and
leave the backbone, nor for the terminal costs of setting up the packets or
tearing them apart at the destination.
about $32,000 per month. The cost to purchase a router
capable of managing a T-3 line is approximately $100,000,
including operating and service costs. Assuming 50 month
amortization at a nominal 10% rate yields a rental cost of
about $4900 per month for the router.
Table 1.
Communications and Router Costs
(Nominal $ per million bits)1

   Year    Communications    Routers    Design Throughput
   1960        1.00                          2.4 kbps
   1962                       10.00*
   1963        0.42                         40.8 kbps
   1964        0.34                         50.0 kbps
   1967        0.33                         50.0 kbps
   1970        0.168
   1971        0.102
   1974        0.11            0.026        56.0 kbps
   1992        0.00094         0.00007       45 Mbps

Notes: 1. Costs are based on sending one million bits of data approximately
1200 miles on a path that traverses five routers.
Sources: 1960--74 from Roberts (1974). 1992 calculated by the authors
using data provided by Merit Network, Inc.
The costs of both communications and switching have
been dropping rapidly for over three decades. In the 1960s,
digital computer switching was more expensive (on a per
packet basis) than communications (Roberts (1974)), but
switching has become substantially cheaper since then. We
have estimated the 1992 costs for transporting 1 million bits
of data through the NSFNET backbone and compare these
to estimates for earlier years in Table 1. As can be seen,
in 1992 the line cost was roughly an order of magnitude
larger than the cost of routers.
The topology of the NSFNET backbone directly reflects
the cost structure: lots of cheap routers are used to manage
a limited number of expensive lines. We illustrate a portion
of the network in Figure 1. Each of the numbered squares
is an RS6000 router; the numbers listed beside a router are
links to regional networks. Notice that in general any packet
coming on to the backbone has to move through two separate
routers at the entry and exit node. For example, a message
we send from the University of Michigan to a scientist at
Bell Laboratories will traverse link 131 to Cleveland, where
it passes through two routers (41 and 40). The packet goes to
New York, where it again moves through two routers (32 and
33) before leaving the backbone on link 137 to the JVNCnet
regional network that Bell Labs is attached to. Two T-3
communications links are navigated using four routers.
Figure 1. Network Topology Fragment
Technological and Cost Trends
The decline in both communications link and switching costs
has been exponential at about 30% per year (see the semi-log
plot in Figure 2). But more interesting than the rapid decline
in costs is the change from expensive routers to expensive
transmission links. Indeed, it was the crossover around 1970
(Figure 2) that created a role for packet-switching networks.
When lines were cheap relative to switches it made sense
to have many lines feed into relatively few switches, and
to open an end-to-end circuit for each connection. In that
way, each connection wastes transmission capacity (lines are
held open whether data is flowing or not) but economizes on
switching (one set-up per connection).
Figure 2. Trends in costs for communications links and
routers.
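Compound annual rates of decline can be computed directly from
the endpoints of Table 1 (a Python sketch of our own; a fit to
all of the observations plotted in Figure 2 underlies the
roughly 30% figure cited above):

    # Compound annual rate of decline implied by two observations.
    def annual_decline(cost_start, cost_end, years):
        return 1.0 - (cost_end / cost_start) ** (1.0 / years)

    # Communications: $1.00 (1960) to $0.00094 (1992) per million bits.
    print(annual_decline(1.00, 0.00094, 1992 - 1960))   # ~0.20/year

    # Routers: $10.00 (1962) to $0.00007 (1992) per million bits.
    print(annual_decline(10.00, 0.00007, 1992 - 1962))  # ~0.33/year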
When switches became cheaper than lines, it became more
efficient to break data streams into small packets and send
them out piecemeal, allowing the packets of many users
to share a single line. Each packet must be examined at each
switch along the way to determine its type and destination,
but this uses the relatively cheap switch capacity. The gain
is that when one source is quiet, packets from other sources
use the same (relatively expensive) lines.
Although the same reversal in switch and line costs oc-
curred for voice networks, circuit-switching is still the norm
for voice. Voice is not well-suited for packetization because
of variation in delivery delays, packet loss, and packet or-
dering.11 Voice customers will not tolerate these delays in
transmission (although some packetized voice applications
are beginning to emerge as transmission speed and reliability
increase; see Anonymous (1986)).12
Future Technologies
Packet-switching is not the most efficient technology for all
data communications. As we mentioned above, about 17%
of the typical packet is overhead (the TCP and IP headers).
Since the scarce resource is bandwidth, this overhead is
costly. Further, every packet from a data stream must
be individually routed through many nodes (12 seems to
be typical for a transmission within the U.S.): each node
must read the IP header on each packet, then do a routing
calculation. Transferring a modest 3 megabyte data file
_________________________________________
11 Our tests found packet delays ranging between 156 msec and 425 msec
on a trans-Atlantic route (N=2487 traces, standard deviation = 24.6 msec).
Delays were far more variable to a Nova Scotia site: the standard deviation
was 340.5 msec when the mean delay was only 226.2 msec (N=2467); the
maximum delay was 4878 msec.
12 The reversal in link and switch costs has had a profound effect on voice
networks. Indeed, Peter Huber has argued that this reversal made inevitable
the breakup of ATT (Huber (1987)). He describes the transformation of the
network from one with long lines all going into a few central offices into
a web of many switches with short lines interconnecting them so that each
call could follow the best path to its destination.
will require around 6,000 packets, each of which must be
individually routed through a dozen or so switches.13 Since
a file transfer is a single burst of demand there may be
little gain from packetization to share the communications
line; for some large file transfers (or perhaps real-time audio
and video transmissions) it may be more efficient to use
connection-oriented systems rather than switched packets.14
Packetization and connection-oriented transport merge in
Asynchronous Transfer Mode (ATM), which is gaining wide
acceptance as the next major link layer technology.15 ATM
does not eliminate TCP/IP packetization and thus does not
reduce that source of overhead; indeed, each 53-byte ATM
cell carries a 5-byte header, imposing its own 9% overhead.
However, ATM opens end-to-end connections, economizing
on routing computations and the overhead from network
layer packet headers. Networks currently under development
offer speeds of 155 and 622 Mbps (3.4 to 13.8 times faster
than the current T-3 lines used by NSFNET). At those
speeds ATM networks are expected to carry both voice
_________________________________________
13 The average packet size is 350 bytes for FTP file transfers, but for large
files the packets will be about 500 bytes each. The header overhead for this
transfer would be about 8%.
14 If there is a slower-speed link on the file transfer path---say 56 kbps---
then higher speed links (T-1 or T-3) on the path will have idle capacity that
could be utilized if the network is packetized rather than connection-oriented.
15 The link layer is another layer underneath TCP/IP that handles the
routing, physical congestion control and internetworking of packets. Current
examples of such technologies are Ethernet, FDDI and Frame Relay. The
network technology can carry ``anyone's'' packets; e.g., TCP/IP packets,
AppleTalk packets, or Novell Netware packets. Using the postal service
analogy, the TCP/IP layer handles ``get the mail from New York to
Washington''; the link layer specifies ``mail from NY to DC should be
packed in shipping containers and loaded onto a semi-trailer bound for
DC.''
and data simultaneously. A related alternative is Switched
Multi-Megabit Data Service (SMDS) (Cavanaugh and Salo
(1992)).
ATM is promising, but we may need radically new
technologies very soon. Current networks are meshes of
optic fiber connected with electronic switches that must
convert light into electronic signals and back again. We are
nearing the physical limits on the throughput of electronic
switches. All-optical networks may be the answer to this
looming bottleneck.
The NSFNET backbone is already using fiber optic
lines. A single fiber strand can support one thousand Gbps
(gigabits per second), or about 22,000 times as much traffic as the current
T-3 data rate. To give some sense of the astonishing capacity
of fiber optics, a single fiber thread could carry all of the
phone network traffic in the United States, including the peak
hour of Mother's Day (Green (1991)).
Yet a typical fiber bundle has 25 to 50 threads (McGarty
(1992)), and the telephone companies have already laid some
two million miles of fiber optic bundles (each being used at
no more than 1/22,000th of capacity) (Green (1991)).
Thus, although switches are cheaper than lines at the rates
that current technology can drive fiber communications, in
fact we should expect communications bandwidth to be much
cheaper than switching before long. Indeed, an electronic
bottleneck is already holding us back from realizing
the seemingly limitless capacity of fiber. When capacity is
plentiful, networks will use vast amounts of cheap bandwidth
to avoid using expensive switches.
``All-optical'' networks may be the way to avoid elec-
tronic switches. In an all-optical network data is broadcast
rather than directed to a specific destination by switches, and
the recipient tunes in to the correct frequency to extract the
intended signal. A fully-functional all-optical network has
been created by Paul Green at IBM. His Rainbow I network
connects 32 computers at speeds of 300 megabits per second,
or a total bandwidth of 9.6 gigabits---200 times as much as
the T-3-based NSFNET backbone (Green (1992)).
Despite their promise, all-optical networks will not soon
eradicate the problem of congestion. Limitations on the
number of available optical broadcast frequencies suggest
that subnetworks will be limited to about 1000 nodes, at
least in the foreseeable future (Green (1991), Green (1992)).
Thus, for an internet of networks it will be necessary to pass
traffic between optical subnetworks. The technologies for
this are much further from realization and will likely create a
congested bottleneck. Thus, although the physical nature of
congestion may change, we see a persistent long-term need
for access pricing to allocate congested resources.
Summary
We draw a few general guidelines for pricing packet-
switching backbones from our review of costs. The physical
marginal cost of sending a packet, for a given line and router
capacity, is essentially zero. Of course, if the network is
congested, there is a social cost of sending a new packet in
that response time for other users will deteriorate.
The fixed costs of a backbone network (about $14 million
per year for NSFNET at present) are dominated by the costs of
links and routers, or roughly speaking, the cost of bandwidth
(the diameter of the pipe). Rational pricing, then, should
focus on the long-run incremental costs of bandwidth and
the short-run social costs of congestion. More bandwidth
is needed when the network gets congested (as indicated
by unacceptable transmission delays). A desirable pricing
structure is one that allocates congested bandwidth and sends
appropriate signals to users and network operators about the
need for expansion in capacity.
3. Congestion problems
Another aspect of the cost of the Internet is congestion cost.
Although congestion costs are not paid for by the providers
of network services, they are paid for by the users of the
service. Time spent by users waiting for a file transfer
is a social cost, and should be recognized as such in any
economic accounting.
The Internet experienced severe congestion problems
in 1987. Even now congestion problems are relatively
common in parts of the Internet (although not currently on
the T-3 backbone). According to Kahin (1992): ``However,
problems arise when prolonged or simultaneous high-end
uses start degrading service for thousands of ordinary users.
In fact, the growth of high-end use strains the inherent
adaptability of the network as a common channel.'' (page
11.) It is apparent that contemplated uses, such as real-
time video and audio transmission, would lead to substantial
increases in the demand for bandwidth and that congestion
problems will only get worse in the future unless there is
a substantial increase in bandwidth:
``If a single remote visualization process were
to produce 100 Mbps bursts, it would take only a
handful of users on the national network to gener-
ate over 1 Gbps load. As the remote visualization
services move from three dimensions to [animation]
the single-user bursts will increase to several hun-
dred Mbps... Only for periods of tens of minutes
to several hours over a 24-hour period are the high-
end requirements seen on the network. With these
applications, however, network load can jump from
average to peak instantaneously.'' Smarr and Catlett
(1992), page 167.
There are cases where this has happened. For example,
during the weeks of November 9 and 16, 1992, some packet
audio/visual broadcasts caused severe delay problems, espe-
cially at heavily-used gateways to the NSFNET, and
in several mid-level networks.
To investigate the nature of congestion on the Internet
we timed the delay in delivering packets to seven different
sites around the world. We ran our test hourly for 37
days during February and March, 1993. Deliveries can
be delayed for a number of reasons other than congestion-
induced bottlenecks. For example, if a router fails then
packets must be resent by a different route. However, in
a multiply-connected network, the speed of rerouting and
delivery of failed packets measures one aspect of congestion,
or the scarcity of the network's delivery bandwidth.
Our results are summarized in Figure 3 and Figure 4; we
present results from only four of the 24 hourly probes.
Figure 3 shows the average and maximum delivery delays by
time of day. Average delays are not always proportional to
distance: the delay from Michigan to New York University
was generally longer than to Berkeley, and delays from
Michigan to Nova Scotia, Canada, were often longer than to
Oslo, Norway.
Figure 3. Maximum and Average Transmission Delays on
the Internet
There is substantial variability in Internet delays. For
example, the maximum and average delays in Figure 3 are
quite different by time of day. There appears to be a large
4pm peak problem on the east coast for packets to New York
and Nova Scotia, but much less for ATT Bell Labs (in New
Jersey).16 The time-of-day variation is also evident in Figure
5, borrowed from (Claffy, Polyzos, and Braun (1992)).17
_________________________________________
16 The high maximum delay for the University of Washington at 4pm is
correct, but appears to be aberrant. The maximum delay was 627 msec; the
next two highest delays (in a sample of over 2400) were about 250 msecs
each. After dropping this extreme outlier, the University of Washington
looks just like UC Berkeley.
17 Note that the Claffy et al. data were for the old, congested T-1 network.
We reproduce their figure to illustrate the time-of-day variation in usage;
the actual levels of link utilization are generally much lower in the current
T-3 backbone.
Figure 4. Variability in Internet Transmission Delays
Figure 5. Utilization of Most Heavily Used Link in Each
Fifteen Minute Interval (Claffy et al. (1992))
Figure 4 shows the standard deviation of delays by time
of day for each destination. The delays to Canada are
extraordinarily variable, yet the delays to Oslo have no more
variability than does transmission to New Jersey (ATT).
Variability in delays itself fluctuates widely across times of
day, as we would expect in a system with bursty traffic, but
follows no obvious pattern.
According to Kleinrock (1992), ``One of the least un-
derstood aspects of today's networking technology is that of
network control, which entails congestion control, routing
control, and bandwidth access and allocation.'' We expect
that if access to Internet bandwidth continues to be provided
at a zero cost, there will inevitably be congestion. Essen-
tially, this is the classic problem of the commons: unless
the congestion externality is priced, there will inevitably be
inefficient use of the common resource. As long as users face
a zero price for access, they will continue to ``overgraze.''
Hence, it makes sense to consider how networks such as the
Internet should be priced.
As far as we can tell this question has received little
attention. Gerla and Kleinrock (1988) have considered some
engineering aspects of congestion control. Faulhaber (1992)
has considered some of the economic issues. He suggests
that ``transactions among institutions are most efficiently
based on capacity per unit time. We would expect the ANS
to charge mid-level networks or institutions a monthly or
annual fee that varied with the size of the electronic pipe
provided to them. If the cost of providing the pipe to an
institution were higher than to a mid-level network... the
fee would be higher.''
Faulhaber's suggestion makes sense for a dedicated line---
e.g., a line connecting an institution to the Internet backbone.
But we don't think that it is necessarily appropriate for
charging for backbone traffic itself. The reason is that the
bandwidth on the backbone is inherently a shared resource---
many packets ``compete'' for the same bandwidth. There
is an overall constraint on capacity, but there is no such
thing as an individual capacity level on the backbone.18
Although we agree that it is appropriate to charge a
flat fee for connection to the network, we also think that
it is important to charge on a per packet basis, at least
when the network is congested. After all, during times of
congestion the scarce resource is bandwidth for additional
packets.19 The problem with this proposal is the overhead,
or, in economics terms, the transactions cost. If one literally
charged for each individual packet, it would be extremely
costly to maintain adequate records. However, given the
astronomical number of packets involved, there should be no difficulty in
basing charges on a statistical sample of the packets sent.
Furthermore, accounting can be done in parallel with routing
using much less expensive computers.
Conversely, when the network is not congested, there
is only a very small marginal cost of sending additional packets
through the routers. It would therefore be appropriate to
charge users a very small price for packets when the system
is not congested.
_________________________________________
18 It may well be that an institution's use of the backbone
bandwidth is more-or-less proportional to the bandwidth of its connection
to the backbone. That is, the size of an institution's dedicated line to
the backbone may be a good signal of its intended usage of the common
backbone.
19 As we have already pointed out the major bottleneck in backbone
capacity is not the bandwidth of the medium itself, but the switch technology.
We use the term bandwidth to refer to the overall capacity of the network.
There has been substantial recent work on designing
mechanisms for usage accounting on the Internet. The In-
ternet Accounting Working Group has published a draft
architecture for Internet usage reporting (Internet Account-
ing: Usage Reporting Architecture, July 9, 1992 draft). ANS
has developed a usage sampling and reporting system it
calls COMBits. COMBits was developed to address the
need to allocate costs between government-sponsored re-
search and educational use, and commercial usage, which is
rapidly growing. COMBits collects aggregate measures of
packet and byte usage, using a statistical sampling tech-
nique. However, COMBits only collects data down to the
network-to-network level of source and destination. Thus,
the resulting data can only be used to charge at the level of the
subnetwork; the local network administrator is responsible
for splitting up the bill, if desired (Ruth and Mills (1992)).20
4. Current Pricing Mechanisms
NSFNET, the primary backbone network of the Internet,
has so far been paid for by the NSF, IBM, MCI and the State
of Michigan.21 However, most organizations
do not connect directly to the NSFNET. A typical university
will connect to its regional mid-level network; the mid-
level maintains a connection to the NSFNET. The mid-level
networks (and a few alternative backbone networks) charge
their customers for access.
_________________________________________
20 COMBits has been plagued by problems and resistance and currently
is used by almost none of the mid-level networks.
21 NSF restricts the use of the backbone to traffic with a research or
educational purpose, as defined in the Acceptable Use Policies.
There are dozens of companies that offer connections
to the Internet. Most large organizations obtain direct con-
nections, which use a leased line that permits unlimited
usage subject to the bandwidth of the line. Some customers
purchase ``dial-up'' service which provides an intermittent
connection, usually at much lower speeds. We will discuss
only direct connections below.
Table 3 summarizes the prices offered to large universi-
ties by ten of the major providers for T-1 access (1.5 Mbps).22
There are three major components: an annual access fee, an
initial connection fee and in some cases a separate charge
for the customer premises equipment (a router to serve as
a gateway between the customer network and the Internet
provider's network).23 The current annualized total cost per
T-1 connection is about $30,000--$35,000.
_________________________________________
22 The fees for some providers are dramatically lower due to public
subsidies.
23 Customers will generally also have to pay a monthly ``local loop''
charge to a telephone company for the line between the customer's site and
the Internet provider's ``point of presence'' (POP), but this charge depends
on mileage and will generally be set by the telephone company, not the
Internet provider.
All of the providers use the same type of pricing: an annual
fee for unlimited access, based on the bandwidth of the
connection. This is the type of pricing recommended by
Faulhaber (1992). However, these pricing schemes provide
no incentives to flatten peak demands, nor any mechanism for
allocating network bandwidth during periods of congestion.
It would be relatively simple for a provider to monitor a
customer's usage and bill by the packet or byte. Monitoring
requires only that the outgoing packets be counted at a single
point: the customer's gateway router.
However, pricing by the packet would not necessarily
increase the efficiency of network service provision, because
the marginal cost of a packet is nearly zero. As we have
shown, the important scarce resource is bandwidth, and thus
efficient prices need to reflect the current state of the network.
Neither a flat price per packet nor even time-of-day prices
would come very close to efficient pricing.
5. Proposals for pricing the network
We think that it is worthwhile to consider how such a
pricing mechanism might work. Obviously, our suggestions
must be viewed as extremely tentative. However, we hope
that an explicit proposal, such as we describe below, can at
least serve as a starting point for further discussion.
We wholeheartedly adopt the viewpoint of Clark (1989)
who says ``It is useful to think of the interconnected [net-
works] as a marketplace, in which various services are of-
fered and users select among these services to obtain packet
transport.'' We take this point of view further to examine
what kind of pricing policy makes sense in the context of a
connectionless, packet-switched network.
There are many aspects of network usage that might be
priced. Cocchi, Estrin, Shenker, and Zhang (1992) make
this point quite clearly and describe how a general network
pricing problem can be formulated and analyzed. However,
we will analyze only one particular aspect of the general
network pricing problem in this paper: pricing access and
usage of the network backbone.
The backbone has a finite capacity, so if enough packets
are being sent, other packets cannot get through.
Furthermore, as capacity is approached, the quality of ser-
vice deteriorates, imposing congestion costs on users of the
system. How should a pricing mechanism determine who
will be able to use the network at a given time?
6. General observations on pricing
Network engineers tend to take the behavior of the network
users as fixed, and try to adapt the technology to fit this
behavior. Economists tend to take the technology as fixed
and design a resource allocation mechanism that adapts
the users' behavior to the technological limitations of the
network. Obviously these approaches are complementary!
Let us consider some traditional pricing models for net-
work access. One traditional model is zero-priced access.
This is commonly used for highway traffic, for example, and
it suffers from the well-known problem of the commons: if
each user faces a zero price for access, the network
resources tend to become congested.
Most common forms of pricing for network access use
posted prices: a fixed price schedule for different priorities of
access at different times. For example, the post office charges
a fixed price for different priorities of delivery service, and
telephone companies set fixed charges for connections
to different locations at different times of day.
The trouble with posted prices is that they are generally
not sufficiently flexible to indicate the actual state of the
network at a particular time. If, at a point in time, there is
unused capacity, it would be efficient to sell access to the
network at marginal cost, which is presumably close to zero.
Conversely, if the network is at capacity, some users with
high willingness-to-pay may be unable to access the network,
even though other users with lower willingness-to-pay have
access. Pricing by time-of-day helps to achieve an efficient
pattern of usage of network capacity, but it is a rather blunt
instrument to achieve a fully efficient allocation of network
bandwidth.24
7. An ideal but impractical solution
An ``ideal'' model for network access would be a continuous
market in network availability. At each point there would be
a price for access to the network. Users who were willing to
pay the price for delivery of a packet would be given access;
users who weren't would be denied access. The price would
be set so as to achieve an optimal level of congestion.
How should the access price be determined? One mech-
anism would be a ``Walrasian tatonnement.'' A tentative
access price would be set. Users would examine the access
price and see if they wanted access. If the sum of the demands
for access exceeded the network capacity, the price would be
adjusted upward, and so on.
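The adjustment process is simple to state; the sketch below
(our own Python illustration, with a hypothetical linear
demand curve) raises the price until demand no longer exceeds
capacity:

    def tatonnement(demand_at, capacity, price=0.0, step=0.01):
        """Raise the access price until the packets demanded at
        that price no longer exceed capacity."""
        while demand_at(price) > capacity:
            price += step
        return price

    # A hypothetical demand curve: 100 packets at a zero price,
    # declining by 50 packets per unit of price.
    clearing = tatonnement(lambda p: max(0.0, 100 - 50 * p), capacity=60)
    print(round(clearing, 2))   # 0.8: demand just equals capacity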
The trouble with this scheme is that the user has to
observe the current price in order to determine whether or not
he wants access. If the time pattern of usage were completely
predictable, there would be no problem. However, packet
traffic on the Internet is known to be highly ``bursty.''
8. A smart market
One way to alleviate this problem is to use a ``smart market''
for setting the price of network access at different priorities.25
_________________________________________
24 Posted, flat prices have some benefits. First, accounting and billing use
resources too, and their cost may be too high to justify. Second, many planners and
budget officers want predictable prices so they can authorize fixed funding
levels in advance.
25 The term ``smart market'' seems to be due to Vernon Smith. The
version we describe here is a variation on the Vickrey auction.
In a smart market users have only to indicate their maximum
willingness-to-pay for network access. We will refer to this
maximum willingness to pay as the user's ``bid'' for network
access. The router notes the bid attached to each packet and
admits all packets with bids greater than some cutoff value.
We depict the determination of the cutoff priority value
in Figure 6. The staircase depicted is simply a demand curve---
it indicates how many packets there are at each different
bid.
Figure 6. Demand and supply for network bandwidth.
We take the capacity of the network to be fixed, and we
indicate it by a vertical line in Figure 6. In the case depicted
the demand curve intersects the supply curve at price 8.
Hence, this is the price charged to all users---even those who
have packets with higher bids.
Note that the bid price can be interpreted as a priority
price, since packets with higher bids automatically have
higher priority in the sense that they will be admitted before
packets with lower bids. Note how this is different from
priority-pricing by, say, the post office. In the post-office
model you pay for first-class mail even if there is enough
excess capacity that second-class mail could move at the
same speed. In the smart market described here, a user pays
at most his bid.
The smart market has many desirable features. It is
obvious from the diagram that the outcome is the classic
supply-equals-demand level of service. The equilibrium
price, at any point in time, is the bid of the marginal
user. Each infra-marginal user is charged this price, so each
infra-marginal user gets positive consumer surplus from his
purchase.
The major difference from the textbook demand and
supply story is that no iteration is needed to determine the
market-clearing price---the market is cleared as soon as the
users have submitted their bids for access.26 This mechanism
can also be viewed as a Vickrey auction where the n highest
bidders gain access at the n + 1st highest price bid.27
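To make the allocation rule concrete, here is a minimal sketch
(our own Python illustration, not a deployed mechanism): admit
the highest bids up to capacity, and charge every admitted
packet the highest excluded bid.

    def smart_market(bids, capacity):
        """Allocate 'capacity' packet slots among bids (each bid
        is a willingness-to-pay). All admitted packets pay one
        uniform price: the highest rejected bid, as in a Vickrey
        auction."""
        ranked = sorted(bids, reverse=True)
        admitted = ranked[:capacity]
        # With spare capacity the price falls to zero (marginal cost).
        price = ranked[capacity] if len(ranked) > capacity else 0.0
        return admitted, price

    # Seven packets bid for five slots; the five highest bids win
    # and each pays the sixth-highest bid.
    admitted, price = smart_market([12, 9, 8, 8, 7, 5, 3], capacity=5)
    print(admitted, price)   # [12, 9, 8, 8, 7] 5

Note that each admitted packet pays no more than its bid, which
is consistent with the incentive property discussed next.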
We have assumed that the bid-price set by the users
accurately reflects the true willingness-to-pay. One might
well ask whether users have the correct incentives to reveal
this value: is there anything to be gained by trying to ``fool''
the smart market? It turns out that the answer is ``no.'' It
can be shown that it is a dominant strategy in the Vickrey
auction to bid your true value, so users have no incentive to
misprepresent their bids for network access. By the nature of
the auction, you are assured that you will never be charged
_________________________________________
26 Of course, in real time operation, one would presumably cumulate
demand over some time interval. It is an interesting research issue to
consider how often the market price should be adjusted. The bursty nature
of Internet activity suggests a fairly short time interval. However, if users
were charged for packets, it is possible that the bursts would be dampened.
27 Waldspurger, Hogg, Huberman, Kephart, and Stornetta (1992) de-
scribe some (generally positive) experiences in using this kind of ``second-
bid'' auction to allocate network resources. However, they do not examine
network access itself, as we are proposing here.
more than this amount and normally you will be charged
much less.
9. Remarks about the smart market solution
Here we consider several aspects of using efficient prices for
packet access to the Internet.
Who sets the bids?
We expect that bids would be set by three parties:
the local administrator who controls access to the net, the
user of the computer, and the computer software itself.
An organization with limited resources, for example, might
choose low bid prices for all sorts of access. This would mean
that they may not have access during peak times, but still
would have access during off peak periods. Normally, the
software program that uses the network would have default
values for service---e-mail would be lower than telnet, telnet
would be lower than audio, and so on. The user could
override these default values to express his own preferences---
if he were willing to pay for the increased congestion during
peak periods.
Note that this access control mechanism only guarantees
relative priority, not absolute priority. A packet with a
high bid is guaranteed access sooner than a low bid, but no
absolute guarantees of delivery time can be made.28 Rejected
packets would be bounced back to the users, or be routed to
a slower network.
_________________________________________
28 It is hard to see how absolute guarantees can be made on a connection-
less network.
Partial congestion
In our discussion we have taken the network capacity to
be exogenously given. However, it is easy to extend the
mechanism to the case where an additional packet creates
congestion for other packets, but does not entirely exclude
them. To do this, we simply need to use an upward-sloping
marginal cost/supply curve, rather than a vertical one. We
still solve for the same intersection of supply and demand.
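A minimal variant of the earlier sketch (again ours, with a
hypothetical marginal cost schedule) admits packets, highest
bids first, so long as each bid covers the marginal congestion
cost of one more packet:

    def smart_market_congestible(bids, marginal_cost):
        """Variant with an upward-sloping supply curve:
        'marginal_cost(k)' is the social cost of admitting the
        k-th packet. Admit bids, highest first, while each bid
        covers the cost of one more packet."""
        ranked = sorted(bids, reverse=True)
        admitted = []
        for bid in ranked:
            if bid < marginal_cost(len(admitted) + 1):
                break
            admitted.append(bid)
        # Price at the intersection of demand and marginal cost.
        price = marginal_cost(len(admitted))
        return admitted, price

    # Hypothetical example: congestion cost rises with load.
    admitted, price = smart_market_congestible(
        [12, 9, 8, 8, 7, 5, 3], lambda k: 0.5 * k)
    print(admitted, price)   # [12, 9, 8, 8, 7, 5] 3.0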
Offline accounting
If the smart market system is used with the sampling system
suggested earlier, the accounting overhead doesn't have to
slow things down much since it can be done in parallel. All
the router has to do is to compare the bid of a packet with the
current value of the cutoff. The accounting information on
every 1000th packet, say, is sent to a dedicated accounting
machine that determines the equilibrium access price and
records the usage for later billing.29 Such sampling would
require changes in current router technology, however. The
NSFNET modified some routers to collect sampled usage
data; the cost of the monitoring system is significant.
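The kind of sampled accounting we have in mind is sketched
below (ours, in Python, with an arbitrary one-in-a-thousand
sampling rate): only every thousandth packet is recorded, and
the counts are scaled back up when usage is estimated for
billing.

    from collections import defaultdict

    SAMPLE_RATE = 1000   # record one packet in every thousand

    class SampledAccounting:
        """Estimate per-user byte counts from a packet sample, so
        that billing can run on a cheap machine in parallel with
        routing."""
        def __init__(self):
            self.packets_seen = 0
            self.sampled_bytes = defaultdict(int)

        def observe(self, user, packet_bytes):
            self.packets_seen += 1
            if self.packets_seen % SAMPLE_RATE == 0:
                self.sampled_bytes[user] += packet_bytes

        def estimated_bytes(self, user):
            # Scale the sample back up to an estimate of the total.
            return self.sampled_bytes[user] * SAMPLE_RATE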
Network stability
Adding bidding for priority to the routing system should
help maintain network stability, since the highest priority
packets should presumably be the packets sent between
routers that indicate the state of the network. These network
``traffic cops'' could displace ordinary packets so as to get
information through the system as quickly as possible.
_________________________________________
29 We don't discuss the mechanics of the billing system here. Obviously,
there is a need for COD, third-party pricing, and other similar services.
Routing
As we have mentioned several times, the Internet is a connec-
tionless network. Each router knows the final destination of a
packet, and determines, from its routing tables, the best
way to get from the current location to the next.
These routing tables are updated continuously to reflect
failed links and new nodes; they indicate the current
topology of the network, but not the congestion on its
links. Indeed, there is no standard measurement for
congestion available on the current NSFNET T-3 network.
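For concreteness, the following sketch (ours, in Python; real
routing protocols are considerably more elaborate) computes a
table of next hops by running Dijkstra's algorithm over a map
of links with the kind of static link costs described in
footnote 5:

    import heapq

    def routing_table(links, source):
        """For each destination reachable from 'source', find the
        neighbor to forward packets to. 'links' maps a node to a
        dictionary of {neighbor: static_link_cost}."""
        dist = {source: 0.0}
        first_hop = {}
        frontier = [(0.0, source)]
        while frontier:
            cost, node = heapq.heappop(frontier)
            if cost > dist[node]:
                continue   # a stale queue entry
            for neighbor, link_cost in links.get(node, {}).items():
                new_cost = cost + link_cost
                if new_cost < dist.get(neighbor, float("inf")):
                    dist[neighbor] = new_cost
                    # Inherit the first hop, unless at the source.
                    first_hop[neighbor] = first_hop.get(node, neighbor)
                    heapq.heappush(frontier, (new_cost, neighbor))
        return first_hop

    # A fragment echoing the example in the text: every
    # destination from "UM" is reached via Cleveland.
    links = {"UM": {"Cleveland": 1},
             "Cleveland": {"UM": 1, "NewYork": 1},
             "NewYork": {"Cleveland": 1, "JVNCnet": 1},
             "JVNCnet": {"NewYork": 1}}
    print(routing_table(links, "UM"))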
Currently, there is no prioritization of packets: all packets
follow the same route at a given time. However, if each packet
carried a bid price, as we have suggested, this information
could be used to facilitate routing through the Internet. For
example, packets with higher bids could take faster routes,
while packets with lower bids could be routed through slower
links.
The routers could assign access prices to each link in
the net, so that only packets that were ``willing to pay'' for
access to that link would be given access. Obviously this
description is very incomplete, but it seems likely that having
packets bid for access will help to distribute packets through
the network in a more efficient way.
Capacity expansion
It is well-known that optimal prices send the correct signals
for capacity expansion, at least under constant or decreasing
returns to scale. That is, if an optimally priced network
generates sufficient revenue to pay the cost of new capacity,
then it is appropriate to add that capacity. It appears from our
examination of the cost structure of the Internet that constant
returns to scale is not a bad approximation, at least for small
changes in scale. Hence, the access prices we have described
should serve as useful guides for capacity expansion.
Distributional aspects
The issue of pricing the Internet is highly politicized. Since
the net has been free for many years, there is a large
constituency that is quite opposed to paying for access. One
nice feature of smart market pricing is that low-priority
access to the Internet (such as e-mail) would continue to
have a very low cost. Indeed, with relatively minor public
subsidies to cover the marginal resource costs, it would be
possible to have efficient pricing with a price of close to zero
most of the time, since the network is usually not congested.
If there are several competing carriers, the usual logic of
competitive bidding suggests that the price for low-priority
packets should approach marginal cost---which, as we have
argued, is essentially zero. In the plan that we have outlined,
the high-priority users would end up paying most of the costs
of the Internet.
In any case, our discussion has focused on obtaining an
efficient allocation of scarce network resources conditional
on the pre-existing distribution of budgetary resources. Noth-
ing about efficient pricing precludes the government from
providing cash subsidies for some groups of users to allow
them to purchase network access.
10. Role of public and private sector
As we have seen, current private providers of access to
the Internet generally charge for the ``size of the pipe''
connecting users to the net. This sort of pricing is probably
not too bad from an efficiency point of view since the ``size
of the pipe'' is more-or-less proportional to contemplated
peak usage.
The problem is that there is no pricing for access to
the common backbone. In December of 1992, the NSF an-
nounced that it would stop providing direct operational funding
for the ANS T-3 Internet backbone. It is not yet clear when
this will actually happen, although the cooperative agree-
ment between NSF and Merit has been extended through
April 1994. According to the solicitation for new proposals,
the NSF intends to create a new very high speed network
to connect the supercomputer centers, which would not be
used for general purpose traffic. In addition, the NSF would
provide funding to regional networks that they could use to
pay for access to backbone networks like ANSnet, PSInet
and Alternet.
The NSF plan is moving the Internet away from the
``Interstate'' model, and towards the ``turnpike'' model.
The ``Interstate'' approach is for the government to develop
the ``electronic superhighways of the future'' as part of an
investment in infrastructure. The ``turnpike'' approach is that
the private sector should develop the network infrastructure
for Internet-like operations, with the government providing
subsidies to offset the cost of access to the private networks.
Both funding models have their advantages and disad-
vantages. But we think that an intermediate solution is
necessary. The private sector is probably more flexible and
responsive than a government bureaucracy. However, the
danger is that competing network standards would lead to an
electronic Tower of Babel. It is important to remember that
turnpikes have the same traffic regulations as the Interstates:
there is likely a role for the government in coordinating
standards setting for network traffic. In particular, we think
that it makes sense for the government, or some industry
consortium, to develop a coherent plan for pricing Internet
traffic at a packet level.
A pricing standard has to be carefully designed to contain
enough information to encourage efficient use of network
bandwidth, as well as containing the necessary hooks for
accounting and rebilling information. A privatized network
is simply not viable without such standards, and work should
start immediately on developing them.
Glossary30
Asynchronous Transfer Mode (ATM)
A method for the dynamic allocation of bandwidth using
a fixed-size packet (called a cell). ATM is also known as
"fast packet".
backbone
The top level in a hierarchical network. Stub and transit
networks which connect to the same backbone are guaranteed
to be interconnected. See also: stub network, transit network.
bandwidth
Technically, the difference, in Hertz (Hz), between the
highest and lowest frequencies of a transmission channel.
However, as typically used, the amount of data that can be
sent through a given communications circuit.
Bitnet
An academic computer network that provides interactive
electronic mail and file transfer services, using a store-
and-forward protocol, based on IBM Network Job Entry
protocols. Bitnet-II encapsulates the Bitnet protocol within
IP packets and depends on the Internet to route them.
circuit switching
A communications paradigm in which a dedicated com-
munication path is established between two hosts, and on
which all packets travel. The telephone system is an example
of a circuit switched network.
connectionless
The data communication method in which communica-
tion occurs between hosts with no previous setup. Packets
between two hosts may take different routes, as each is
independent of the other. UDP is a connectionless protocol.
Gopher
A distributed information service that makes available
hierarchical collections of information across the Internet.
_________________________________________
30 Most of these definitions are taken from Malkin and Parker (1992).
Gopher uses a simple protocol that allows a single Gopher
client to access information from any accessible Gopher
server, providing the user with a single "Gopher space" of
information. Public domain versions of the client and server
are available.
header
The portion of a packet, preceding the actual data, con-
taining source and destination addresses, and error checking
and other fields. A header is also the part of an electronic mail
message that precedes the body of a message and contains,
among other things, the message originator, date and time.
hop
A term used in routing. A path to a destination on a
network is a series of hops, through routers, away from the
origin.
host
A computer that allows users to communicate with other
host computers on a network. Individual users communicate
by using application programs, such as electronic mail,
Telnet and FTP.
internet
While an internet is a network, the term "internet" is usu-
ally used to refer to a collection of networks interconnected
with routers.
Internet
(note the capital "I") The Internet is the largest internet in
the world. It is a three-level hierarchy composed of backbone
networks (e.g., NSFNET, MILNET), mid-level networks,
and stub networks. The Internet is a multiprotocol internet.
Internet Protocol (IP)
The Internet Protocol, defined in STD 5, RFC 791, is
the network layer for the TCP/IP Protocol Suite. It is a
connectionless, best-effort packet switching protocol.
National Research and Education Network (NREN)
The NREN is the realization of an interconnected gigabit
computer network devoted to High Performance Computing
and Communications.
packet
The unit of data sent across a network. "Packet" is a generic
term used to describe a unit of data at all levels of the protocol
stack, but it is most correctly used to describe application
data units.
packet switching
A communications paradigm in which packets (mes-
sages) are individually routed between hosts, with no previ-
ously established communication path.
protocol
A formal description of message formats and the rules
two computers must follow to exchange those messages. Pro-
tocols can describe low-level details of machine-to-machine
interfaces (e.g., the order in which bits and bytes are sent
across a wire) or high-level exchanges between application
programs (e.g., the way in which two programs transfer a
file across the Internet).
route
The path that network traffic takes from its source to
its destination. Also, a possible path from a given host to
another host or destination.
router
A device which forwards traffic between networks. The
forwarding decision is based on network layer information
and routing tables, often constructed by routing protocols.
Switched Multimegabit Data Service (SMDS)
An emerging high-speed datagram-based public data
network service developed by Bellcore and expected to be
widely used by telephone companies as the basis for their
data networks.
T1
An AT&T term for a digital carrier facility used to
transmit a DS-1 formatted digital signal at 1.544 megabits
per second.
T3
A term for a digital carrier facility used to transmit a
DS-3 formatted digital signal at 44.736 megabits per second.
Transmission Control Protocol (TCP)
An Internet Standard transport layer protocol defined in
STD 7, RFC 793. It is connection-oriented and stream-
oriented, as opposed to UDP.
References
Anonymous (1986). StrataCom, Inc. introduces `packetized
voice system'. Communications Week, 2.
Cavanaugh, J. D., and Salo, T. J. (1992). Internetworking
with ATM WANs. Tech. rep., Minnesota Supercomputer
Center, Inc.
Claffy, K. C., Polyzos, G. C., and Braun, H.-W. (1992).
Traffic characteristics of the T1 NSFNET backbone. Tech.
rep. CS92-252, UCSD. Available via Merit gopher in
Introducing the Internet directory.
Clark, D. (1989). Policy routing in Internet protocols.
Tech. rep. RFC1102, M.I.T. Laboratory for Computer
Science.
Cocchi, R., Estrin, D., Shenker, S., and Zhang, L. (1992).
Pricing in computer networks: Motivation, formula-
tion, and example. Tech. rep., University of Southern
California.
Faulhaber, G. R. (1992). Pricing Internet: The efficient
subsidy. In Kahin, B. (Ed.), Building Information
Infrastructure. McGraw-Hill Primis.
Gerla, M., and Kleinrock, L. (1988). Congestion control in
interconnected LANs. IEEE Network, 2(1), 72--76.
Green, P. E. (1991). The future of fiber-optic computer
networks. IEEE Computer, ?, 78--87.
Green, P. E. (1992). An all-optical computer network:
Lessons learned. Network Magazine, ?
Huber, P. W. (1987). The Geodesic Network: 1987 Report
on Competition in the Telephone Industry. U.S. Gov't
Printing Office, Washington, DC.
Kahin, B. (1992). Overview: Understanding the NREN. In
Kahin, B. (Ed.), Building Information Infrastructure.
McGraw-Hill Primis, NY.
Kleinrock, L. (1992). Technology issues in the design
of NREN. In Kahin, B. (Ed.), Building Information
Infrastructure. McGraw-Hill Primis.
Krol, E. (1992). The Whole Internet. O'Reilly & Associates,
Inc., Sebastopol, CA.
Lynch, D. C. (1993). Historical evolution. In Internet System
Handbook. Addison Wesley, Reading, MA.
Malkin, G., and Parker, T. L. (1992). Internet users' glossary.
Tech. rep., Xylogics, Inc. and University of Texas.
Mandelbaum, R., and Mandelbaum, P. A. (1992). The
strategic future of the mid-level networks. In Kahin, B.
(Ed.), Building Information Infrastructure. McGraw-
Hill Primis.
McGarty, T. P. (1992). Alternative networking architectures:
Pricing, policy, and competition. In Kahin, B. (Ed.),
Building Information Infrastructure. McGraw-Hill
Primis.
Roberts, L. G. (1974). Data by the packet. IEEE Spectrum,
XX, 46--51.
Ruth, G., and Mills, C. (1992). Usage-based cost recovery
in internetworks. Business Communications Review,
xx, 38--42.
Smarr, L. L., and Catlett, C. E. (1992). Life after Internet:
Making room for new applications. In Kahin, B. (Ed.),
Building Information Infrastructure. McGraw-Hill
Primis.
Waldspurger, C. A., Hogg, T., Huberman, B. A., Kephart,
J. O., and Stornetta, W. S. (1992). Spawn: A dis-
tributed computational economy. IEEE Transactions
on Software Engineering, 18(2), 103--117.