The following text is copyright 1998 by Network World, permission is hereby given for reproduction, as long as attribution is given and this notice is included.

The elusive goal of counting

By Scott Bradner
Network World, 3/23/98

Once upon a time when the 'Net was young, people thought they
knew how big it was - at least from a traffic perspective.

Merit, the organization that managed the NSFnet for the National
Science Foundation, used to publish monthly traffic reports. These
reports listed the amount of traffic that entered and exited the NSFnet
backbone at the exchange points with the regional networks.

The Internet of those days primarily consisted of a set of regional data
networks - sort of geographically constrained Internet service
providers - serving customers and using the NSFnet to exchange
traffic among themselves.

This simple Internet architecture meant that the Merit reports gave a
reasonable idea of what was going on. Even then it was hard to
use these reports to determine the pattern of traffic exchange,
since they listed only the traffic entering and exiting the edges
of the NSFnet, not the paths that traffic took through the
backbone.

Those days of a simple Internet are long gone. There is no longer one
backbone, but rather a dozen or more, depending on your definition
of a backbone. ISPs are no longer restricted to specific
territories.

There are many ISP-to-ISP connections, and these links form a semi-
random mesh rather than a clean hierarchy. And the ISPs consider
their traffic statistics to be proprietary information.

So we have no real traffic data, and even if we did, it would be hard
to understand what the traffic patterns imply. For example, if I were
going to send data between two sites on different ISPs in Boston, that
data might never have to leave Boston if the two ISPs are
interconnected locally.

Then again, the traffic might have to go through Washington, D.C.,
if the ISPs interconnect only at the MAE East exchange.
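
To make that concrete, here is a minimal sketch in Python (used purely
for illustration; the ISP names, cities and exchange points are
hypothetical stand-ins, not real peering data) of how the path between
two customers depends on where their ISPs interconnect:

    # Illustrative sketch: hypothetical ISPs, cities and exchange points.
    # Each pair of ISPs maps to the cities where they interconnect.
    interconnections = {
        frozenset(["ISP-A", "ISP-B"]): ["Boston", "Washington DC"],
        frozenset(["ISP-A", "ISP-C"]): ["Washington DC"],  # MAE East only
    }

    def transit_city(src_isp, dst_isp, city):
        """Return the city where traffic between two customers in
        `city` crosses from one ISP to the other."""
        if src_isp == dst_isp:
            return city  # traffic never leaves the source ISP
        points = interconnections.get(frozenset([src_isp, dst_isp]), [])
        if city in points:
            return city  # the ISPs peer locally, so traffic stays put
        # otherwise the traffic detours to a shared exchange point
        return points[0] if points else None

    # Two Boston customers whose ISPs peer locally: traffic stays local.
    print(transit_city("ISP-A", "ISP-B", "Boston"))   # Boston
    # Two Boston customers whose ISPs meet only at MAE East:
    print(transit_city("ISP-A", "ISP-C", "Boston"))   # Washington DC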

That means it is impossible to answer a question that gets asked all the
time: What are the relative traffic loads of the Internet and the public
telephone network?

Because of Federal Communications Commission reporting rules,
there is reasonably good data about what is going on in the phone
network, but nothing more than speculation about the Internet side.

There is a new reason to worry about this inability to understand
just what is going on in the Internet. Some fear that the
company resulting from the WorldCom/MCI merger proposal would
dominate the Internet business.

In the past, MCI has made extravagant claims about the percentage of
Internet traffic that flows through its network - claims that no one
could refute because there was no public data with which to analyze
them. The charges of potential dominance and the defenses of limited
dominance are currently only bluster because there is no public data
to back them up.

It just might be time to figure out a way to get some real information
about what is going on in this infrastructure that every day is
becoming more vital to the world's economic health.

Disclaimer: Harvard's claims are real, not extravagant. In any case, I
developed the above desire for data on my own.