Bandwidth is a misnomer

SearchNetworking.com expert Loki Jorgenson zeroes in on defining bandwidth and reveals what you need to know to measure it properly.

People often refer to bandwidth as if there were some precise definition in the context of networks. In fact, the word "bandwidth" originally described a range of frequencies or wavelengths in electromagnetism and had no substantial use in networking. Through misuse over time and semantic mutation, bandwidth now often refers to the rate at which data can be transmitted along a network path.

Most people would agree that "bandwidth matters," usually meaning the raw capacity of some part of their network. But as pointed out in an earlier article (Bandwidth scam), focusing on it rarely solves problems. The reason, in part, is that performance is a function of the weakest link in the chain, and the usual problems are broken links, so adding stronger links elsewhere (i.e. more bandwidth) rarely improves end-to-end performance.

But still, people often ask the question: How do I measure my bandwidth? Or the available bandwidth (after traffic)? How big is my pipe? The short answer is: It depends on what you mean and what you want. The long answer is, well, longer.

Let's suppose bandwidth (in some sense yet to be defined) does matter and you want to know what it is and how to determine what you've got. Let's define our terms and see if we can make some progress toward that objective.

A non-comprehensive list of terms currently in common use might include:

  • Bandwidth
  • Capacity
  • Streaming rate
  • Bitrate
  • Throughput
  • Goodput
  • Transfer rate

with a list of possible modifiers that might include "link," "end-to-end," "maximum achievable," "peak," "bulk transfer," and "steady state." So, for example, how exactly does one distinguish between "maximum achievable end-to-end bandwidth" and "bulk transfer capacity"?

Let's consider some handy definitions from CAIDA, the Cooperative Association for Internet Data Analysis, provided as background for the Bandwidth Estimation Workshop in 2003:

Capacity: maximum throughput physically possible on a particular link media.
End-to-end capacity: minimum capacity on an end-to-end path, corresponding to the capacity of the "narrow link."
Available bandwidth: maximum unused throughput on an end-to-end path, given current cross-traffic, corresponding to the capacity of the "tight link."
Bulk transfer capacity: maximum throughput that the end-to-end network can provide to a single congestion-aware transport layer connection.
Achievable throughput: maximum throughput that an application can obtain, depending on the transmission mechanism (e.g. TCP).
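
One quick way to see the "narrow link" idea is as a minimum over the link capacities along a path. A sketch in Python, with invented link speeds:

    # End-to-end capacity is set by the narrow link (link speeds are invented).
    link_capacities_mbps = [1000, 100, 1000, 622]    # hypothetical hops on a path
    end_to_end_capacity = min(link_capacities_mbps)  # 100 Mbit/s: the narrow link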

And since most of these definitions rely on the term "throughput," let's also define it, and the somewhat related term "goodput" too:

Throughput: the number of bits per unit time transmitted by the source host.
Goodput: the number of bits per unit time received by the destination host, less duplicates. In other words, goodput represents the usable portion of the throughput: the transmitted data less any bits lost, duplicated, or re-transmitted.
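
To make the distinction concrete, here is a toy calculation with invented numbers (a real measurement would take these counts from the hosts' transport statistics):

    # Toy illustration of throughput vs. goodput (all numbers are invented).
    transfer_time_s = 10.0
    bits_sent = 100_000_000         # everything the source transmitted
    bits_retransmitted = 4_000_000  # re-sent by TCP; counted in throughput only
    bits_duplicated = 1_000_000     # duplicates discarded at the receiver
    bits_lost = 2_000_000           # lost in transit, never delivered

    throughput = bits_sent / transfer_time_s
    goodput = (bits_sent - bits_retransmitted - bits_duplicated
               - bits_lost) / transfer_time_s

    print(f"throughput: {throughput / 1e6:.1f} Mbit/s")  # 10.0 Mbit/s
    print(f"goodput:    {goodput / 1e6:.1f} Mbit/s")     # 9.3 Mbit/s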

While these definitions can be debated endlessly, a number of observations can be made about their nature. In particular, it is important to further clarify the significance of the following:

  • End-to-end, in contrast with link-by-link
  • The effect of cross-traffic, compared with the empty network path
  • The influence of the application and/or transport layer relative to lower layers
  • The overhead of different protocols at different layers
  • The impact of loss due to congestion or dysfunction versus a "clean" transmission
  • Peak or maximum versus sustained or average rates of data transfer
  • Actual rate of data transfer relative to potential rate

Further distinctions can be made that define:

  • Where the end-to-end network begins/ends -- does it include the NIC and driver of the end hosts, for example;
  • Whether application/OS effects include the size of packets in terms of the cost of headers, either in processing or in data transfer overhead;
  • Whether the transient nature of any effects, such as loss or cross-traffic, are averaged away, preserved as some sort of variation, or ignored in favor of a maximum or minimum value;
  • The importance of duplex -- symmetry of the measure (in both directions) and the meaning of one-way versus two-way capacities;
  • The effects of deliberate mechanisms such as QoS or traffic shapers.

So that pretty much defines the vocabulary and the contexts of importance. Now you need to figure out which ones apply to your context. Once you have narrowed your focus, you may be getting close to a unique definition for "bandwidth" (or whatever label you want to use).

And, you might also be ready to select a specific metric and methodology for measuring it. Consider the various ways that you might determine the value that is of importance to you:

  1. Theoretical estimates – in other words, you might actually believe what is written in your network diagrams and logically project the capacity at any given point, or from one point to another;
  2. Point measures – use a tool like MRTG to read and capture the packet counts at router or switch interfaces to determine the rate at which bits are passing through those devices;
  3. End-to-end measures – use one of a variety of methodologies to measure the capacity of specific end-to-end paths:
    1. Bulk file transfers – use a typical TCP application like an optimized FTP client to move a certain amount of data from one host to another;
    2. Flooding – use a software (or hardware) tool like iPerf to create a steady-state stream between two hosts (a minimal sketch of this approach follows this list);
    3. Active probing – use a software tool like PathRate or Spruce that infers end-to-end network performance from the behaviors of transmitted packet probes;
    4. Passive inference – use a software tool like Wren to estimate end-to-end bandwidth from passively monitored packet behaviors.
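
As promised in item 3.2, here is a minimal sketch of a flooding-style probe in Python. It is illustrative only -- the port, chunk size, and duration are arbitrary assumptions, and a real tool like iPerf handles socket tuning, timing, and reporting far more carefully:

    # Minimal flooding-style throughput probe (run serve() on one host,
    # flood() on the other). All parameters below are arbitrary assumptions.
    import socket
    import time

    PORT = 5001        # arbitrary test port
    CHUNK = 64 * 1024  # 64 KB per send
    DURATION = 10      # seconds to flood

    def serve():
        """Receiver: count arriving bytes and report the achieved rate."""
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            received, start = 0, time.monotonic()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.monotonic() - start
            print(f"received {received * 8 / elapsed / 1e6:.1f} Mbit/s")

    def flood(host):
        """Sender: push zero-filled chunks as fast as TCP accepts them."""
        payload = bytes(CHUNK)
        with socket.create_connection((host, PORT)) as conn:
            deadline = time.monotonic() + DURATION
            sent = 0
            while time.monotonic() < deadline:
                conn.sendall(payload)
                sent += CHUNK
        print(f"offered {sent * 8 / DURATION / 1e6:.1f} Mbit/s")

Because this rides on a single TCP connection, what it measures is closer to CAIDA's "bulk transfer capacity" than to raw link capacity -- congestion control, window sizes, and loss all shape the result.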

As you can imagine, each methodology has some advantages and disadvantages, depending on what definition of "bandwidth" you settled on. For example, you may be most interested in peak Layer 2 transmit/receive frame rates at a specific device interface -- simple receive or transmit rates over time, possibly while flooding your network, can give you the point capacity. Or you may be most interested in the typical sustainable rate that a particular application can maintain across a specific end-to-end path -- in which case averaged measures, cross-traffic, NICs, and loss are very applicable. Or you may be concerned about the WAN link provided by your ISP and whether or not you are getting what you paid for.
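
For the point-measure case, the arithmetic is simple: sample an interface's octet counter twice and divide the delta by the elapsed time, which is essentially what MRTG does via SNMP. A sketch with invented counter readings:

    # Interface rate from two octet-counter samples (readings are invented;
    # MRTG reads counters such as ifInOctets/ifOutOctets over SNMP).
    t1, octets1 = 0.0, 1_500_000_000    # first sample: time (s), counter
    t2, octets2 = 300.0, 1_875_000_000  # second sample, five minutes later

    rate_bps = (octets2 - octets1) * 8 / (t2 - t1)
    print(f"average rate: {rate_bps / 1e6:.1f} Mbit/s")  # 10.0 Mbit/s

    # A real implementation must also handle counter wrap (32-bit counters
    # roll over quickly on fast links) and sample often enough to catch peaks.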

Two of the CAIDA definitions stand out as the most useful -- "end-to-end capacity" and "available bandwidth". Except for the influence of cross-traffic in the latter, they might both be reduced to a common set of "typical" constraints:

  • End-to-end
  • Layer 1-3 effects (e.g. no application or transport layer effects)
  • Applicable to a clean network path (i.e. do not include degradation due to loss or other network dysfunction)
  • Do not include protocol, frame/packet size, or other limiting effects
  • One-way
  • Instantaneous (peak) value

Software like iPerf or NetPerf will likely be the most useful to you. They require clients at each end but provide a clear view of packet-level performance.
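
If you script such tests, a thin wrapper can capture the results programmatically. A hedged sketch: it assumes iperf3 is installed, a server is already running on the far end (iperf3 -s), and the JSON field names match recent iperf3 releases:

    # Run an iperf3 client against a waiting server and extract summary rates.
    import json
    import subprocess

    def measure(server_host, seconds=10):
        result = subprocess.run(
            ["iperf3", "-c", server_host, "-t", str(seconds), "-J"],  # -J: JSON
            capture_output=True, text=True, check=True,
        )
        report = json.loads(result.stdout)
        sent = report["end"]["sum_sent"]["bits_per_second"]
        recv = report["end"]["sum_received"]["bits_per_second"]
        return sent, recv

    sent, recv = measure("192.0.2.10")  # placeholder address (TEST-NET-1)
    print(f"sent {sent / 1e6:.1f} Mbit/s, received {recv / 1e6:.1f} Mbit/s")

The gap between the sent and received figures gives a rough view of the throughput/goodput distinction discussed earlier.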

In the end though, bandwidth remains in the eye of the beholder. And you need to know what you are beholding in order to measure it properly.

NOTE: For more precise definitions of bandwidth, consult the Authoritative Dictionary of IEEE Standards Terms, Seventh Edition, ANSI/IEEE Std. 100 (ISBN 0738126012) -- amongst its 17 definitions, not one directly relates to the maximum rate of data transfer on packet networks. Bandwidth is still a misnomer (thanks to Roger Freeman for providing the pointer).

References:
CAIDA Taxonomy of Performance Tools
BWEst 2003
MRTG
iPerf
NetPerf
Bandwidth estimation tools
IETF draft: Defining network capacity


Chief Scientist for Apparent Networks, Loki Jorgenson, PhD, has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. Also, he acts as Adjunct Professor of Mathematics at Simon Fraser University where he co-founded the Center for Experimental and Constructive Mathematics (CECM). He has headed research in numerous academic projects from high-performance computing to digital publishing, working closely with private sector partners and government. At Apparent Networks Inc., Jorgenson leads network research in high performance, wireless, VoIP and other application performance, typically through practical collaboration with academic organizations and other thought leaders such as BCnet, Texas A&M, CANARIE, and Internet2. www.apparentnetworks.com
This was first published in June 2006
