Transformations of Networking — Part 3
- Transformations of Networking — Part 1
- Transformations of Networking — Part 2
- Transformations of Networking — Part 3 (this article)
- Transformations of Networking — Part 4
- Transformations of Networking — Part 5
According to Wikipedia, the first switch was designed by Kalpana in 1989. In 1994 the company was purchased by Cisco Systems, and we all know who Cisco is now.
Network switching went through a few evolutions in the ten years from 1990 to 2000; spanning tree (STP), link aggregation and VLANs are the ones I’ll touch on. These are largely incremental technical innovations that are useful on their own, but not groundbreaking. What is really useful is that, put together, these protocols allowed network administrators to build flexible, fault-tolerant, stable networks.
The problem with broadcast traffic
Even though switches had reduced the network overhead by sending network traffic only down those ports which require it, there was another type of traffic getting ready to rear its ugly head. As with many things, a network configuration that works well on a small scale often falls apart as networks get larger and larger.
A broadcast is a packet designed to be sent to every host on the network. A unicast packet is designed to be sent to one host only.
Broadcast traffic is by definition destined for all hosts on a network, so when a switch receives a broadcast packet, it floods every other port with it, and every host on the network must examine the incoming packet and decide whether to act upon it or throw it away.
The problem here is with scalability. With fewer than 200 hosts on a network, the broadcasts are usually not a problem, but as networks get larger and larger, the sheer volume of broadcast traffic can actually reduce total usable network bandwidth as the switch fills server uplinks, router and firewall uplinks, switch uplinks and even client PC interfaces. In addition, processing broadcast traffic requires CPU attention on client PCs and servers, and this can place a high (and hard to troubleshoot) load on these devices.
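As a rough sketch of the flooding behaviour described above (the names and data structures here are illustrative, not any real switch’s implementation):

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

def out_ports(mac_table, in_port, dst_mac, all_ports):
    """Return the ports a frame is sent out of: a known unicast goes to
    the one learned port; a broadcast (or unknown unicast) is flooded
    out every port except the one it arrived on."""
    if dst_mac != BROADCAST and dst_mac in mac_table:
        return [mac_table[dst_mac]]
    return [p for p in all_ports if p != in_port]

# On a 1000-port network, every single broadcast costs 999 transmissions,
# and every host has to inspect the frame:
print(len(out_ports({}, 1, BROADCAST, list(range(1000)))))  # 999
```

The unicast path touches one link; the broadcast path touches all of them, which is why the cost grows with the size of the broadcast domain.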
Common but Inefficient Approaches to Scaling IP Networks
When a site runs out of IP addresses, sometimes a network administrator will simply create a larger IP range:
- Original: 10.10.10.0 255.255.255.0 yields 254 usable addresses
- Larger: 10.10.10.0 255.255.254.0 yields 510 usable addresses
- Largest: 10.10.10.0 255.255.252.0 yields 1022 usable addresses
If a site actually filled up one of these larger IP spaces, there could be a huge amount of broadcast traffic bumping around the network.
Sometimes network administrators will just overlay IP ranges on top of each other within the same LAN:
- Original: 10.10.10.0 255.255.255.0 yields 254 usable addresses
- Secondary: 192.168.10.0 255.255.255.0 yields 254 usable addresses
The problem with both cases is that every host on the network still receives the broadcast traffic, and this traffic is forwarded across network trunks, to servers and to end clients.
A (much) Better Approach to Scaling IP Networks
It is much better to create VLANs, as this keeps the broadcasts trapped within each VLAN and reduces the overall load on the network. VLANs always require a router to transfer traffic between them; this can sometimes be done on the switch itself (if it is a Layer 3 switch), or through a router or server.
VLANs — 802.1Q
A VLAN, or Virtual LAN, on a switch keeps hosts in one VLAN from seeing data traffic (generally broadcast traffic) from other VLANs. This provided a huge advantage in security, and in reducing the overall effect of broadcast traffic. At about this time, networks consisting of hubs and switches alone were experiencing slowdowns due to large broadcast domains.
Once switches were able to identify a port as being part of a particular VLAN, network administrators wanted to be able to send a VLAN to another switch which would have a corresponding port on that same VLAN.
To do this, the 802.1Q protocol was devised, which adds a tag (a small header) identifying the VLAN to traffic between switches when more than one VLAN is carried on a single physical interface. Thus the concept of VLAN tagging was born, and network administrators were able to create VLAN trunks, which are simply regular interfaces configured to tag the traffic from the different VLANs carried on them.
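The tag itself is just four bytes inserted into the Ethernet header. A minimal sketch of building one in Python (the field layout follows 802.1Q; the helper name is my own):

```python
import struct

def dot1q_tag(vlan_id, priority=0):
    """Build the 4-byte 802.1Q tag: the 0x8100 TPID, then 3 bits of
    priority, 1 DEI bit (left at 0 here), and a 12-bit VLAN ID."""
    if not 0 <= vlan_id <= 4095:
        raise ValueError("VLAN ID is a 12-bit field")
    tci = (priority << 13) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

print(dot1q_tag(10).hex())  # 8100000a
```

The 12-bit VLAN ID field is why a trunk can carry at most 4094 usable VLANs (0 and 4095 are reserved).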
See Cisco’s guide on configuring 802.1Q trunks.
The ability to send VLANs across switches, sometimes across campuses, gave network administrators the flexibility to create VLANs based on whatever distinction fit the business best: not only physical location, but other organizational lines such as departments or security zones.
Spanning Tree Protocol — 802.1D
For the first time, STP allowed network administrators to design fault-tolerance into their networks.
In the diagram above, with three switches (potentially even in the same wiring closet, but not necessarily), it is possible to provide some level of fault-tolerance to clients. The diagram describes a situation where one of the links has failed; STP detects the failed link and opens up a previously blocked port to re-establish communication.
This scenario can happen if a cable actually breaks, a fiber transceiver fails, or someone accidentally unplugs the wrong port. It happens.
This diagram describes a total switch failure; for example, if the root switch fails in this scenario, STP will detect the lost connection and open the previously blocked port so at least the two secondary switches maintain communication. A good network administrator could anticipate likely failure modes and place a backup server on one of these other switches to ensure that network clients always have services. I’ll talk more in-depth about this in another article.
Problems and Limitations of STP
First, if a switch detects a network change (a neighboring switch failure, a cable failure), it goes into ‘learning mode’, during which the switch will not pass traffic on any ports. All the other switches in the LAN also go into learning mode, forcing a total network outage; although temporary, this is very inconvenient, as STP convergence can take over a minute.
To combat this slow convergence time, the IEEE created RSTP (Rapid STP) which can (when properly configured) converge in under a second.
Learning mode and end-user inconvenience
Secondly, whenever a port comes online, STP puts that individual port into learning mode. This can sometimes cause end clients to fail at DHCP, or at least take a very long time (over a minute) to get an address. The real problem is that this test is largely inappropriate: what are the chances of finding a loop at an end user’s port?
To ensure that end users are not inconvenienced by what is largely an inappropriate test, Cisco devised a solution. portfast, configured on an interface, tells STP to immediately place that port into forwarding mode, effectively bypassing the learning phase of STP. However, as it is still possible (and it has happened) for an end user to create a loop, Cisco provided a second configuration parameter to go along with it: bpduguard. Bpduguard is a very simple feature; it watches for BPDU frames, and if it sees one, it shuts down the port immediately. That way, if an end user inadvertently creates a loop, it doesn’t affect the rest of the network.
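The interaction of the two features can be sketched as a toy state machine (this is an illustration of the logic, not Cisco’s implementation):

```python
def on_link_up(port):
    """With portfast, a newly-up port forwards immediately;
    without it, the port sits in STP's learning phase first."""
    port["state"] = "forwarding" if port["portfast"] else "learning"
    return port

def on_frame(port, frame_type):
    """bpduguard: the moment a BPDU arrives on a guarded port,
    shut it down so a user-made loop can't spread."""
    if frame_type == "bpdu" and port["bpduguard"]:
        port["state"] = "err-disabled"  # Cisco's term for the shut-down state
    return port

p = {"portfast": True, "bpduguard": True}
print(on_link_up(p)["state"])        # forwarding
print(on_frame(p, "bpdu")["state"])  # err-disabled
```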
For more info on portfast and bpduguard, see the Cisco documentation here.
Link Aggregation — 802.3ad
Link aggregation allows network administrators to bundle multiple network connections into a single, logical, virtual network interface. This approach gives network trunks greater bandwidth, and higher availability as there are more links to accidentally trip over or cut with scissors. It happens.
Link aggregation works by identifying a traffic flow, most commonly by IP address. Each device remembers which physical interface a particular flow is assigned to, and sends that flow’s traffic down that interface. In this way, traffic between two hosts doesn’t benefit from the bandwidth increase; however, many hosts to a server, or many hosts to many hosts, will approach 50/50 load-sharing on a two-link bundle.
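A toy version of that flow-to-link assignment (a simplified stand-in for a real switch’s hash, using an XOR of the source and destination addresses):

```python
import ipaddress

def pick_link(src_ip, dst_ip, n_links=2):
    """XOR the two addresses and take the result modulo the link count,
    roughly how a src-dst IP hash pins each flow to one member link."""
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return (s ^ d) % n_links

# The same host pair always hashes to the same link, which keeps
# packets in order but caps any single flow at one link's bandwidth:
print(pick_link("10.10.10.5", "10.10.10.1"))  # 0, every time
print(pick_link("10.10.10.5", "10.10.10.2"))  # 1 -- a different flow, different link
```

Because the mapping is deterministic per flow, bandwidth gains only appear once many distinct flows are spread across the bundle.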
See Cisco’s guide on configuring LACP between switches.
LACP isn’t only for network hardware; it can provide the same fault-tolerance and bandwidth scaling for servers. This can be very useful for virtualized servers, as they often have high bandwidth requirements and very low tolerance for NIC failure. In this case, a network administrator works with the local system administrator to configure the server using the server software, and the local switch is configured with matching parameters.
This diagram shows a server with two physical interfaces bundled into a single logical interface. This configuration allows the server and network administrators to double bandwidth to the server, make seamless NIC changes (replacing server NICs, changing cabling infrastructure), and provide high availability in the event of a NIC or cable failure.
Above is a diagram showing an advanced configuration that I won’t get into right now; suffice it to say that the Cisco 3750 series switch allows more fault-tolerance to be built into the network than a standard L2 or L3 switch.
All of this technology is still in use today, and the configurations are still valid for many sites. While this period in networking saw the first Layer 3 switches, these devices didn’t come into common use until after the year 2000 — the next stop in the transformation of networking.