When Ethernet was originally designed, computers were fairly slow and networks were rather small. Therefore, a network running at 10 Mbps was more than fast enough for just about any application. Nowadays, computers are several orders of magnitude faster and networks consist of hundreds or thousands of nodes, and the demand for bandwidth placed on the network is often far more than it can provide.
When the load on a network is so high that it results in large numbers of collisions and lost frames, the productivity of the users is greatly reduced. This is called congestion, and it can be solved in one of two ways: either scrap the entire network currently in place and replace it with a faster one, or install an Ethernet switch to create multiple smaller networks.
At the most basic level, a switch can be thought of as a bridge with many ports and low latency. The reasoning behind switches is one of "divide and conquer": we divide the network into many small networks, thus conquering the congestion problem. It is also worth noting that each of the new subnetworks has the full Repeater Count available to it. Switches are thus useful in segmenting networks that exceed the maximum number of repeaters allowed.
In a "traditional" Ethernet network, there is 10 Mbps of bandwidth available. This bandwidth is shared among all of the users of the network who wish to transmit or receive information at any one time. In a large network, there is a very high probability that several users will make demands on the network at the same time, and if these demands occur faster than the network can handle them, the network eventually seems to slow to a crawl for all users.
An excellent analogy for this situation is a road between two points. At 4:00 a.m. there is usually very little traffic, and anyone who does travel can reach his or her destination quickly and unimpeded. At 8:00 a.m., however, rush hour hits, and suddenly there are more cars on the road than can be handled. Traffic slows to a crawl, and a trip that would take ten minutes at 4:00 a.m. now takes over an hour to complete. Neither the road nor the cars have changed; there are simply too many cars using the road at one time to be accommodated properly.
Now, let's look at a redesign of the highway having the congestion problem. If we built a dedicated road from each person's driveway directly to his or her destination, then the rush-hour congestion problem would go away. Each person would be able to drive at full speed at all times, and all of the other traffic would be irrelevant. It is impossible to set this up in a highway system; however, it is possible to do it in a network.
Switches allow us to create a "dedicated road" between individual users (or small groups of users) and their destination (usually a file server). They work by providing many individual ports, each running at 10 Mbps, interconnected through a high-speed backplane. Each frame, or piece of information, arriving on any port carries a Destination Address field which identifies where it is going. The switch examines each frame's Destination Address field and forwards the frame only to the port attached to the destination device; it does not send it anywhere else. Several of these conversations can pass through the switch at one time, effectively multiplying the network's bandwidth by the number of conversations happening at any particular moment.
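This forwarding decision can be modeled in a few lines of code. The sketch below is an illustrative model only, not any vendor's implementation; the port numbers and addresses are made up. It keeps a table mapping addresses to ports, learned from the source address of each arriving frame, and forwards a frame only to the port where its destination lives, flooding to all ports when the destination is still unknown.

```python
# Minimal model of a switch's forwarding logic (illustrative only).
class Switch:
    def __init__(self):
        self.mac_table = {}  # station address -> port number

    def receive(self, frame, in_port):
        """Learn which port the sender is on, then decide where to forward."""
        self.mac_table[frame["source"]] = in_port        # learn the sender's port
        out_port = self.mac_table.get(frame["destination"])
        if out_port is None:
            # Unknown destination: flood to every port except the arrival port.
            return "flood"
        return out_port  # send only to the one port that needs the frame

switch = Switch()
switch.receive({"source": "A", "destination": "B"}, in_port=1)  # B unknown: flood
switch.receive({"source": "B", "destination": "A"}, in_port=2)  # A known: port 1
```

Because each frame leaves on only one port, traffic between stations A and B never consumes bandwidth on the ports serving other users, which is what allows several conversations to run at full speed simultaneously.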
Another analogy which is useful for understanding how switches increase the speed of a network is to think in terms of plumbing. For the sake of argument, assume that every PC on a network is a sink, and a 10 Mbps connection is a 1/2-inch pipe. Normally, a 1/2-inch pipe will allow enough water to flow for one or two sinks to have enough water pressure to fill quickly. However, putting more sinks on that same 1/2-inch pipe will drop the water pressure enough that eventually the sinks take a very long time to fill.
To allow all sinks to fill quickly, we can connect the source of water to a larger (6-inch) pipe, and then connect each sink to the 6-inch pipe via its own 1/2-inch pipe. This guarantees that all sinks will have enough water pressure to fill quickly. See Figure One for an image of this concept.
"Fat Pipe" Increasing Performance
Most network operating systems now use a "Client-Server" model. Here, we have many network users, or "clients" accessing a few common resources, or "servers." If we look at our previous highway example, an analogy would be to have a hundred roads for individuals all converging at two or three common points. If these common points are the same width as our individual roads, then they cause a major bottleneck, and the end result is exactly the same as if everyone was sharing one small road. This totally defeats the purpose of building all the individual roads in the first place.
The solution is to widen the road to our shared resource so that it can support the full load of most or all of the individual roads at once. In other words, we increase the bandwidth to our servers while connecting our clients at 10 Mbps. This is usually referred to as a High Speed Backbone. In networking slang, it is commonly called a "Fat Pipe."
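A quick back-of-the-envelope check shows why the fat pipe must be wider than the individual connections. The worst-case demand on the server link is simply the sum of the client links that can be active at once; the client count below is an assumed figure for illustration, not a measurement.

```python
# Worst-case aggregate demand on the server's "fat pipe" (illustrative numbers).
client_speed_mbps = 10        # each client has a dedicated 10 Mbps connection
simultaneous_clients = 10     # assumed number of clients active at one time
backbone_speed_mbps = 100     # Fast Ethernet backbone to the server

peak_demand = client_speed_mbps * simultaneous_clients
print(peak_demand)                         # 100 Mbps of potential demand
print(backbone_speed_mbps >= peak_demand)  # the backbone can just absorb it
```

With ten active clients the 100 Mbps backbone exactly matches peak demand; more simultaneous clients than that, and the server link once again becomes the bottleneck.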
A high speed backbone network is usually run at a speed of 100 Mbps, and is used to interconnect all of the servers and switches on the network. A diagram of such a setup is shown in Figure Two.
Switching With High Speed Backbone
In Figure Two we have two File Servers, two "Power Users" who regularly transfer large files, and ten "Undemanding Users" who use the network mainly for an occasional print job or e-mail message (due to space constraints, only two of the "Undemanding Users" are actually shown in the drawing).
This layout is splitting our overall network into four subnetworks. From left to right these subnetworks are outlined in Red, Green, Blue, and Violet. The Red subnetwork is a shared 10 Mbps setup, with all of the "Undemanding Users" sharing 10 Mbps of bandwidth. The Green and Blue subnets are dedicated 10 Mbps connections, sometimes referred to as "Private Ethernets." Here, each of the two power users has 10 Mbps of bandwidth dedicated to his or her machine, and this bandwidth is not shared with anyone else. Finally, we have our Violet subnetwork. This one is a Fast Ethernet setup running at a speed of 100 Mbps, and the bandwidth is shared by the two servers.
This is the most common way of setting up a switched network, and it almost always results in an optimal price/performance ratio. We limit the amount of expensive Fast Ethernet hardware by using it only where the load justifies its cost, while leveraging an existing investment in 10 Mbps equipment in the less demanding parts of the network. Because a 10/100 switch is a fairly costly piece of equipment, each of its ports is also expensive, so ports are dedicated to individual users only where that user's load justifies it. Finally, we can set up shared subnetworks which place anywhere from two to 100 users on one switch port.
The key to success in setting up a high-speed backbone network such as this is to balance the demand of the users properly, maximizing the number of users on each port while still maintaining high performance. As a rule of thumb, about 12 users per switch port is the most that still allows excellent performance, although this number will vary widely depending on each user's usage patterns.
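The balancing act reduces to simple arithmetic: divide a port's 10 Mbps by the average bandwidth each user actually consumes. The per-user demand below is an assumed figure chosen for illustration; in practice it must come from monitoring real traffic.

```python
# How many users can reasonably share one 10 Mbps switch port? (illustrative)
port_capacity_mbps = 10
avg_demand_per_user_mbps = 0.8   # assumed average load per "undemanding" user

users_per_port = int(port_capacity_mbps / avg_demand_per_user_mbps)
print(users_per_port)  # 12 users, close to the rule of thumb above
```

An average demand of 0.8 Mbps per user yields 12 users per port; heavier users shrink that number quickly, which is why the power users in Figure Two get ports of their own.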
There are three basic types of switches on the market at this time. They all perform the same basic function of dividing a large network into smaller subnetworks; however, the manner in which they work internally is different. The types are known as Store and Forward, Cut Through, and Hybrid. A Store and Forward switch reads in the entire frame, verifies its checksum, and only then forwards it; this adds latency but prevents damaged frames from being propagated. A Cut Through switch begins forwarding a frame as soon as it has read the Destination Address field; this gives the lowest latency, but errored frames are passed along. A Hybrid switch normally operates in Cut Through mode, but monitors the error rate on each port and falls back to Store and Forward operation when that rate climbs too high.
Please note that the above three switch types only apply when the source and destination ports are running at the same speed. If the switch has to perform a speed conversion, as is usually the case when using a High Speed Backbone, then the switch must operate in a Store and Forward mode, and the difference between the switch types becomes a non-issue.
Designing a switched Ethernet network is actually a fairly straightforward process. The first step is to evaluate the traffic you expect each user or group of users to generate. For example, if all of your application programs will reside on the file servers, then the network will experience a very heavy load as users start, use, and quit various programs. In such a case, you should limit the number of users per switch port as much as possible, and possibly consider connecting each user directly to a switch port. On the other hand, if most of your applications will reside on each PC's hard drive, then you will need to evaluate how often each user will use a network server to save or retrieve data, and how big the transferred files will be.
Analysis of the network will most likely find that you have a large number of users who will not place a heavy load on the network, and a smaller number of users who will. We then group the Undemanding Users together on hubs and connect each hub to a switch port. Our more demanding users will usually be connected directly to the switch, or, if they are on hubs, fewer of them will share each switch port than on the Undemanding User portion.
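The grouping step described above can be sketched as a simple threshold test. The user names, their loads, and the 2 Mbps cutoff below are all assumptions chosen for illustration; a real design would substitute measured figures.

```python
# Split users into heavy (own switch port) and light (shared hub) groups.
HEAVY_THRESHOLD_MBPS = 2.0   # assumed cutoff; tune from real traffic measurements

users = {                    # hypothetical average demand per user, in Mbps
    "alice": 6.5, "bob": 0.3, "carol": 0.2, "dave": 4.0, "erin": 0.5,
}

dedicated = [u for u, load in users.items() if load >= HEAVY_THRESHOLD_MBPS]
shared_hub = [u for u, load in users.items() if load < HEAVY_THRESHOLD_MBPS]
print(dedicated)   # heavy users each get a private switch port
print(shared_hub)  # light users share one hub on one switch port
```

Here the two heavy users would each receive a dedicated port (like the Power Users in Figure Two), while the three light users would share a hub hanging off a single port.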
One part of the design which requires serious thought is the choice of technology for the High Speed Backbone. If the backbone is expected to carry a heavy load, then the preferred technology is FDDI, followed closely by 100VG-AnyLAN; both are deterministic in nature and allow greater network utilization than a Fast Ethernet backbone, which is best used where utilization is somewhat lower. Please consult our pages on each technology to help you decide on a high speed backbone type.
One point which should be kept in mind regarding the design of a switched network is that traffic patterns vary by user and time. Therefore, just taking a "snapshot" of network usage patterns may lead to the wrong conclusions and result in a design which is not optimal. It is always advisable to monitor usage patterns over a period of several days to a week to decide how to allocate network bandwidth optimally. Also, in almost all cases, a process of trial and error may be required to fully optimize the design.