A network switch is a device that expands a network by providing additional ports in a subnet so that more computers can connect. Switches are cost-effective, flexible, relatively simple, and easy to deploy. So what role do they play in the data center?
When a switch interface receives more traffic than it can forward, the switch must either buffer the excess or drop it. Buffering is usually triggered by mismatched interface speeds, sudden traffic bursts, or many-to-one traffic patterns.
The most common cause of buffering is a sudden many-to-one traffic pattern, often called incast. For example, consider an application built on a cluster of server nodes. If one node requests data from all the other nodes at once, the replies arrive at the switch simultaneously and converge on the requester's port. If the switch does not have enough egress buffer, it drops some of the traffic, and the resulting retransmissions by lower-level protocols increase application latency. A sufficiently large buffer prevents that packet loss and the latency it causes.
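The incast scenario above can be sketched with a toy simulation. All the numbers here (sender count, burst size, buffer depth, drain rate) are assumptions chosen for illustration, not figures from any particular switch:

```python
# Toy sketch of incast: many senders reply at once into one egress port.
# The port drains DRAIN_RATE packets per tick; the buffer holds at most
# BUFFER_PKTS packets, and anything beyond that is tail-dropped.

N_SENDERS = 16        # cluster nodes replying simultaneously (assumed)
BURST_PKTS = 8        # packets per reply burst (assumed)
BUFFER_PKTS = 64      # egress buffer capacity in packets (assumed)
DRAIN_RATE = 10       # packets forwarded per tick (assumed)

queue = 0
dropped = 0
forwarded = 0

arriving = N_SENDERS * BURST_PKTS   # 128 packets hit the port at tick 0
for tick in range(32):
    space = BUFFER_PKTS - queue
    accepted = min(arriving, space)
    dropped += arriving - accepted  # excess is tail-dropped
    queue += accepted
    arriving = 0                    # a single synchronized burst
    sent = min(queue, DRAIN_RATE)
    queue -= sent
    forwarded += sent

print(f"forwarded={forwarded} dropped={dropped}")
# → forwarded=64 dropped=64
```

Half the burst is lost because the 128-packet burst arrives faster than a 64-packet buffer plus the drain rate can absorb it; doubling the buffer in this toy model would eliminate the drops entirely.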
Most modern data center switching platforms address this by sharing the switching buffer. With a fixed scheme, buffer pool space is allocated to specific ports; with a shared scheme, any port can draw on a common pool, and how the buffer is shared varies widely between vendors and platforms. Some manufacturers sell switches designed for specific environments: deep-buffer switches, for instance, suit many-to-one transmission scenarios, while in environments where traffic can be evenly distributed, large buffers at the switch level are unnecessary.
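One way to picture the shared scheme is a small per-port reserve backed by a common pool that any port may borrow from during a burst. This is a hypothetical sketch of the idea, not any vendor's actual allocation algorithm:

```python
# Hypothetical sketch of a dynamically shared switch buffer: each port
# keeps a small dedicated reserve, and a common pool absorbs bursts on
# whichever port needs it.

class SharedBuffer:
    def __init__(self, ports: int, reserve_per_port: int, shared_pool: int):
        self.reserve = {p: reserve_per_port for p in range(ports)}
        self.shared = shared_pool

    def enqueue(self, port: int, pkts: int) -> int:
        """Return how many packets fit; the remainder would be dropped."""
        accepted = min(pkts, self.reserve[port])
        self.reserve[port] -= accepted
        remaining = pkts - accepted
        borrowed = min(remaining, self.shared)  # overflow borrows from the pool
        self.shared -= borrowed
        return accepted + borrowed

buf = SharedBuffer(ports=4, reserve_per_port=10, shared_pool=100)
# A 50-packet burst on port 0 uses its 10-packet reserve plus 40 shared.
print(buf.enqueue(0, 50))   # → 50
print(buf.shared)           # → 60
```

The appeal of sharing is visible here: a port with a tiny 10-packet reserve still absorbs a 50-packet burst, because the pool is sized for the whole switch rather than for the worst case on every port.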
A huge switch buffer means the network drops almost no traffic, but it also means increased latency: data held in the buffer must wait before being forwarded. Some network administrators prefer smaller buffers and let the application or protocol handle the dropped traffic. The right answer is to understand your application's traffic patterns and choose a switch that fits those needs.
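The latency cost of a deep buffer follows directly from arithmetic: a packet at the back of a full queue waits for everything ahead of it to be serialized onto the wire, so worst-case added delay is buffered bytes divided by link rate. The buffer size and port speed below are assumed values for illustration:

```python
# Worst-case queuing delay added by a deep buffer when it is full:
#   delay = buffered_bits / link_rate
BUFFER_BYTES = 12 * 1024 * 1024   # 12 MB egress queue (assumed)
LINK_RATE_BPS = 10 * 10**9        # 10 Gb/s port (assumed)

delay_s = (BUFFER_BYTES * 8) / LINK_RATE_BPS
print(f"{delay_s * 1000:.2f} ms")   # → 10.07 ms
```

Ten milliseconds is an eternity for latency-sensitive traffic inside a data center, which is exactly why some administrators accept drops from a shallow buffer rather than queuing delay from a deep one.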