You just had that new 10Gbps+ WAN link installed. Woo!
It finally gives you a massive link between your key locations and promises to deliver amazing performance.
To give it a nice trial run, you find a file on a server on the opposite side and pull it over. Seems a little sluggish.
So you break out another laptop and pull the same file. Seems MUCH better. What's the deal? Why would one machine perform better than another on the same network link?
There are a few reasons this could happen on fast connections. Let's look at them one at a time.
1. Your 10Gbps isn't really 10Gbps.
Hate to break it to you, but your link may not be what you think.
If a network link looks like a duck (has blinky lights), sounds like a duck (your provider says it's 10Gbps), and quacks like a duck (has an OK speedtest.net result), that DOESN'T make it a duck!
There are a million factors that go into delivering a reliable high-speed link. Chances are that one in that million went wrong and you have less than the promised bandwidth. Skip the standard, biased, go-to speed test and measure the link yourself.
To do so, check out iPerf3. It's a free throughput-testing tool that can blast the heck out of your link. Install it on both ends of the connection, run one endpoint as a server and the other as a client, and bam! Check out that link throughput. If you get less throughput than expected, make sure the machine you are using has a NIC capable of the speeds you are testing - for example, you need a 10Gbps card on each end in order to test 10Gbps. Also, use the -w option to set the TCP window (socket buffer) size on both the sender and the receiver to something that will fill the network. To size it, calculate the bandwidth-delay product: bandwidth x round-trip latency = the amount of data that must be in flight at one time. More on this in another article.
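To make that bandwidth-delay math concrete, here is a minimal sketch that computes the BDP for a link and suggests a window size to hand to iperf3's -w flag. The 10Gbps and 30ms figures are example assumptions, not measurements from any real network, and `<server>` is a placeholder for your own endpoint.

```python
# Sketch: bandwidth-delay product (BDP) = bandwidth * round-trip latency.
# A TCP window at least this large is needed to keep the pipe full.

def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Return the BDP in bytes (divide by 8 to convert bits to bytes)."""
    return bandwidth_bps * rtt_seconds / 8

# Example assumption: a 10 Gbps link with 30 ms of round-trip latency.
bdp = bdp_bytes(10e9, 0.030)
window_mb = bdp / (1024 * 1024)

print(f"BDP: {bdp:,.0f} bytes (~{window_mb:.1f} MB)")
# <server> is a placeholder for your iperf3 server's address.
print(f"Suggested test: iperf3 -c <server> -w {int(window_mb) + 1}M")
```

Note how quickly the number grows: at 10Gbps, even 30ms of latency means roughly 36MB must be in flight at once, far beyond most default window sizes.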
If you see low throughput with packet loss, check out your network to see if there are any local indicators of packet loss (ethernet errors or interface discards). If your stuff is clean, it may be time to call the ISP.
2. TCP Congestion Algorithm
Just because you have bandwidth does not mean that you will have throughput.
TCP is responsible for sending data from one point to another reliably. However, not all TCPs are created equal. There are factors in the protocol that govern how much data can be sent at once, received at once, or recovered at once.
In order to play fair and avoid selfishly taking up all the network bandwidth for itself, TCP has a governor called the congestion window (working alongside the receiver's advertised window) that controls the amount of data put on the wire at one time. At the start of a connection, TCP has no idea how much throughput the network between the endpoints can handle. So, many algorithms will send data cautiously and will consider the amount of perceived packet loss and network latency in order to determine how much they will send at one time.
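That cautious start-then-back-off behavior can be sketched in a few lines. This is a toy AIMD (additive increase, multiplicative decrease) model in the NewReno style, not a real TCP implementation; the round counts and loss point are invented for illustration.

```python
# Toy model: cwnd (in segments) doubles each round-trip during slow start,
# grows by one segment per RTT in congestion avoidance, and is halved
# when the sender perceives a packet loss.

def simulate_cwnd(rounds: int, ssthresh: int = 64, loss_at: int = 20) -> list[int]:
    """Return the congestion window per round-trip for a simple AIMD model."""
    cwnd = 1
    history = []
    for rtt in range(rounds):
        history.append(cwnd)
        if rtt == loss_at:        # perceived loss: multiplicative decrease
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh
        elif cwnd < ssthresh:     # slow start: double every RTT
            cwnd *= 2
        else:                     # congestion avoidance: +1 segment per RTT
            cwnd += 1
    return history

print(simulate_cwnd(30))
```

The takeaway for long fat networks: after a single loss, that "+1 segment per RTT" crawl back up takes many round trips, and high latency makes each round trip expensive. That is exactly why the choice of algorithm matters.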
Some operating systems may use a TCP congestion algorithm that is better designed to recover from loss, latency, or congestion. For example, older versions of Windows used CTCP or NewReno instead of CUBIC, the current default in both Windows and Linux, which recovers from packet loss on fast links a little better.
Look, this is a massive topic with a ton of detail. In another article, we will discuss the minutiae of this topic, but for now, just know that some OSes ship TCP algorithms that can use high-bandwidth, high-latency networks (long, fat networks) much more efficiently than others.
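If you want to see which algorithm your own machine is using, Linux exposes it under /proc. This is a Linux-specific sketch; on other operating systems the files simply won't exist and the script says so rather than failing.

```python
# Sketch: report the active and available TCP congestion control
# algorithms on Linux. These /proc paths do not exist on Windows/macOS.
from pathlib import Path

def read_sysctl(name: str) -> str:
    """Read a net.ipv4 sysctl value, or report that it is unavailable."""
    path = Path("/proc/sys/net/ipv4") / name
    return path.read_text().strip() if path.exists() else "(not available)"

print("Active algorithm:    ", read_sysctl("tcp_congestion_control"))
print("Available algorithms:", read_sysctl("tcp_available_congestion_control"))
```

On most modern Linux boxes the active algorithm will be cubic; other modules (such as bbr) may be listed as available but not loaded.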
3. Multiple Connections vs One Connection
Some file transfers utilize one TCP connection. They will send all data over a single channel.
This approach absolutely has its bottlenecks, as we have discussed to this point. However, some file transfers will make use of several TCP connections working in tandem. Instead of having a single connection perform at 10Gbps, they may use 10 connections at 1Gbps each. This is easier for an operating system to attain and sustain, and will likely achieve better performance than a single-connection approach.
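The multi-connection idea looks roughly like this sketch: split the payload into byte ranges, move the pieces concurrently, and stitch them back together. Real tools do this over separate TCP connections (iperf3's -P flag, multi-part download clients); here threads stand in for those connections just to keep the example self-contained.

```python
# Sketch: divide a payload into ranges and "transfer" each range on its
# own worker, mimicking a multi-connection file transfer.
from concurrent.futures import ThreadPoolExecutor

def split_ranges(total: int, parts: int) -> list[tuple[int, int]]:
    """Divide [0, total) into contiguous (start, end) ranges."""
    step = -(-total // parts)  # ceiling division
    return [(i, min(i + step, total)) for i in range(0, total, step)]

def transfer(payload: bytes, parts: int = 10) -> bytes:
    """Fetch each range concurrently, then reassemble in order."""
    def fetch(r: tuple[int, int]) -> bytes:
        start, end = r
        return payload[start:end]  # a real client would issue a ranged request here
    with ThreadPoolExecutor(max_workers=parts) as pool:
        chunks = pool.map(fetch, split_ranges(len(payload), parts))
    return b"".join(chunks)

data = bytes(range(256)) * 100
assert transfer(data) == data
print("reassembled", len(data), "bytes across 10 workers")
```

The win isn't magic: each connection keeps its own congestion window, so one lost packet only halves one lane instead of the whole highway.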
Make sure you understand which approach your transfer is using - and tune it if possible.
Not all file transfers are equal. Achieving a comfortable amount of throughput involves a good highway (network bandwidth), a fast car (TCP algorithms that can use the bandwidth) and in some cases, multiple lanes (more than one connection).
Hopefully this gives you a few ideas when you experience disappointing performance from that sparkly new network connection!
Author Profile - Chris Greer is a Chief Packet Head for Packet Pioneer LLC and a Wireshark Network Analyst. Chris regularly assists companies around the world in tracking down the source of network and application performance problems using a variety of protocol analysis and monitoring tools including Wireshark. Chris also delivers training and develops technical content for Wireshark and several packet analysis vendors.