Windows Vista: How to fix network bottlenecks?


For the most part Vista’s sluggish network performance isn’t a problem – I just assumed that Vista’s beefy new network stack was responsible, and made a mental note to fix it at some point. And then forgot.

But recently I encountered a really frustrating problem. Using Remote Desktop Protocol (RDP) to connect from a Vista Enterprise machine to a Windows 2003 server, performance was so woeful that I started to worry there was a problem with the server.

As it happens, my fears about the server were unfounded, but the thought that Vista’s networking stack was to blame turned out to be correct. The Next Generation TCP/IP stack in Windows Vista supports something called Receive Window Auto-Tuning. This has nothing to do with the RDP “window” you see on the screen – it refers to a TCP buffer which TCP/IP clients use to ensure smooth transmission.

Essentially, the TCP receive window is the amount of data that can be sent in one chunk before the sender has to wait for an acknowledgement to come back from the machine at the other end. It’s one of the trickiest things to optimise in TCP/IP because you need to balance throughput against reliability – if you transmit too much data in one go and there’s an error in the data flow, the whole lot has to be sent again. Windows XP was originally tuned for dial-up connections (lots of errors, so very small chunks of data were sent), but this caused performance problems on high-speed broadband networks.

In XP SP2, Microsoft increased the receive window value for better performance on broadband, but it's still not optimal for many situations.

Vista is supposed to improve on this situation: it has a new feature called “Receive Window Auto-Tuning” which constantly monitors bandwidth capacity and latency, and adjusts the TCP window on the fly for any given connection. It also enables TCP window scaling – by default the maximum TCP window size is 65,535 bytes, but window scaling allows a client to advertise that it can receive a bigger window than this, which is designed to prevent TCP window bottlenecks in high-bandwidth environments. Vista’s maximum advertised TCP window is 16MB.
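To see why that ceiling matters, here’s a rough back-of-the-envelope example (the 100ms latency figure is purely an assumption for illustration): a sender can only have one receive window’s worth of data in flight per round trip, so with the old 65,535-byte maximum and a 100ms round trip, throughput tops out at about 65,535 bytes every 0.1 seconds – roughly 655KB/s, or a little over 5Mbit/s – no matter how fast the link is. Window scaling is what lifts that ceiling on fast, high-latency connections.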

Therefore, in any given situation a Vista machine will typically pull in much more network data at once than a Windows XP machine, which can result in spikes in network traffic. This isn’t necessarily a problem in itself, but it does increase the importance of actually using TCP/IP QoS (Quality of Service), which is installed and enabled by default in Vista.

The problem with Vista's new-fangled network stack

All this automatic tuning sounds great in theory, but the problem is that some clients don’t support TCP window scaling, or support it but don’t have it enabled. Some firewall products don’t support it either. In either scenario the result is dropped packets, and that affects network performance horrendously – your traffic drops into a black hole, never to be seen again.

So if you’re experiencing excessive network lag on your Vista machine, especially compared to non-Vista machines, it might be worth disabling auto-tuning. Do this by opening an administrative Command Prompt (right-click, Run as administrator) and typing the following command:

netsh interface tcp set global autotuninglevel=disabled

You may also need to type in:

netsh interface tcp set global rss=disabled
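If you want to see what the stack is currently set to – before or after making these changes – the same netsh context can report its global settings:

netsh interface tcp show global

The output should list the receive window auto-tuning level along with Vista’s other global TCP parameters.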

The changes take effect straight away, with no reboot needed. Bear in mind that this is a global change, so it may really be worth your while to sit down and nut through your network's QoS settings to get things running happily without disabling auto-tuning.
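And if you later want Vista’s default behaviour back, the same command accepts other auto-tuning levels – setting it to normal re-enables auto-tuning:

netsh interface tcp set global autotuninglevel=normal

(Roughly speaking, there are also more conservative levels, restricted and highlyrestricted, which limit how far the window can grow rather than freezing it entirely – they may be worth a try if disabling auto-tuning outright feels too drastic.)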


Source: APCMag