I have a friend whose company delivers IT training through remote lab simulations to customers all over the world. He ran into a problem with his newest product: a remote lab that included servers and shared storage in a real-world virtualization setup. All of the users outside the United States were having performance and latency issues.
Application delivery over the Internet is difficult when real-time interaction is required. Most applications are transmitted using TCP/IP, a protocol optimized not for speed but for reliability. TCP uses dynamic windowing, packet sequence ordering, and retransmission requests to ensure traffic is delivered reliably. Because of this design, a connection starts slow and ramps up as long as no packets are dropped. As soon as a packet is dropped and must be retransmitted, the TCP window size shrinks, and effective bandwidth shrinks with it.
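That ramp-up-and-collapse behavior can be sketched in a few lines. The toy simulation below (loosely modeled on TCP Reno's slow start and congestion avoidance; the round numbers, thresholds, and loss schedule are illustrative assumptions, not a faithful model of any real stack) shows how a lossy path keeps the send window from ever reaching the size it attains on a clean one:

```python
# Toy simulation of TCP congestion control: the send window grows
# until a packet is dropped, then collapses, throttling throughput.
# Illustrative only -- not a faithful model of a real TCP stack.

def simulate_window(rounds, loss_rounds, ssthresh=32, max_window=64):
    """Return the congestion window size after each round trip."""
    cwnd = 1
    history = []
    for r in range(rounds):
        if r in loss_rounds:
            # Loss detected: halve the threshold and restart small,
            # roughly what TCP Reno does after a timeout.
            ssthresh = max(cwnd // 2, 2)
            cwnd = 1
        elif cwnd < ssthresh:
            cwnd *= 2                          # slow start: exponential growth
        else:
            cwnd = min(cwnd + 1, max_window)   # congestion avoidance: linear
        history.append(cwnd)
    return history

# A path that drops packets at rounds 6 and 12 never lets the window
# grow as large as a clean path does over the same 15 round trips.
clean = simulate_window(15, loss_rounds=set())
lossy = simulate_window(15, loss_rounds={6, 12})
print(max(clean), max(lossy))  # clean window peaks higher than lossy
```

Every drop resets the window, so on a path with routine loss the connection spends most of its life climbing back up instead of moving data at full speed.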
When TCP/IP traffic is transmitted worldwide, especially over the Internet where it crosses a number of different service providers and their peering points, packets are dropped as a matter of course. Given the nature of the protocol and the situation, bandwidth, performance, and latency will all be poor for an application that requires interactivity. Large organizations have traditionally dealt with this issue by building out a worldwide private Wide Area Network (WAN).
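The combined effect of distance and routine loss can be estimated with the well-known Mathis et al. approximation, throughput ≈ MSS / (RTT · √loss). The MSS, RTT, and loss figures below are illustrative assumptions, chosen only to contrast a domestic path with an international one:

```python
# Back-of-the-envelope TCP throughput bound (Mathis et al.):
#   throughput ~ MSS / (RTT * sqrt(loss_rate))
# The numbers below are illustrative, not measured.

import math

def tcp_throughput_bps(mss_bytes, rtt_seconds, loss_rate):
    """Approximate steady-state TCP throughput in bits per second."""
    return (mss_bytes * 8) / (rtt_seconds * math.sqrt(loss_rate))

# Same 0.1% packet loss, different round-trip times:
domestic = tcp_throughput_bps(1460, 0.040, 0.001)   # ~40 ms RTT
overseas = tcp_throughput_bps(1460, 0.250, 0.001)   # ~250 ms RTT
print(f"domestic ~ {domestic/1e6:.1f} Mb/s, overseas ~ {overseas/1e6:.1f} Mb/s")
```

With identical loss, throughput falls in direct proportion to round-trip time, which is exactly why the overseas lab users suffered while domestic ones did not.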
However, my friend was not delivering applications to members of a large organization; his customers were on the open Internet. He needed faster application performance without installing anything on their PCs or placing special hardware near them, which meant somehow working around the inherent TCP limits of application delivery over the Internet. Beyond faster transmission, it would be even better to compress the application traffic at the head end and expand it at the remote site, further increasing effective bandwidth.
He could have built this himself, but he would have needed a special acceleration server at the head end and another at every Internet delivery location near his customers. Those servers would have to compress data at the head end, expand it at the remote site, and transmit packets over multiple redundant routes to make sure nothing was dropped. And of course the whole system would need to be monitored around the clock.
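The compress-at-the-head-end, expand-at-the-remote-site step is straightforward to sketch with Python's standard `zlib` module. This is a minimal illustration of the idea, not the actual pipeline any acceleration vendor runs, and the sample payload and function names are invented for the example:

```python
# Sketch of head-end compression / remote-site expansion using zlib.
# Text-heavy application traffic (HTML, JSON, screen updates) tends
# to compress well; the exact ratio depends on the payload.

import zlib

def head_end_compress(payload: bytes) -> bytes:
    """Compress application data before it crosses the WAN."""
    return zlib.compress(payload, level=9)

def remote_site_expand(blob: bytes) -> bytes:
    """Expand the data at the delivery location near the user."""
    return zlib.decompress(blob)

# Repetitive markup, like repeated lab-console frames, shrinks a lot.
payload = b"<html><body>lab console frame</body></html>" * 100
wire = head_end_compress(payload)
assert remote_site_expand(wire) == payload   # lossless round trip
print(f"{len(payload)} bytes -> {len(wire)} bytes on the wire")
```

Fewer bytes on the wire means fewer packets exposed to loss, which compounds nicely with the routing and retransmission tricks described above.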
That would have cost my friend too much money. Instead, he found a company that offers this type of acceleration as a service. It already has tens of thousands of servers in data centers worldwide and is already accelerating content for other customers. The kinds of optimization such a service can provide over the Internet include:
– Improving VDI (virtual desktop infrastructure) performance.
– Speeding up end-user applications delivered over HTML or IP.
– Accelerating remote-access Virtual Private Network (VPN) connections.
– Speeding up large file transfers.
– Offloading WAN traffic to the Internet.
If you are experiencing slow application performance over the Internet, it is definitely worth looking for a company that provides this kind of service. Most organizations start with a test run of whichever application is causing the most trouble at the time. A competent local reseller experienced in improving network performance can recommend a solution for you.