JDM Digital

SPDY: Experimental Protocol for a Faster Web

SPDY (pronounced “SPeeDY”) is a protocol developed by Google for web content delivery. SPDY manipulates web traffic, with particular goals of reducing web page load latency and improving web security.

As part of the “Let’s make the web faster” initiative, Google is experimenting with alternative protocols to help reduce the latency of web pages. One of these experiments is SPDY, an application-layer protocol for transporting content over the web, designed specifically for minimal latency. In addition to a specification of the protocol, Google has developed a SPDY-enabled Google Chrome browser and open-source web server.

In lab tests, Google has observed up to a 64% reduction in page load time. The hope is to engage the open source community to contribute ideas, feedback, code, and test results, to make SPDY the next-generation application protocol for a faster web. More details can be found in Google’s Chromium whitepaper on SPDY.

MaxCDN, one of our favorite CDN providers, has already added SPDY as an option. JDM Digital is performing our own lab tests on SPDY. More on that to come.

Background: web protocols and web latency

Today, HTTP and TCP are the protocols of the web. TCP is the generic, reliable transport protocol, providing guaranteed delivery, duplicate suppression, in-order delivery, flow control, congestion avoidance and other transport features. HTTP is the application-level protocol providing basic request/response semantics. While there may be opportunities to improve latency at the transport layer, Google’s initial investigations have focused on the application layer, HTTP.

Unfortunately, HTTP was not designed with low latency in mind. Furthermore, the web pages transmitted today are significantly different from web pages 10 years ago and demand improvements to HTTP that could not have been anticipated when HTTP was developed.

The following are some of the features of HTTP that inhibit optimal performance:

Single request per connection.

Because HTTP can only fetch one resource at a time (HTTP pipelining helps, but still enforces only a FIFO queue), a server delay of 500 ms prevents reuse of the TCP channel for additional requests. Browsers work around this problem by using multiple connections. Since 2008, most browsers have finally moved from 2 connections per domain to 6.
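A toy simulation makes the head-of-line blocking problem concrete. The resource timings below are hypothetical (they match the 500 ms server delay mentioned above), but the arithmetic shows why browsers resort to parallel connections:

```python
# Hypothetical response times (seconds) for six resources on one page;
# the first suffers a 500 ms server delay, the rest are fast.
times = [0.5, 0.01, 0.01, 0.01, 0.01, 0.01]

# One pipelined HTTP connection: responses come back FIFO, so every
# resource finishes no earlier than the slow one queued ahead of it.
finish_pipelined = []
elapsed = 0.0
for t in times:
    elapsed += t
    finish_pipelined.append(elapsed)

# Six parallel connections, one request each: responses are independent.
finish_parallel = list(times)

print(f"pipelined, last resource done at {max(finish_pipelined):.2f}s")  # 0.55s
print(f"parallel,  last resource done at {max(finish_parallel):.2f}s")   # 0.50s
```

With pipelining, the one slow response delays everything behind it; with parallel connections only the slow resource itself is slow. SPDY’s multiplexing aims to get the second behavior over a single TCP connection.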

Exclusively client-initiated requests.

In HTTP, only the client can initiate a request. Even if the server knows the client needs a resource, it has no mechanism to inform the client and must instead wait to receive a request for the resource from the client.

Uncompressed request and response headers.

Request headers today vary in size from ~200 bytes to over 2KB. As applications use more cookies and user agents expand features, typical header sizes of 700–800 bytes are common. For modem or ADSL connections, in which the uplink bandwidth is fairly low, the resulting serialization latency can be significant. Reducing the data in headers would directly reduce the time it takes to send requests.
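A quick back-of-the-envelope calculation shows why this matters on a slow uplink. The figures below are assumptions for illustration: 800-byte headers and a 128 kbit/s ADSL uplink.

```python
# Serialization latency: time to push the request headers onto the wire.
header_bytes = 800            # typical header size cited above
uplink_bits_per_sec = 128_000  # assumed ADSL uplink (128 kbit/s)

latency_ms = header_bytes * 8 / uplink_bits_per_sec * 1000
print(f"{latency_ms:.0f} ms per request just to send headers")  # 50 ms
```

At 50 ms per request before the server has seen anything, a page with dozens of resources spends a meaningful fraction of its load time on headers alone.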

Redundant headers.

Several headers are sent repeatedly across requests on the same channel. Headers such as User-Agent, Host, and the Accept* family are generally static and do not need to be resent.
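The waste is easy to estimate. The header values and per-page request count below are illustrative, not measured:

```python
# Bytes resent needlessly when static headers repeat on every request
# over the same connection. Values are illustrative examples.
static_headers = {
    "Host": "www.example.com",
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36",
    "Accept": "text/html,application/xhtml+xml,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.5",
    "Accept-Encoding": "gzip, deflate",
}

# Wire size of these headers as "Name: value\r\n" lines.
bytes_per_request = sum(len(f"{k}: {v}\r\n") for k, v in static_headers.items())
requests_per_page = 40  # assumed resource count for one page load

redundant = bytes_per_request * (requests_per_page - 1)
print(f"{redundant} bytes of repeated static headers per page load")
```

Sending these headers once per connection, as SPDY’s header compression effectively does, eliminates nearly all of that repetition.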

Optional data compression.

HTTP makes compression of response data optional. Content should always be sent in a compressed format.
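To see what optional compression leaves on the table, here is a small sketch using gzip, the most common HTTP content encoding. The HTML payload is a made-up repetitive snippet; real pages vary, but markup generally compresses well:

```python
import gzip

# A deliberately repetitive HTML payload, standing in for real markup.
html = ("<div class='item'><a href='/page'>link</a></div>\n" * 200).encode()

compressed = gzip.compress(html)
ratio = len(compressed) / len(html)
print(f"{len(html)} bytes -> {len(compressed)} bytes ({ratio:.1%} of original)")
```

Any response served without Content-Encoding forfeits savings like these, which is why SPDY mandates compression rather than leaving it to negotiation.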

Speed Matters

Research has shown page loading speed matters, not just for user experience, but also for SEO. We’re going to do a little of our own research on SPDY, but the data so far looks very promising, particularly when combined with more traditional tactics for increasing page loading speed.

