The HTTP standard in its current version, 1.1, has been around since — will you believe it — June 1999, when it was detailed as an effective superset of HTTP 1.0 in RFC 2616. To be quite honest, it’s done pretty damn well, all things considered. Much of this is of course due to the extreme versatility of the protocol, while, on a more technical note, the lack of a successor is probably also partly driven by the fact that the protocol designers were actually quite foresighted. Features like standardised compression and caching semantics, which people to this day still struggle to realise they need to use, have provided application designers with very cheap performance wins, making HTTP an obvious choice.
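
To make that concrete, here’s a minimal sketch of those two features in action, using nothing but the Python standard library (example.com is just a placeholder host): the client advertises that it can handle compressed responses, and the server’s reply carries the headers that drive caching.

```python
# Minimal sketch: request gzip-compressed content and inspect the caching
# headers the server sends back. Standard library only; example.com is a
# placeholder host, so the exact headers you see will vary.
import http.client

conn = http.client.HTTPConnection("example.com")
conn.request("GET", "/", headers={"Accept-Encoding": "gzip"})
resp = conn.getresponse()

print(resp.status, resp.reason)
print("Content-Encoding:", resp.getheader("Content-Encoding"))  # e.g. gzip
print("Cache-Control:", resp.getheader("Cache-Control"))        # caching policy
print("ETag:", resp.getheader("ETag"))                          # cheap revalidation
conn.close()
```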

But, while the protocol has done pretty well, it was by no means designed for the kind of web applications we’re developing these days. Try opening your favorite social network in a browser and use whatever developer tool is available to see just how many resources are fetched on page load. It’s a lot. Randomly loading my Twitter feed required nothing short of 38 requests. Serving up your Facebook news feed requires even more, especially if you have a lot of those “look at pictures of the alien in my womb” kind of friends. And the trend is only towards more and larger resources per page load.
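
If you want an actual count rather than eyeballing the network panel, most developer tools can export a page load as a HAR file, which is easy to tally up. A rough sketch (the filename is hypothetical):

```python
# Count the requests recorded in a HAR file exported from the browser's
# developer tools. The filename is just an example.
import json
from collections import Counter
from urllib.parse import urlparse

with open("twitter-feed.har") as f:
    har = json.load(f)

entries = har["log"]["entries"]
hosts = Counter(urlparse(e["request"]["url"]).netloc for e in entries)

print(f"{len(entries)} requests on page load")
for host, count in hosts.most_common():
    print(f"  {count:3d}  {host}")
```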

Why is this a problem? Well, HTTP’s only feasible means of achieving concurrency at the moment is to increase the number of raw connections to the server, which carries considerable overhead in terms of both networking and server resources. Theoretically, this problem should have been solved by HTTP 1.1’s introduction of pipelining, which allows for some level of multiplexing. But because support for the feature is impossible to detect reliably, and the feature itself is often faultily implemented, it has never really been employed in the wild. And because opening 38 connections isn’t feasible either, browsers limit the number of concurrent connections per host to a handful, meaning that serving up the aforementioned Twitter feed requires something like 10 roundtrips across the connections. Inefficient? You bet your ass!
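
A quick back-of-the-envelope calculation shows where those roundtrips come from. Without pipelining, each connection carries one request per roundtrip, so every host needs roughly ceil(requests ÷ parallel connections) roundtrips before you even account for dependency chains. The per-host split below is made up for illustration; only the total of 38 matches the Twitter example above.

```python
# Back-of-the-envelope sketch: requests divided by the per-host connection
# limit gives the minimum number of roundtrips for that host. The per-host
# counts are illustrative, not measured.
import math

requests_per_host = {
    "twitter.com": 14,     # HTML, API calls
    "abs.twimg.com": 16,   # scripts, stylesheets
    "pbs.twimg.com": 8,    # avatars, images
}
connections_per_host = 6   # a typical browser limit

for host, n in requests_per_host.items():
    trips = math.ceil(n / connections_per_host)
    print(f"{host}: {n} requests over {connections_per_host} connections -> {trips} roundtrips")
```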

While the inefficiency of the status quo isn’t news to people who do this kind of stuff for a living, Google were the first to actually take action. Granted, having a massive server infrastructure and one of the most popular browsers made them an obvious first mover, but that’s a side note. The result? SPDY, an HTTP-esque binary protocol with (somewhat) forced transport layer encryption, gzip compression and proper request multiplexing. Add the fact that one of the most inefficient parts of the HTTP protocol itself — its text-based nature and the hacks it has brought with it — is entirely removed by the binary nature of the protocol, and we should be all set for the future, right?
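
The multiplexing part is easiest to see with a toy example. This is emphatically not the real SPDY wire format, just the underlying idea: once every chunk on the wire carries a stream id and a length prefix, responses to different requests can be interleaved freely on a single connection and reassembled at the other end.

```python
# Toy illustration of binary multiplexing (not the actual SPDY framing):
# every frame is [stream id: 4 bytes][length: 4 bytes][payload], so chunks
# belonging to different responses can share one connection.
import struct

def frame(stream_id: int, payload: bytes) -> bytes:
    return struct.pack("!II", stream_id, len(payload)) + payload

# Two responses interleaved on a single connection.
wire = frame(1, b"<html>...") + frame(3, b'{"json": true}') + frame(1, b"</html>")

offset = 0
while offset < len(wire):
    stream_id, length = struct.unpack_from("!II", wire, offset)
    payload = wire[offset + 8 : offset + 8 + length]
    print(f"stream {stream_id}: {payload!r}")
    offset += 8 + length
```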

Well, not quite. First of all, this is a standard created by virtually just one company, which means that further input would likely be beneficial before a final standard is established. Secondly, as Varnish author Poul-Henning Kamp points out, SPDY is in itself virtually nothing but repackaging. There are no functional improvements to HTTP offered by SPDY — the foresight shown in HTTP 1.0 and 1.1 is nowhere to be seen. Furthermore, SPDY is very heavily tied to specific security and compression implementations, which doesn’t bode well for incremental improvements.

What’s really needed then is a new major version of the HTTP protocol decided upon by the community and industry as a whole, not just one player. HTTP 2.0, as it would be, should have the technical advantages of SPDY mixed with the features needed by the modern web and the web to come — and the industry seems to agree.

We’ve waited a long time for this major version to finally surface. It was therefore with great anticipation that I sat down to read the first draft of the HTTP 2.0 standard a few days ago. But what met me? Well, I’ll be darned if that entire document isn’t just the SPDY protocol with the IETF’s name slapped on top of it! In other words, so far, the IETF has completely and totally disregarded any and all community input and instead gone the lazy route of adopting SPDY as a “good enough” starting point. Sure, they claim they will deliver improvements, but, please, can someone point me to a standard that has ever evolved radically beyond its initial draft?

To be honest, this direction makes HTTP 2.0 a completely uninteresting concept. If it’s just SPDY, why not simply stick with SPDY, which is already adopted by quite a few of the major browsers and even major social networks like Twitter? I think it’s about time the IETF starts proving that it’s not just a big bureaucratic organ governing stagnation, but that it’s actually in this to improve the web.