Web 2.0 - is the Canary Dying?

My friend Ben says that AJAX is the canary in the coal mine of Web 2.0.

Could poor AJAX application design be the carbon monoxide that lulls it into a permanent sleep?

We've recently been hearing complaints like "Your {web,app,proxy,security}-server is breaking my AJAX application!" It appears that multi-threaded AJAX clients are getting confused when their various HTTP requester threads receive unexpected HTTP responses.

The key is "unexpected HTTP responses."

We've seen no problem with AJAX or other JavaScript-based clients that make sure their HTTP requests and response handling follow the HTTP spec. AJAX client failures related to HTTP can generally be traced to a failure to process HTTP responses properly. The most common problem with AJAX clients is that their writers make assumptions about the kind of HTTP responses they expect, rather than following the spec and handling whatever could legally arrive as the response.
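As a hypothetical illustration of that assumption (the function names and the xhr-like object are invented here, standing in for XMLHttpRequest), compare a handler that treats every response body as the JSON it expected against one that checks what actually arrived first:

```javascript
// Fragile: "I asked for data, so data came back." A perfectly legal
// redirect or an HTML login page makes JSON.parse throw.
function assumesSuccess(xhr) {
  return JSON.parse(xhr.responseText);
}

// Spec-aware: inspect the status before interpreting the body.
function followsSpec(xhr) {
  if (xhr.status !== 200) {
    return null; // hand off to logic for 3xx/4xx/5xx instead of parsing blindly
  }
  return JSON.parse(xhr.responseText);
}
```

The fragile version works fine in testing against a friendly server, then dies in production the first time a proxy or security layer answers with something other than the expected payload.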

Then there is the implication that the aforementioned {web,app,proxy,security}-server knows that it's receiving requests from a single AJAX client.

Imagine 50 browsers with users simultaneously performing an HTTP GET on a server-protected URL. Then imagine a single AJAX client doing the same thing. There is nothing in the HTTP protocol that enables the server to distinguish the browser requests from those of the AJAX client, or correlate the AJAX client's requests. No HTTP user agent - simple browser or AJAX client - can make assumptions about the nature of any other's HTTP requests. In the case of requests coming from multiple browsers, this of course seems obvious; perhaps less so with multiple, asynchronous requests from a single AJAX client.

Cross-Request Correlation?

HTTP state management (cookies) or other state-management techniques (like SSLID) must be initialized in some way by the client. In a security environment, this initialization generally reflects an authentication event. If the authentication and state mechanism results in a portable artifact like an HTTP cookie, there's nothing to stop the user agent (browser or AJAX client) from sharing it with other requesters - especially if it's an AJAX client spawning multiple requests. But the important point is that the application must manage and synchronize "authentication state," including any state artifacts, across its threads' requests.

If HTTP requests are to be coordinated in an application, two things must happen. First, each and every HTTP request should be prepared to handle any legal HTTP response specified in RFC 2616. Second, the application must assume only what it itself controls. If it sends multiple requests, it may use cookies or some other construct to "maintain state" between serialized requests, or may even "share cookies" across parallel/simultaneous requesters to do so across its separate threads. But it's the application that must do so. Assuming, without explicit coding, that all threads automatically share any other thread's state is a classic parallel-processing design error.
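That explicit sharing can be sketched roughly as follows. Everything here is invented for illustration (there is no standard `createSessionState`); the point is that one shared object owns the authentication artifact, and every requester asks it for the cookie rather than assuming it inherits another thread's state:

```javascript
// A single shared holder for the session artifact (e.g. the cookie value
// returned by the server after a successful login).
function createSessionState() {
  let cookie = null;
  return {
    set: (c) => { cookie = c; },                          // recorded once, at authentication
    headerFor: () => (cookie ? { Cookie: cookie } : {})   // every requester draws from here
  };
}

// All parallel requests build their Cookie header from the one shared object.
const session = createSessionState();
session.set("JSESSIONID=abc123");                // hypothetical value from Set-Cookie
const headers = session.headerFor();             // → { Cookie: "JSESSIONID=abc123" }
```

Whether the artifact is a cookie, an SSL session, or something else, the design choice is the same: state synchronization is the application's job, done in code, not an ambient property of "being the same client."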

A well-behaved AJAX client would ensure that each thread capable of sending an HTTP request could handle both the full array of legal HTTP responses (200 OK, 302 Found, 401 Unauthorized, 500 Internal Server Error, authentication challenges, etc.) and effectively manage any parallel connection(s) it spawned.
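Concretely, each requester could route every response through one dispatcher before touching the body. This is a minimal sketch, not from any particular library; the action names are invented, and a fake xhr-like object (`{status, responseText}`) stands in for XMLHttpRequest:

```javascript
// Classify any legal HTTP response before interpreting its body.
function classifyResponse(xhr) {
  if (xhr.status === 200) {
    return { action: "render", data: JSON.parse(xhr.responseText) }; // the expected case
  }
  if (xhr.status === 401) {
    return { action: "authenticate" };            // credentials challenge
  }
  if (xhr.status >= 300 && xhr.status < 400) {
    return { action: "redirect" };                // e.g. 302 Found
  }
  return { action: "error", status: xhr.status }; // 5xx and anything else legal but unexpected
}
```

A client built this way never sees an "unexpected HTTP response" - a 401 from a security server or a 302 from a proxy is just another branch, not a crash.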

The fundamental challenge we face is that many AJAX applications are poorly executed and fail to fulfill the HTTP contract: they cannot handle arbitrary but legal HTTP responses. They don't know what to do with login pages, redirects, displacement pages, etc. A server that could detect and anticipate an ill-behaved HTTP requester, and modify its behavior accordingly, would be omniscient indeed.

AJAX is great technology, but magic it ain't.