11.07.2007

Web 2.0 - is the Canary Dying?

My friend Ben says that AJAX is the canary in the coal mine of Web 2.0.

Could poor AJAX application design be the carbon monoxide that lulls it into a permanent sleep?

We've recently been hearing complaints like "Your {web,app,proxy,security}-server is breaking my AJAX application!" It appears that multi-threaded AJAX clients are getting confused when their various HTTP requester threads receive unexpected HTTP responses.

The key is "unexpected HTTP responses."

We've seen no problem with AJAX or other JavaScript-based clients that make sure their HTTP requests and response handling follow the HTTP spec. AJAX client failures related to HTTP can generally be traced to a failure to properly process HTTP responses. The most common problem with AJAX clients is that their writers make assumptions about the kind of HTTP responses they expect, rather than following the spec and handling whatever could legally arrive as the response.
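
To make that concrete: AJAX clients live in browser JavaScript, but the HTTP principle is language-neutral, so here is a minimal sketch of "handle whatever could legally arrive" using Java's standard HttpClient. The URL is illustrative, and a real client would obviously do more than print.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RobustGet {
        // Handle redirects ourselves so the 3xx branch below is actually exercised.
        private static final HttpClient CLIENT = HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NEVER)
                .build();

        static void fetch(String url) throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            HttpResponse<String> response =
                    CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            int status = response.statusCode();

            if (status >= 200 && status < 300) {
                // Success: hand the body to the application.
                System.out.println("OK, " + response.body().length() + " bytes");
            } else if (status >= 300 && status < 400) {
                // Redirect: the Location header says where the resource went.
                response.headers().firstValue("Location")
                        .ifPresent(loc -> System.out.println("Redirected to " + loc));
            } else if (status == 401 || status == 407) {
                // Authentication challenge: inspect WWW-Authenticate / Proxy-Authenticate.
                System.out.println("Challenged: " + response.headers()
                        .firstValue("WWW-Authenticate").orElse("(no header)"));
            } else if (status >= 400 && status < 500) {
                System.out.println("Client error " + status);
            } else {
                // 5xx and anything else legal but unexpected: report, don't assume.
                System.out.println("Server error or unexpected status " + status);
            }
        }

        public static void main(String[] args) throws Exception {
            fetch("https://www.example.com/");   // illustrative target
        }
    }

The point isn't the Java; it's that every branch above is a legal response to a GET, and a client that only codes for the first branch hasn't fulfilled its half of the HTTP contract.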

Then there is the implication that the aforementioned {web,app,proxy,security}-server knows that it's receiving requests from a single AJAX client.

Imagine 50 browsers with users simultaneously performing an HTTP GET on a server-protected URL. Then imagine a single AJAX client doing the same thing. There is nothing in the HTTP protocol that enables the server to distinguish the browser requests from those of the AJAX client, or to correlate the AJAX client's requests. No HTTP user agent - simple browser or AJAX client - can make assumptions about the nature of any other's HTTP requests. In the case of requests coming from multiple browsers this of course seems obvious; perhaps less so with multiple, asynchronous requests from a single AJAX client.

Cross-Request Correlation?

HTTP State Management (Cookies) or other state management techniques (like SSLID) must be initialized in some way by the client. In a security environment, this initialization generally reflects an authentication event. If the authentication and state mechanism results in a portable artifact like an HTTP cookie, there's nothing to stop the user agent (browser or AJAX client) from sharing it with other requesters - especially if it's an AJAX client spawning multiple requests. But the important point is that the application must manage and synchronize "authentication state," including any state artifacts, across its threads' requests.

If HTTP requests are to be coordinated in an application, two things must happen. First, each and every HTTP requester should be prepared to handle any legal HTTP response specified in RFC 2616. Second, the application must ensure that it assumes only what it itself controls. If the application sends multiple requests, it may use cookies or some other construct to "maintain state" between serialized requests, or may even "share cookies" across parallel/simultaneous requesters to do so across its separate threads. But it's the application that must do so. To assume without explicit coding that all threads automatically share any other thread's state is a classic parallel-processing design error.
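
Here is what "it's the application that must do so" might look like, again sketched with Java's HttpClient rather than browser JavaScript (the URLs are made up): one cookie store, deliberately handed to every parallel requester, rather than an assumption that state is shared for free.

    import java.net.CookieManager;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.stream.Collectors;

    public class SharedStateRequester {
        public static void main(String[] args) {
            // One cookie store, explicitly shared by every requester.
            // Nothing synchronizes this state automatically; the application chooses to.
            CookieManager sharedCookies = new CookieManager();
            HttpClient client = HttpClient.newBuilder()
                    .cookieHandler(sharedCookies)
                    .build();

            List<String> urls = List.of(                    // illustrative URLs
                    "https://app.example.com/fragment/1",
                    "https://app.example.com/fragment/2",
                    "https://app.example.com/fragment/3");

            // Parallel, asynchronous requests that all present the same session state.
            List<CompletableFuture<Void>> inFlight = urls.stream()
                    .map(u -> client.sendAsync(
                                    HttpRequest.newBuilder(URI.create(u)).GET().build(),
                                    HttpResponse.BodyHandlers.ofString())
                            .thenAccept(r -> System.out.println(u + " -> " + r.statusCode())))
                    .collect(Collectors.toList());

            inFlight.forEach(CompletableFuture::join);
        }
    }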

A well-behaved AJAX client would assure that each thread capable of sending an HTTP request could both handle the full array of legal HTTP responses (200 OK, 302 Found, 401 Unauthorized, 500 Internal Server Error, authentication challenges, etc.) and effectively manage any parallel connection(s) it spawned.

The fundamental challenge we face is that many AJAX applications are poorly executed and fail to fulfill the HTTP contract by not being able to handle arbitrary but legal HTTP responses. They don't know what to do with login pages, redirects, displacement pages, etc. The server that could detect and anticipate an ill-behaved HTTP requester and modify its behavior accordingly would be omniscient indeed.

AJAX is great technology, but magic it ain't.

10.17.2007

of Hubris and Humility

It is said that hubris is not the act of thinking too highly of oneself, but rather not thinking highly enough of others.

It is the latter, not the former, that reveals the inner workings of our heart, the orientation of our conduct, and our true attitude towards ourselves. One who has high regard for others need not deprecate one's own abilities, gifts, talents, or choices. Indeed, a healthy and accurate view of oneself, of both one's strengths and weaknesses, reveals itself in the strength to value, even honor, the gifts and abilities of others. The failure to do so is not only a failure to recognize the value represented by the achievement of others; it also reveals a depth of insecurity that underscores a truly self-centered attitude and the behavior by which it is marked.

Jesus walked in humility, and that humility was not bred in insecurity, but rather in the absolute knowledge that though "He existed in the form of God, He did not regard equality with God a thing to be grasped, but emptied Himself, taking the form of a bond-servant, and being made in the likeness of men." The man who drove the currency exchange out of the temple with a hastily assembled whip was no wimp. Neither was the man who washed His own disciples' feet. It was precisely because He was secure in the identity given Him by His Father that He could do so, and teach those who would follow Him the way by which they might share both His attitude and His service.

~r

ref: Philippians 2:6-7, Isaiah 60:1-2a

10.16.2007

Tilting at Windmills 1: Identity Management v Access Control Management

I said I'd say why I use the term access control rather than the term identity management. I'm tilting at windmills, I know - but I don't think we manage identities - though it's a cool marketing idea. In fact we manage symbols that we bind to notions of real persons like "Alice" or "Bob," and that we bind to non-real but useful notions such as "root" or "Administrator." We manage identifiers, which may be defined as the attributes of entities, real and non-real, having representations in some system. A login identifier or username is no more an identity than is my driver's license. The latter is in fact a certificate that contains identifying attributes about me; it is neither me nor my identity.

Access control management is the name that has historically (until about 2001) encompassed the management of subjects or users - their names and symbolic references; objects, resources, or targets - the entities on which subjects may act; actions - what a subject may execute upon an object; and conditions or context - factors unrelated to the attributes of subjects or objects that inform an access control decision. Separating user administration from access management and calling it Identity Management has had debatable success from a number of perspectives.
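
For the terminology-minded, the four parts above fit in a few lines of code. This is a deliberately naive sketch - every name in it is hypothetical, and it represents no product - but it shows subject, object, action, and condition as the inputs to one access control decision:

    import java.time.LocalTime;
    import java.util.Map;
    import java.util.Set;
    import java.util.function.Predicate;

    // Illustrative only: the classic inputs to an access control decision.
    record AccessRequest(String subject, String action, String object) {}

    class NaivePolicy {
        // subject -> the "action:object" pairs that subject may perform
        private final Map<String, Set<String>> grants;
        // condition: context unrelated to subject or object (time of day here)
        private final Predicate<AccessRequest> condition;

        NaivePolicy(Map<String, Set<String>> grants, Predicate<AccessRequest> condition) {
            this.grants = grants;
            this.condition = condition;
        }

        boolean isPermitted(AccessRequest req) {
            return grants.getOrDefault(req.subject(), Set.of())
                         .contains(req.action() + ":" + req.object())
                    && condition.test(req);
        }

        public static void main(String[] args) {
            NaivePolicy policy = new NaivePolicy(
                    Map.of("alice", Set.of("read:payrollReport")),
                    r -> LocalTime.now().getHour() >= 8);   // example condition
            System.out.println(policy.isPermitted(
                    new AccessRequest("alice", "read", "payrollReport"))); // true during business hours
            System.out.println(policy.isPermitted(
                    new AccessRequest("bob", "read", "payrollReport")));   // false: no grant
        }
    }

Nothing about renaming the first column of that map changes the decision function - which is rather the point.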

Renaming something may enable us to look at it from different perspectives and learn new things about it. The whole introduction of life-cycle management into user administration has been a demonstrable benefit of identity management. But this wasn't a consequence of renaming it; it was a consequence of trying to address user administration at enterprise scale. I think the confusion caused by the divorce between access control management and user administration - I'm sorry, Identity Management - may have provided short-term marketing benefits, but in the long run it has damaged the application of well-founded principles.

While marketing departments and techno-strategists will vociferously defend their newly named product or try to explain why identity management is not user administration, or for that matter why SOA isn't client/server, or an attribute service isn't a virtual or meta-directory - call me a fundamentalist (no fun, and slightly mental) - but I ain't buyin' it.

~r

10.15.2007

Identity Mashups and Gravy . . .

Identity Mashup refers to the problem of properly rendering distributed information controlled by separate entities. "Identity" refers only to the type of information, and it matters as a distinction only insofar as it relates to the access control model (user-centric policy control), not to the information itself (user attributes).

For those inclined towards reductionism, the entire Identity Mashup/User-Centric Identity/Privacy/Cardspace/Higgins/SXIP constellation simply concerns access control and its management. That suggests to me, at least, that we can employ a fundamental understanding of access control theory to the problem. (Why access control and not Identity Management? See the following post.)

Traditional access control is implemented with a centralized security policy model such that access control policy is centrally administered across the resources protected. User-centric access control turns this model on its head - giving the users or account holders the ability to set access control policy on attributes associated with themselves.
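
A toy sketch of that inversion (all names hypothetical; this is the shape of the idea, not any of the products discussed in this post): the data subject, not a central administrator, writes the policy governing her own attributes, and the attribute service merely enforces it.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Illustrative sketch: in a user-centric model the user, not a central
    // administrator, sets the release policy over her own attributes.
    class UserCentricPolicy {
        // attribute name -> relying parties the user has approved to read it
        private final Map<String, Set<String>> releases = new HashMap<>();

        // Invoked by the user (the data subject), not by a system administrator.
        void allow(String attribute, String relyingParty) {
            releases.computeIfAbsent(attribute, k -> new HashSet<>()).add(relyingParty);
        }

        // Invoked by the attribute service before releasing anything.
        boolean mayRelease(String attribute, String relyingParty) {
            return releases.getOrDefault(attribute, Set.of()).contains(relyingParty);
        }

        public static void main(String[] args) {
            UserCentricPolicy alice = new UserCentricPolicy();
            alice.allow("emailAddress", "shop.example.com");   // user-set policy
            System.out.println(alice.mayRelease("emailAddress", "shop.example.com")); // true
            System.out.println(alice.mayRelease("dateOfBirth", "shop.example.com"));  // false
        }
    }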

User-centric access control is related to the privacy problem. It's about a non-system owner (a user) controlling access control policy on data elements specifically delegated to it by the system owner (the service). But in the end it's all about the enforcement of a policy that says who can get to what, when. To the essential technical problem, it is not important that the "what" happens to be attributes about a particular person.

There are at least two differences between traditional access control systems and those that are user-centric. The first concerns who controls the governing policy - not the mechanism by which access control is effected. The second is akin to the digital rights management (DRM) problem - how to control access to, and use of, remote information.

Cardspace, Higgins, and OpenID seek to address the first concern, providing the user with mechanisms to specify the information (user attributes) to which the user is willing to grant access. The second is problematic if the system on which the information resides is not directly within the user's control - and by definition it is not. The problem is exponentially more complicated if the information is distributed across multiple systems, which it likely is.

The system of which the Mashup is a part is concerned with accessing potentially distributed sources of user data, and doing so in a manner that allows user-specified or otherwise distributed policy to be reliably enforced. This is true whether the implementation is an Eclipse Project Higgins Identifier Agent, Microsoft(R) CardSpace, Higgins' Identifier Attribute Service (IdAS), Facebook, MySpace, or LinkedIn. It's also true whether the content of the mashup is rendered in the browser via AJAX or on LinkedIn as an aggregation of my connections' profiles.

Bob Blakley in his recent Burton blog reiterates his long-held position that privacy (user-centric policy management) problems are at their core legal, social, and economic problems. I contend legal, social, or economic solutions are the default when the technical problem is intractable, its solution elusive or incomplete. Even if the enforcement challenge of Identity Mashups can be solved, the DRM aspects of user-centric policy management suggest that technical solutions for Identity 2.0 may suffer a fate similar to those for DRM. Certainly legal recourse is the soup du jour for those whose technologies are insufficient to the access control challenge. Can you spell RIAA?

9.11.2007

OpenID - New Threat or Misdirection?

My friend Ivan aimed me here. Suffice it to say this post (and its myriad contributors) has illuminated OpenID's risks far better than I could.

But I have to ask (this one's for you, Ivan): why ALL the incredible attention to OpenID, when the largest bank in the US continues to foist upon its customers commercial security snake oil that remains susceptible to man-in-the-middle (MITM) attacks? It has now compounded its sins by adding "mobile authentication" as yet another mechanism by which you, the hapless victim, are "assured" your transaction is safe despite being routed through a man-in-the-middle.

The security principle violated in this "security technology" is the registration principle: authentication requires prior registration of the user and credentials in a channel distinct from the authentication channel. Registering a "new" computer in the same channel in which authentication occurs enables MITM - as this video demonstrates.
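
To sketch the principle itself - and this is a schematic illustration, not a description of any bank's actual system - confirmation of a new device should travel over a channel the web session never touches: a one-time code delivered by mail or by a call to a number already on file, rather than a challenge question answered inside the very session an attacker may be relaying.

    import java.security.SecureRandom;
    import java.util.HashMap;
    import java.util.Map;

    // Schematic sketch of the registration principle: a new device is confirmed
    // with a one-time code delivered over a channel distinct from the
    // authentication channel (postal mail, a call to a number already on file).
    class DeviceRegistration {
        private final SecureRandom random = new SecureRandom();
        private final Map<String, String> pendingCodes = new HashMap<>(); // user -> code

        // Step 1: generate the code and hand it to an out-of-band delivery
        // mechanism -- it is never displayed inside the web session itself.
        String beginRegistration(String user) {
            String code = String.format("%06d", random.nextInt(1_000_000));
            pendingCodes.put(user, code);
            return code; // passed to the out-of-band channel, not to the browser
        }

        // Step 2: the user supplies the code received out of band to complete
        // the registration of the new device.
        boolean completeRegistration(String user, String codeFromUser) {
            return codeFromUser.equals(pendingCodes.remove(user));
        }
    }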

In "fairness," what do you, the user have to do wrong in order for this vulnerability to be exploited.
  1. You don't validate the SSL session (and associated certificate) of the site to which you're routed. If you click through the SSL warning (certificate not recognized or doesn't match the URL), you've done wrong. But if you type https://www.bofa.com instead of https://www.bankofamerica.com, your browser will tell you the certificate might be bogus - and you'll "click through" anyway because it looks like BofA. Doesn't it?
  2. You enter your username but the site says it doesn't recognize your computer and asks you the name of your favorite poodle (or whatever challenge phrase you've established). You logged in with this computer yesterday, but what the heck - maybe BofA is confused. You respond "Fifi." You've done wrong. It didn't recognize you because you were communicating with a "zombie," which in turn was communicating with your bank as if it were you. You've just registered the "zombie" as an authorized device for your account and didn't even know it. Of course "adaptive security" will limit the number of accounts accepted from one zombie. But how many zombies are needed to clean out your account? (Answer: 1)
  3. After "re-registering" your computer, which might legitimately occur if you've cleaned your cookie cache for example, your bank shows you the picture of the emperor penguin you previously selected as your authenticating penguin. You believe this "proves" you're talking to your bank. You've done wrong. The lawyers will claim it DOES prove you're talking to your bank. The problem is that it DOESN'T PROVE you're not talking through a THIRD PARTY who has just acquired enough data to electronically transfer money out of your account to the max limit each day until either you notice strange withdrawals or you account is over drafted when your mortgage check hits.
How do you do right?
  1. Validate certificates in an SSL session. Check that your browser is using an SSL URL like https://... NOT http://... and that the certificate is recognized by the browser. (A small programmatic sketch of what that validation means follows this list.)
  2. Learn how SSL works. Talk to your friends about it. Discuss it at dinner with your kids. Bring it up at your church group. Don't expect others to know what you do not. There's nothing wrong with not knowing how to lock your door. But going on vacation and leaving your valuables inside without this knowledge is just foolish. Ask until someone can answer in a way you can understand.
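
For the programmers in the audience, here is roughly what item 1 means in code (the host is illustrative): Java's HttpsURLConnection performs certificate-chain and hostname validation by default, and the code-level equivalent of "clicking through" the warning is installing an accept-everything TrustManager or HostnameVerifier - which is exactly what you should not do.

    import java.net.URL;
    import javax.net.ssl.HttpsURLConnection;

    // Sketch: rely on the platform's default certificate and hostname checks.
    // If the chain doesn't validate or the name doesn't match, connect() throws,
    // which is the programmatic version of NOT clicking through the warning.
    public class CheckTls {
        public static void main(String[] args) throws Exception {
            HttpsURLConnection conn = (HttpsURLConnection)
                    new URL("https://www.example.com/").openConnection();
            conn.connect(); // SSLHandshakeException here means "do not proceed"

            // After a successful handshake, look at who we actually connected to.
            System.out.println("Cipher suite:     " + conn.getCipherSuite());
            System.out.println("Server principal: " + conn.getPeerPrincipal());

            conn.disconnect();
        }
    }
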
SSL is neither convenient, simple, nor consumable. But who ever said holding on to money was?

SSL can provide real security. That other stuff is a digital platitude of the deceptive kind.

Cheers!

8.06.2007

Protection Racket or Libel Suit?

CNet News.com has reported that
... as part of VDA's business model, vendors are asked to pay for the bugs it discovers, or its consulting services, otherwise VDA threatens to sell the bug to a third party or make the details of the security flaw public.
Is VDA's founder Jared DeMott just another racketeer? Or is there a libel suit on the winds?

Findlaw says that most states define extortion as "the gaining of property or money by almost any kind of force, or threat of 1) violence, 2) property damage, 3) harm to reputation, or 4) unfavorable government action."

"Pay up or else!" seems to be what CNet is reporting about VDA. But then again, I'm no legal scholar.

Apparently neither is Mr. DeMott.

Don't Miss!

A couple of "don't miss" articles might be useful.

Jeff Crume's recent article on the myths and reality of Directories is a positive discussion of a topic that has been a source of considerable teeth gnashing if not outright nonsense.

Infoweek claims hacking attempts are up 81% this year, riding on the backs of man-in-the-middle attack kits reportedly being sold at various hacker sites.

MITM-based phishing continues to be not only theoretically possible, but a straightforward exercise for anyone conversant in HTTP-based technology. My new friends at Indiana University illustrate an alternate view of the problem I discussed in a previous post. They also have a nice repository of papers if you're interested in a more academic treatment of phishing.

Cheers!

~r

7.20.2007

Instance-Level Access Control

In the world of Twitter, OpenID, and Web 2.0, you might expect to find this in an archive from 2004 - but here it is. If you don't want to read all of it, the bottom line is:
  1. The failure to adequately define and specify terms renders the development of security software an unknown and thereby risky venture,
  2. Instance Level Access Control is typically used to mean access control that is more specific than what you have today, and
  3. Application developers and users don't care about fuzzy terms if the software they're using does something useful and/or is fun.
The solution:
  1. Define all terms, especially if they pertain to security, risk, privacy, or compliance; and the applications that feature them.
  2. Do #1 even if you think "everyone" knows what you mean. They probably don't.
  3. Instance Level Access Control generally means understanding a protected resource at the level at which it's expressed by an application. If so, it reflects the level at which the semantics of access control can be captured and enforced.
Instance-Level Access Control?

I contend that in application architecture, and for a host of non-security-specific technologies, a certain degree of fuzziness of terms is not harmful and may even aid use and adoption. A user discovers capability despite - or perhaps in spite of - what it is called, and just appreciates the fact that it does something useful for them.

Whereas this fuzziness may be a great strategy for the conceptualization, design, and development of certain types of application, it has deleterious effects on the realization of security capability, generally inhibiting the goal of risk mitigation. This is simply because risk is proportional to uncertainty.

Consider the Federal Financial Institutions Examination Council (FFIEC) and its regulations around an institution's "Customer Identification Program" (CIP). While the intent is clearly to have in place mechanisms by which an institution can reasonably identify its customers (no complaint there), various vendors have invented new terms like "2nd Factor" or "Multi-Factor" authentication.

Some of the new "2nd factor" and "multi-factor" authentication products fulfill the regulation while providing additional security capabilities not required by the FFIEC. One of these is a popular vendor's version of mutual authentication, which is a potential new threat vector and is reported to be exploitable by phishers. While perhaps a good subject for a subsequent post, this particular technology is only an example. My point is that the arbitrary use of previously precise terms leads to fuzzy product targets and the misunderstanding of their characteristics. This results in a myriad of negative consequences for both product designers and their consumers.

So what of Instance Level Access Control? The bottom line is that I don't know what people mean when they tell me they need "instance level access control" (if you do, please comment!), because each person/organization/document means something different by it. A security document that fails to adequately define its terms, or assumes that it shares a common understanding with the reader, ends in predictable, if unfortunate, results.

At the risk of being just one more voice in the cacophony (don't you talk about instance level authorization at your dinner table?) let me go back to a time when J2EE security started to address its access control requirements with a model that included users and their identities in access control decisions, rather than just the system's code as system entities.

The original model provided for access control at the level of Java classes and methods on those classes. The result was that you could determine unambiguously (most of the time) whether or not a particular combination of user, class, and method was implied by a particular request. The "instance level" problem was (and still is) that there was (is) no way to distinguish an object named "Ford Motor Company," an instance of the class "AutoManufacturer," from an object named "Bayerische Motoren Werke," also of the class "AutoManufacturer." This meant that there was (is) no way in the EJB container to enforce policy whereby one set of users could work only with Ford (objects), and the other set only with BMW (objects).
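
To illustrate the gap in plain Java (this is not the EJB or JACC API, just a sketch of the decision's vocabulary): a policy keyed only on class and method has no way to say which AutoManufacturer, so both instances look identical to it.

    import java.util.Map;
    import java.util.Set;

    // Plain-Java illustration, not the EJB/JACC API: a policy keyed only on
    // class + method cannot tell one AutoManufacturer instance from another.
    class AutoManufacturer {
        final String name;
        AutoManufacturer(String name) { this.name = name; }
        void updatePlans() { /* business logic */ }
    }

    class ClassLevelPolicy {
        // user -> permitted "ClassName#method" pairs; no instance vocabulary at all
        private static final Map<String, Set<String>> GRANTS = Map.of(
                "fordAnalyst", Set.of("AutoManufacturer#updatePlans"),
                "bmwAnalyst",  Set.of("AutoManufacturer#updatePlans"));

        static boolean isPermitted(String user, Object target, String method) {
            String key = target.getClass().getSimpleName() + "#" + method;
            return GRANTS.getOrDefault(user, Set.of()).contains(key);
        }

        public static void main(String[] args) {
            AutoManufacturer ford = new AutoManufacturer("Ford Motor Company");
            AutoManufacturer bmw  = new AutoManufacturer("Bayerische Motoren Werke");
            // Both checks succeed: the policy cannot express "fordAnalyst may act
            // only on the Ford instance" -- which is exactly the instance-level gap.
            System.out.println(isPermitted("fordAnalyst", bmw,  "updatePlans")); // true
            System.out.println(isPermitted("bmwAnalyst",  ford, "updatePlans")); // true
        }
    }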

The Java Authorization Contract for Containers (JACC, JSR 115) promised to correct all this. What it gave us was the previous J2EE model (must maintain backward compatibility, of course!) - and what my friend refers to as "a hole in the container through which to push stuff." This "hole in the container" is expressed in a "context handler," which is code you or your staff write to tell the container how to perform access control on particular classes. This "hole" provides a mechanism to distinguish one object from another by discriminating on instance data in the class - a field called "Name," for example. Thereby you can write authorization policy that differentiates between two different objects (distinct class instances) of the same class.
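
And here, still as a conceptual sketch rather than the actual javax.security.jacc contract (it reuses the little AutoManufacturer class from the previous sketch), is what that "hole" buys you: an application-supplied handler surfaces instance data - the "Name" field - so the decision can finally tell Ford from BMW.

    import java.util.Map;
    import java.util.Set;
    import java.util.function.Function;

    // Conceptual sketch only -- hypothetical interfaces, not javax.security.jacc.
    // A "context handler" exposes instance data so the policy decision can
    // discriminate between objects of the same class.
    class InstanceAwarePolicy {
        // Application-supplied: how to pull the discriminating attribute
        // (the "Name" field) out of the protected object.
        private final Function<Object, String> contextHandler;
        // user -> the instance names that user may act upon
        private final Map<String, Set<String>> instanceGrants;

        InstanceAwarePolicy(Function<Object, String> contextHandler,
                            Map<String, Set<String>> instanceGrants) {
            this.contextHandler = contextHandler;
            this.instanceGrants = instanceGrants;
        }

        boolean isPermitted(String user, Object target) {
            String instanceName = contextHandler.apply(target);
            return instanceGrants.getOrDefault(user, Set.of()).contains(instanceName);
        }

        public static void main(String[] args) {
            InstanceAwarePolicy policy = new InstanceAwarePolicy(
                    o -> ((AutoManufacturer) o).name,   // the "hole": expose instance data
                    Map.of("fordAnalyst", Set.of("Ford Motor Company"),
                           "bmwAnalyst",  Set.of("Bayerische Motoren Werke")));

            AutoManufacturer ford = new AutoManufacturer("Ford Motor Company");
            AutoManufacturer bmw  = new AutoManufacturer("Bayerische Motoren Werke");
            System.out.println(policy.isPermitted("fordAnalyst", ford)); // true
            System.out.println(policy.isPermitted("fordAnalyst", bmw));  // false -- now distinguishable
        }
    }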

But note: there is nothing in JACC that gives you general Java object instance-level access control, unless you consider an EJB method's parameters to define an instance. My friend also likes to point out that container-based security - beyond the specification's mandatory container types (Web, EJB, etc.) - is about externalizing into the container-based mechanism the code you'd otherwise have to write inside your applications to enforce access control policy anyway.

One might envision a JACC-based context handler able to introspect all the public (and dare I say private?) fields in an object, and then, based on some metadata, divine what to do with it. But then I've thrown encapsulation - one of the qualities we tend to consider good in object-oriented design and development - out the window.

The example may seem strained - but it is perhaps better described as an example of overpromising and under-delivering. I contend the "original" instance-based problem was as I have characterized it: distinguishing two distinct instances (objects) of the same class at run time. JACC aimed at it and hasn't achieved it (yet). I think the example is relevant because instance-level or instance-based authorization was given as a primary objective of the specification. At best, instance-based access control in JACC is under-specified; and it certainly isn't delivered under the mandatory compliance requirements the specification describes.

This isn't a slam of J2EE security. It is intended, however, to distinguish between what is suggested by terms such as "delivers instance-based security" and what the technology actually does. It is a useful marketing term if you want to imply capability your customers can't yet achieve, but for which they'd buy your product if they could. My marketing friends tell me this is a cynical view. I suggest it may be cynical of them to say so.

If you've made it this far, you're either an obsessive geek or will read anything. Thanx.

~r

7.03.2007

of Legislators and Software Architects

I resisted as long as I could. I was invited today to participate with some of my more learned colleagues in writing a paper whose objective is 'to define a new set of IT requirements for more effective and efficient design and implementation of “Internal Control”...'

Why on earth should anyone want to define a new set of IT requirements, when I'd venture to guess most IT professionals would confess no lack of requirements, and indeed, would prefer some of the old ones closed?

In this matter software architects are becoming like legislators, or worse, yellow-journalists: If they don't have or don't like the requirements (or laws or news) which reflect a business driver (or policy implementation or news) for their activity, they will now employ their friends to create new and improved ones.

We don't need new requirements; we need to solve the hard problems that we've obscured with new technology. Remember "distributed computing?" It was to be solved with network services, then RPC, then DCE, then CORBA, and now SOA. Remember "access control?" ACLs vs. RBAC, MAC vs. DAC (OK, I'm mixing my metaphors) - let's not solve the problem, let's rename the solution. Access control to Access Management to Identity Management to Federated Identity Management to Identity Enabled Services to ...

Of course, who ever made a career by solving old problems? The lesson is: if you don't like the work you're doing, create some new work.

"Meet the New Boss, same as the Old Boss..."

~ciao~