Privacy Advocacy, System Design, and Consumer Convenience

When I think about the properties of a system that requires personally identifiable information (PII), two immediately come to mind. First, as a relying party, I collect PII in order to facilitate the discharge of an obligation on the part of the person who is the ostensible subject of the information. Once that obligation has been discharged (I have been paid), the information has no further reason to exist.
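That retention principle can be sketched in code: tie the PII's lifetime to the obligation itself, so that discharging the obligation destroys the data. Everything below (class names, fields) is a hypothetical illustration under that assumption, not a real payment system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    """Hypothetical order record whose PII lives only as long as the debt."""
    order_id: str
    amount_due: float
    # PII held only while the obligation (payment) is outstanding.
    billing_name: Optional[str] = None
    card_number: Optional[str] = None

    def record_payment(self, amount: float) -> None:
        self.amount_due -= amount
        if self.amount_due <= 0:
            # Obligation discharged: the PII has no further reason to exist.
            self._purge_pii()

    def _purge_pii(self) -> None:
        self.billing_name = None
        self.card_number = None

order = Order("ord-42", 19.99, billing_name="A. Customer",
              card_number="4111111111111111")
order.record_payment(19.99)
assert order.card_number is None  # PII gone once the debt is settled
```

The design choice is that deletion is a side effect of settling the obligation, rather than a separate cleanup job someone must remember to run.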

However, consumers like "convenience" when returning to, say, BuyMyStuff.com, and thereby delight in allowing BuyMyStuff to retain their credit card numbers, expiration dates, and CVC codes (even though the latter is intended to demonstrate the physical presence of the card). Merchants are happy to offer that convenience if it means customers will visit their site more often than a competitor's. The unintended side effect of the merchant's retention of this data is the creation of a target of opportunity.
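One way to offer the convenience without creating the target is tokenization: the merchant keeps only an opaque token, while the card number stays with the payment processor (and the CVC is stored by no one). The sketch below is hedged and simplified; the "processor vault" is a stand-in dictionary, not a real gateway API.

```python
import secrets

# Processor-side vault: maps opaque tokens to card numbers.
# In reality this lives with the payment processor, never the merchant.
_processor_vault: dict[str, str] = {}

def tokenize(card_number: str) -> str:
    """Processor issues a random token; the merchant stores only this."""
    token = secrets.token_hex(16)
    _processor_vault[token] = card_number
    return token

def charge(token: str, amount_cents: int) -> bool:
    """Merchant charges by token; only the processor ever sees the card number."""
    return token in _processor_vault and amount_cents > 0

token = tokenize("4111111111111111")
assert token != "4111111111111111"   # merchant's copy reveals nothing
assert charge(token, 1999)           # convenience preserved without storing the PAN
```

A breach of the merchant's database then yields only tokens, which are useless outside that merchant/processor relationship.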

One might argue that privacy advocacy has reduced indiscriminate PII retention, or at least offered consumers clearer "opt-out" clauses that have the same effect, especially in cases where a merchant retains consumer information in order to target them with more focused sales and marketing campaigns.

But the first case is the more problematic one: rather than merely retaining preferences and buying behaviours to facilitate more spending, it creates an attack vector for those who want to perpetrate fraud against the subjects of the data and their credit card providers. You might call this kind of retention bad. I would agree.

There is also a privacy case that questions the extent to which governments should be permitted to surveil their populace. The data aggregation implied by the so-called "Real-ID" effort represents such a case, and paranoia aside, it should give us reason to pause and ask "Why?"

Systems designed with the principle of fitness-for-purpose necessarily constrain themselves to capabilities which support their primary objectives. The more general the design of "general purpose systems," the more unintended side effects they will exhibit when broadly deployed. I believe this will be observed whether for so-called "Identity Management" (if it looks like user administration . . ., etc.) or "National Security" ("let's keep our borders secure!").

At the end of the day (I'm going home in a minute), the systems we deploy should reflect explicit requirements and expected consequences, not hidden agendas or unintended consequences. Our job should be to discover each, and to be scrupulously honest about their implications to our customers and to society at large. The degree of rancor observable in this debate outside our happy little corporation is, I think, a measure of the public's distrust of us in our designs, and of vendors (or governments) in their deployments. We might consider what it means to address each.