One of the aspects of the Prism-related discussions that hasn’t sat well in my mind is the use of the terms “surveillance” and “spying”. That’s not to say that the American government hasn’t been infringing on privacy, but I don’t think those terms adequately describe what large-scale metadata collection entails. After all, the image conjured by those terms is active investigation, whereas from what I understand of the programmes, most of the collected metadata is never actually used. The government doesn’t care about most of us.
This evening, I’ve been re-reading William Gibson’s Neuromancer and a line popped out at me, one which I think relates to the unease I feel about the use of “surveillance” as a term in the metadata debate. It is, I admit, a conversation between a drugged-up hacker, Case, and an artificial intelligence called Wintermute, but bear with me:
“Bullshit. Can you read my mind, Finn?” He grimaced. “Wintermute, I mean.”
“Minds aren’t read. See, you’ve got the paradigms print gave you, and you’re barely print-literate. I can access your memory, but that’s not the same as your mind.”
Governments are undoubtedly using mass data collection as a means of identifying and surveilling individuals and groups. But the act of mass data collection isn’t the same as the act of surveillance. Rather, I think the word we’re looking for is access, or ingress – the act or right of entering our private lives. In many ways, I think this is rather worse for the ‘privacy’ of the average citizen than active surveillance. After all, traditional surveillance works best from the moment it begins – it is far easier for a state or a company to acquire information about your present and into your future than it is to acquire a similar volume or quality of data about your past. Your past is protected by the inability of others to access data that has long since been deleted or forgotten, or went untracked.
Representing this idea in a cheap Excel graph, the sum total of data available to an entity trying to surveil me at the point in time indicated by the black line looks like this:
In other words, if someone gets interested in me, then they can surveil quite a lot, but it’s the decision to take an interest that determines how much they get to see. In terms of privacy, quite a lot of what I have done is private and hidden from them, by dint of it being in the past. Conversely, the mass collection of metadata makes the point in time at which an entity starts to actively surveil me irrelevant – all that data is preserved in a huge server farm somewhere in America.
Cue shoddy Excel graph 2:
The point I’m trying to make is that under the second set of circumstances, the amount of information available to a potential ‘snooper’ is independent of the timing of the act of surveillance. A kid born today, in a world where intelligence agencies hoover up this kind of information, lives forever to the right of the “Metadata collection begins” point on the graph. Governments aren’t necessarily surveilling everyone, but they’re building the datasets required to ingress into anyone’s history, back to the earliest point of metadata collection, whenever they become interested.

How does one control this? The common option is tied to the use of “surveillance” as a concept: stop the government from collecting any data. That’s quite unlikely to happen, I think. What interests me is that we have no control over the temporal limits of metadata collection (how far back records go), nor do we have any realistic control over the deletion of metadata. We only have trust. I trust Google (perhaps stupidly) to delete data when I ask them to, but who trusts an intelligence agency to do the same?
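The difference between the two graphs can be put as a toy model (the years and the one-unit-of-data-per-year rate below are illustrative assumptions of mine, not figures from anywhere):

```python
# Toy model: units of personal data visible to a snooper, assuming a
# person generates one unit of data per year (an illustrative assumption).

def visible_traditional(surveil_start, now):
    """Traditional surveillance: only data generated after it begins."""
    return max(0, now - surveil_start)

def visible_with_retention(collection_start, surveil_start, now):
    """Mass metadata retention: the archive already holds everything back
    to collection_start, so when the snooper takes an interest is irrelevant
    (surveil_start is unused)."""
    return max(0, now - collection_start)

# A snooper who takes an interest in 2013 vs 2023, with retention from 2007:
for start in (2013, 2023):
    print(start,
          visible_traditional(start, 2025),          # depends on start
          visible_with_retention(2007, start, 2025)) # always the same
```

Under traditional surveillance the snooper who starts in 2013 sees twelve units by 2025 while the one who starts in 2023 sees two; with retention, both see eighteen. The timing of the act of surveillance drops out of the equation entirely.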