I realised something this morning. Our legal and criminal-justice systems are fundamentally broken in one respect: if the victim of a perceived crime cannot put a name to the suspected perpetrator, the police will not be able to pursue an investigation.
I understood this recently, as what most would perceive as shit tech, experienced over more than a year, turned in my mind into a deliberate conspiracy to interfere with my home telecommunications and computer networks. I was fairly sure who might be behind it, but had no proof, and so did not feel able to give a name or names when asked to do so by the police to whom I reported my suspicions.
The matter remained unpursued as a result, however unusual the events were.
The other day I similarly discovered, and duly reported on these pages, that it is not illegal (not a crime, anyway) to sell mobile phones without having a customer-support process in place to block, via the IMEI, the functioning of a stolen device. The Cheshire police told me it would be a matter of internal business procedure, and that in English and Welsh law at least there is no legislation requiring companies of any size even to keep a record of a device's IMEI so that the customer could consult it in case of theft.
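To make concrete how small an ask this record-keeping is, here is a minimal sketch of the kind of sale-to-IMEI register a retailer could keep, so a customer could retrieve the number after a theft and ask their network to blocklist it. All names here are hypothetical; the only real detail assumed is that an IMEI is fifteen digits whose last digit is a Luhn check digit.

```python
# Illustrative sketch only: the minimal record-keeping the post argues
# retailers are not legally required to provide. Hypothetical names.
from typing import Optional


def luhn_valid(imei: str) -> bool:
    """Check that a string is a 15-digit IMEI with a valid Luhn check digit."""
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the left
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


class ImeiRegistry:
    """Maps a sale reference to the IMEI sold, so a customer can look it
    up after a theft and report it for blocklisting."""

    def __init__(self) -> None:
        self._sales: dict[str, str] = {}

    def record_sale(self, order_id: str, imei: str) -> None:
        if not luhn_valid(imei):
            raise ValueError("not a valid 15-digit IMEI")
        self._sales[order_id] = imei

    def lookup(self, order_id: str) -> Optional[str]:
        return self._sales.get(order_id)
```

A retailer of any size could implement the equivalent in an afternoon; the point of the sketch is that nothing in the process is technically demanding.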
This seems absurd to me, but it underlines my point at the top of today's post: where process does not require something, where societal harm exists but no established way of addressing it does, an investigation will not be carried out, however much money one might have (and for most of us money is not an option anyway).
Not because the police are playing silly buggers.
Simply because the police must prioritise according to what they unavoidably must do.
This, of course, in very small but key cases does allow the powerful to play silly buggers under the pretence that process and procedure tie their otherwise collaborative hands. But such thoughts are too grand for this particular post, and perhaps would need to be expanded upon elsewhere, or at another time.
What I do see, after studying International Criminal Justice at MA level for a year, is that the most important things which take place in the world are a-legal. A-legal in the sense that a-moral means outside morality rather than against it: outside the law altogether.
And it’s not even the laws or legislation which exist and obfuscate life, instead of clarifying exactly where right should ennoble us and obligation could become us. No. It all happens prior to such codification: without a name, a process or a procedure, you simply cannot deliver justice to a victim, however strongly that victim suspects a crime has been committed.
I would hope, therefore, that a cavalry might now exist in 21st-century-land: a cavalry from which we might obtain real succour.
Big and Open Data, machine learning and artificial intelligence are huge ideas which have the relative virtue, in popular parlance, of saying everything and defining little. Like love, these terms currently mean almost anything we could imagine them to, and promise almost every solution, even where the driving problem has not been properly contemplated.
Yet, despite all this, I remain a convinced optimist about the positives technology can deliver, in particular where the right frameworks, societal structures, goals and people are in charge.
Let us imagine, then, that we shall not simply use the above-mentioned triumvirate to simplify and cheapen existing criminal-justice and legal systems: instead, let us imagine we use technology to add to both what neither has delivered before.
Let us take the example of total surveillance, where everyone is judged guilty and must prove their innocence daily and repeatedly, certainly on interacting with governments, private-sector organisations and various semi-official institutions. Here, machines scour our telecommunications and devices persistently to detect anomalies which might indicate anti-societal thoughts and exchanges before they flower into concrete real-world or (these days, perhaps even more strikingly) virtual-world actions. This is done, and widely accepted by many citizens of democracies, as necessary to prevent terrorism.
But my thought this morning runs as follows: if the capacity exists to both identify and prevent crimes of terrorism before they become suffered realities, and if such systems are no longer to be changed or rolled back, and I mean ever (a reasonable assumption, surely, in the light of the dynamics so-called Western liberal democracies now exhibit), why not begin to apply them in all other sorts of areas? What, really, is there to stop us? And, even, why should we stop?
And thus, in so doing, why not construct what we might term “good-behaviour machines” – that is to say, “societally-accepted-behaviour algorithms and intelligences” – which, instead of only scouring people’s everyday communications for threats to civic security and safety, would record and analyse all network and individual interactions in order better to detect all zemiologically harmful activities?
We could then, as a hierarchically just, peer-to-peer and egalitarian civilisation, act and intervene on the Big and Open Data collected, both in cases where the victims were aware of the zemiology being committed against them as well as in cases where the victims were utterly in the dark.
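At its technical core, the kind of system imagined here is an anomaly detector run over interaction records. The toy sketch below flags days whose interaction volume deviates sharply from an account's own baseline; the function name, the data, the threshold and the equation of "anomaly" with "harm" are all my assumptions, not a real detection system.

```python
# Toy sketch of a "societally-accepted-behaviour algorithm": flag days
# whose interaction count lies unusually far from an account's own mean.
# Purely illustrative; threshold and inputs are assumptions.
from statistics import mean, stdev


def flag_anomalies(daily_counts, threshold=2.0):
    """Return indices of days whose count is more than `threshold`
    sample standard deviations away from the mean count."""
    if len(daily_counts) < 2:
        return []  # no baseline to deviate from
    mu = mean(daily_counts)
    sigma = stdev(daily_counts)
    if sigma == 0:
        return []  # perfectly uniform behaviour, nothing to flag
    return [i for i, count in enumerate(daily_counts)
            if abs(count - mu) / sigma > threshold]
```

Even this trivial example shows the political weight hiding in a single parameter: lower the threshold and more ordinary lives become "anomalous", which is precisely why who sets such numbers matters as much as the machinery itself.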
What do you think? We might not be able to fight total surveillance, a de facto reality now impossible to un-engineer. We might no longer even have to hand the theology of sousveillance to help us generate effective oversight from beneath. But if we legislate so that the tools now in place detect everyone’s actions, from the very top of the societal tree to the very bottom, and put their pursuit and prosecution in the hands of machine intelligences, who’s to say the outcomes would be any worse than they currently are?
After all, nothing can possibly be more nefarious than the human learning most Western top-level political leaders seem to be demonstrating at the moment.