New Zealand police have been setting up a $9 million facial recognition system run by a private US company. Radio New Zealand reports that 15,000 facial images a year would be recorded, a number that may expand up to 10-fold. (Article)
Unless police can predict human behaviour before something happens, the technology is only useful for investigative purposes, as it delivers identification after the fact. A crime would have to be committed before any offenders could be identified and brought to justice. The industry labels this reactive. In this scenario, CCTV and facial recognition technology have to prove their worth by showing an increase in crime detection rates. The consequences of facial recognition technology for every individual New Zealander appearing in public begin with privacy concerns and reach far into human rights concerns (e.g. here).
The apparent fact that police and government utilise a private entity — located outside New Zealand — allows a few conclusions. For instance, governmental ineptitude in handling information technology and grasping its implications shows a common pattern of privatisation. While global movements emerge that demand bringing essential services back under the power of democratic institutions, politicians seem to hop on the train of new technology that may affect individual rights and freedoms profoundly.
Promoters of facial recognition systems have to justify their application. If arguments point to the crime-preventive character of such technology, i.e. active prevention, there seem to be two possibilities: predetermination and deterrence. Let’s investigate both options.
Prevention by predetermination
To prevent a crime, police have to know that it is about to happen. The precondition for this is to recognise an action that will open an objective chain of causality ending in an offence. This is called proactive. So much for theory. How could this be possible? One approach could be to detect behavioural patterns, e.g. carrying a crowbar or weapon in an environment that provides no necessity for it. This would require a predictive algorithm that computers can calculate.
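In its crudest form, such a predictive step could be imagined as a rule table that pairs a detected object with the context it appears in. The following is a minimal, purely hypothetical sketch; the rules, names and contexts are invented for illustration and say nothing about how any real system works:

```python
# Hypothetical rule-based behavioural flagging. It assumes an upstream vision
# system has already detected an object and classified the surrounding scene;
# both inputs here are plain strings, and all rules are invented examples.

SUSPICIOUS_COMBINATIONS = {
    # (detected object, scene context) pairs an operator might configure
    ("crowbar", "residential street at night"),
    ("weapon", "shopping mall"),
}

def flag_detection(detected_object: str, scene_context: str) -> bool:
    """Return True if the object/context pair matches a configured rule."""
    return (detected_object, scene_context) in SUSPICIOUS_COMBINATIONS

print(flag_detection("crowbar", "residential street at night"))  # True
print(flag_detection("crowbar", "construction site"))            # False
```

Even this toy version shows the core difficulty: someone has to decide, in advance, which combinations count as suspicious, and every such decision encodes assumptions about who and what looks out of place.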
Another approach is to analyse crimes by time and place. Similar to Google’s “most busy times” indicator on a business listing, this alternative creates statistics by learning when and where things happen, enabling police to stuff any holes in the fence. Crime has to happen first before it can be minimised or eradicated by appropriate measures.
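The statistical idea behind this is simple aggregation. A minimal sketch, with an invented toy incident log (the times, places and counts are illustrative only):

```python
from collections import Counter

# Toy incident log: (hour of day, location). All data invented for illustration.
incidents = [
    (23, "Queen St"), (23, "Queen St"), (2, "Queen St"),
    (23, "K Rd"), (14, "Queen St"),
]

# Count incidents per (hour, location) pair, analogous to a "busy times"
# histogram: past events are tallied to predict where future ones cluster.
hotspots = Counter(incidents)

# The most frequent time/place pair suggests where to patrol next.
print(hotspots.most_common(1))  # [((23, 'Queen St'), 2)]
```

Note what the sketch makes explicit: the method only ever counts crimes that have already occurred, which is exactly why prevention of this kind is inherently after the fact.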
A common method of preventive measures through face recognition is to tap into a database of known offenders and terrorists that may provide a match for the face recognised. The face of a human becomes a key in a (privately owned) database. This scenario is mainly used in marketing by facial recognition providers to justify their applications. It sidesteps privacy concerns by subtly stating that “one who hasn’t done wrong has nothing to be afraid of.”
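Technically, using a face as a database key usually means reducing it to a numeric embedding vector and comparing that vector against stored ones. A minimal sketch of the matching step, assuming such embeddings already exist; the identities, vectors and threshold below are all invented for illustration:

```python
import math

# Hypothetical watchlist: identity -> stored face embedding. In a real system
# these vectors would have hundreds of dimensions; 3 are used here for clarity.
watchlist = {
    "offender_042": [0.12, 0.80, 0.55],
    "offender_913": [0.90, 0.10, 0.30],
}

def euclidean(a, b):
    """Straight-line distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_face(embedding, threshold=0.25):
    """Return the closest watchlist identity within the threshold, else None."""
    best_id, best_dist = None, float("inf")
    for identity, stored in watchlist.items():
        dist = euclidean(embedding, stored)
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id if best_dist <= threshold else None

print(match_face([0.11, 0.79, 0.56]))  # offender_042 (close match)
print(match_face([0.50, 0.50, 0.50]))  # None (no one within threshold)
```

The threshold is the crux: set it loosely and innocent passers-by produce false matches; set it tightly and the system misses the very offenders it is marketed on, which is why “nothing to be afraid of” glosses over a real engineering trade-off.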
But how much can an individual trust a corporation in general? How much would you trust your own government, especially once it uses private contractors? Ultimately, facial recognition is not about trust but about awareness of its use and potential. More questions arise: Will my face be stored nonetheless on (international) servers? Are there laws in place that guarantee people will be deleted from an offenders’ database after an offence has become spent? And how would my elected government make sure this happens globally? It cannot, at this stage, guarantee that my facial data won’t be traded internationally (as they apparently are).
Interpol maintains a database of facial images from 160 countries “which make a unique global criminal database”, which may give further food for thought.
Deterrence by consciousness
The conscious acknowledgment by everyone in public space of being watched can be a deterrent to crime. This scenario shifts the issue of facial recognition and crime prevention from the algorithmic world of data processing to the psychological plane. And this is, in itself, the most frightening aspect of all. The psychological impact of being watched is self-explanatory, and tech companies will most probably do anything to avoid it: bringing the reality of facial recognition systems to mind would simply have an adverse effect on their use. The current state of politics is based on the idea of a social contract, negotiated out of individual freedom, that forms the democratic collective power. Deterrence could lead to the recognition of a “police state” and hence be diametrically opposed to a free society. We (and our governments) may instead be fed with marketing pinpointing security advantages and assurances of data security.

The issue is currently not the devastating effect on privacy itself but the fact that governments and individuals do not consciously grasp the impact. The spying glass is indeed focused on every one of us, but since we cannot see it, we live assured that there is nothing to be afraid of. Governments, on the other hand, are concerned about social and political stability, free or not, and IT may offer solutions that enable control (“of the situation”).
Privacy is a part of freedom and therefore shares the same problem: it cannot be valued unless its boundaries are consciously (i.e. physically and emotionally) threatened. Facial recognition technology has such a strong impact on every individual’s life that it may pay not to trust but to inform, and to be able to enter a public dialogue on its merits and dangers.