As a team of engineers, we are firm believers in technology for good. However, face recognition companies must actively monitor how their technologies are used to ensure that this value is upheld, irrespective of where they are deployed.
In the context of smart cities, it is generally the police who use face recognition software. The criminal databases against which video sources are checked remain entirely within police systems and are not shared with any third parties. In this capacity, the software is a hyper-efficient update to the manual monitoring that police camera operators have performed for many years.
At a software level, the algorithms work with unique facial features rather than images, which is of distinct importance in protecting privacy. When a person passes a camera linked to a system running our software, the image of their face is transformed into a digital imprint: a compact numerical template from which the original face image cannot be reconstructed. Only this digital imprint is compared against a database of similar imprints.
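To make the comparison step concrete, the sketch below shows how two imprints might be compared in a generic face recognition pipeline. It is purely illustrative: the 512-dimensional vectors, the cosine-similarity measure, and the threshold are simplified stand-ins, not the parameters of our production system.

```python
import numpy as np

def match_score(imprint_a: np.ndarray, imprint_b: np.ndarray) -> float:
    """Cosine similarity between two face imprints (feature vectors).

    An imprint is a fixed-length vector of numbers; the pixels of the
    original face image are not recoverable from it.
    """
    a = imprint_a / np.linalg.norm(imprint_a)
    b = imprint_b / np.linalg.norm(imprint_b)
    return float(np.dot(a, b))

# Two random 512-dimensional vectors standing in for real imprints.
probe = np.random.randn(512)
enrolled = np.random.randn(512)

THRESHOLD = 0.6  # illustrative decision threshold only
score = match_score(probe, enrolled)
print("match" if score >= THRESHOLD else "no match", f"(score={score:.2f})")
```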
In a crime-prevention scenario, when the digital imprint of someone passing a camera matches an entry in a database of wanted criminals, designated police officers receive an alert. The digital imprints of everyone else are deleted as soon as they have been cross-referenced against the database.
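The retention rule can be expressed in a few lines. Again, this is a simplified sketch rather than our actual implementation: the watchlist structure and the alert message are hypothetical, and the point is that a probe imprint lives only in memory for the duration of the check.

```python
import numpy as np

THRESHOLD = 0.6  # illustrative decision threshold only

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two imprints."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cross_reference(imprint: np.ndarray, watchlist: dict[str, np.ndarray]) -> str | None:
    """Check one probe imprint against the watchlist, then discard it.

    The probe is never written to storage, so a non-matching passer-by
    leaves no trace once this function returns.
    """
    for person_id, enrolled in watchlist.items():
        if similarity(imprint, enrolled) >= THRESHOLD:
            return person_id  # a match triggers an alert to designated officers
    return None

# Hypothetical usage with random vectors standing in for real imprints.
watchlist = {"suspect-042": np.random.randn(512)}
hit = cross_reference(np.random.randn(512), watchlist)
if hit is not None:
    print(f"ALERT: watchlist match for {hit}")
```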
For commercial applications, face recognition software collects anonymised data limited to estimated age and gender, generating consumer insights that enable businesses to improve brand communication and customer journeys. In cases where a business needs to react to specific people (for example, the VIP scenario, in which businesses use the system to recognise special guests and offer them VIP services), we expect businesses to obtain written confirmation from individuals authorising the use of their images for that purpose.
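In practice, "anonymised and limited to age and gender" means the analytics layer only ever aggregates coarse attributes. The sketch below illustrates the idea; the field names and age brackets are invented examples, not the schema of a real product.

```python
from collections import Counter

# Hypothetical per-detection records as an analytics module might emit:
# a coarse age bracket and a gender label, never an identity or an image.
detections = [
    {"age_bracket": "25-34", "gender": "female"},
    {"age_bracket": "35-44", "gender": "male"},
    {"age_bracket": "25-34", "gender": "male"},
]

# Only the aggregate counts are retained for business insight.
footfall = Counter((d["age_bracket"], d["gender"]) for d in detections)
for (bracket, gender), count in footfall.most_common():
    print(f"{bracket} {gender}: {count}")
```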
Face recognition companies invest significant resources in maintaining systems that are both reliable and secure from vulnerabilities. The software is highly configurable and modular, allowing substantial features to be added or switched off for different markets: for example, customers can blur the faces of unknown people in the system so that operators never receive any of their footage.
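A face-blurring option of this kind can be sketched with standard image tooling. The example below uses OpenCV's Gaussian blur as one plausible way to redact unmatched faces before a frame reaches an operator; the function and its inputs are our illustration, not a description of the actual feature.

```python
import cv2
import numpy as np

def redact_unknown_faces(frame: np.ndarray, face_boxes, known_flags) -> np.ndarray:
    """Blur every detected face that did not match an enrolled identity.

    frame       -- BGR image from the camera
    face_boxes  -- list of (x, y, w, h) face detections
    known_flags -- parallel list; True if the face matched an enrolled person
    """
    for (x, y, w, h), known in zip(face_boxes, known_flags):
        if not known:
            roi = frame[y:y + h, x:x + w]
            # A heavy Gaussian blur renders the face unrecognisable
            # before the frame ever reaches an operator's screen.
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```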
How do we ensure our software is not being misused?
In every country, we work in strict compliance with local laws and regulations, and we thoroughly check and vet our clients. Once our software is deployed, we protect privacy by having no involvement in its ongoing operation: clients are not required to use servers owned or operated by Recognito and can run our software on their own secure servers.
Questions have been raised around misidentification and bias, and we are aware that other face recognition systems have displayed such issues in the past. Unlike organisations that have encountered this problem, however, we trained our software's neural networks on images of people of different ethnicities, skin tones, and genders in equal proportions. The distinct way our neural network is built, combined with our engineering talent, has helped us mitigate such biases.
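One common way to achieve equal proportions is to sample each demographic group at an equal rate when building training batches. The sketch below shows that general technique; it is not our training code, and the grouping function is a placeholder.

```python
import random
from collections import defaultdict

def balanced_batches(samples, group_of, batch_size):
    """Yield training batches with equal counts from each demographic group.

    samples  -- list of training examples
    group_of -- placeholder function mapping a sample to its group label
    """
    by_group = defaultdict(list)
    for sample in samples:
        by_group[group_of(sample)].append(sample)
    groups = list(by_group)
    per_group = max(1, batch_size // len(groups))
    while True:
        batch = []
        for group in groups:
            # Sample with replacement so smaller groups are not exhausted.
            batch.extend(random.choices(by_group[group], k=per_group))
        random.shuffle(batch)
        yield batch
```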
We are strong supporters of the deployment of face recognition, as long as it is done in a controlled and responsible manner that serves the public good first and foremost.