Facial recognition predicts the probability that two different images contain face data from the same person, essentially confirming that the images are a match and hence show the same person. The maturity of digital camera technology has been a catalyst for the emergence of facial recognition, but it is stressing data transfer networks as well as data storage. Stored images can be as large as 200MB to 1GB, so the network must be fast and the images located in close proximity – essentially the definition of 5G. Yet concerns over privacy seem to be growing all over the world.
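To see why network speed and proximity matter, here is a rough back-of-the-envelope sketch of how long a single image at the sizes quoted above would take to move over a few nominal link speeds. The link rates are illustrative assumptions, not guaranteed real-world figures:

```python
# Transfer time for a single captured image over different link speeds.
# The 200 MB image size comes from this article; the link rates are
# illustrative nominal throughputs, not real-world guarantees.

def transfer_seconds(size_mb: float, link_mbps: float) -> float:
    """Seconds to move size_mb megabytes over a link_mbps link."""
    return (size_mb * 8) / link_mbps  # megabytes -> megabits

for label, mbps in [("4G LTE (~50 Mbps)", 50),
                    ("Gigabit wired (1,000 Mbps)", 1_000),
                    ("5G (~10,000 Mbps)", 10_000)]:
    print(f"{label}: {transfer_seconds(200, mbps):.2f} s")
```

At these nominal rates, a 200 MB image takes roughly 32 seconds over LTE but well under a second over the faster links – a gap that only widens at the 1 GB end of the range.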
Fiction Meets Reality for 5G-powered Facial Recognition
In his book 1984, George Orwell warned of a dystopian future where the authoritarian “Big Brother” regime monitors its citizens through television-like “telescreens.” Dystopian describes an imaginary community or society that is undesirable or frightening; it translates literally from Greek as “not-good place.” Orwell imagined a system of public surveillance under which people have no privacy in public or even in their own homes. “Big Brother” has since become a universal symbol for intrusive government and oppressive bureaucracy, and facial recognition is the technology that could make that vision real.
Recently, San Francisco, a global hub of technical innovation, became the first U.S. city to ban the use of facial recognition technology in police investigations.
“We can have security without being a security state,” San Francisco supervisor Aaron Peskin said at a recent Board of Supervisors meeting.
Numerous civil rights groups support the city ordinance. In London (and most everywhere else), the opposite is happening: police leverage facial recognition to the fullest extent. But people are starting to realize they are being filmed, and many want none of it. Police have stopped people who cover their faces when walking past facial recognition cameras. Some have been fined, and in some cases even arrested!
However, one of the main arguments against facial recognition in San Francisco is the technical accuracy of the systems. Storing and updating faces in a local database poses an enormous technical challenge. Every face has around 80 nodal points – the different peaks and valleys that make up facial features. For example:
- Distance between the eyes
- Width of the nose
- Depth of the eye sockets
- The shape of the cheekbones
- The length of the jaw line
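A minimal sketch of how such nodal-point measurements could be compared, assuming normalized feature values and an illustrative match threshold (real systems extract roughly 80 features and tune the threshold empirically):

```python
import math

# Illustrative nodal-point measurements in normalized units; a real
# system would extract ~80 such features from each image.
face_a = {"eye_distance": 0.42, "nose_width": 0.18, "socket_depth": 0.11,
          "cheekbone_shape": 0.37, "jaw_length": 0.55}
face_b = {"eye_distance": 0.43, "nose_width": 0.19, "socket_depth": 0.11,
          "cheekbone_shape": 0.36, "jaw_length": 0.56}

def euclidean_distance(a: dict, b: dict) -> float:
    """Euclidean distance between two feature vectors with the same keys."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

MATCH_THRESHOLD = 0.05  # illustrative; tuned per system in practice

def is_match(a: dict, b: dict, threshold: float = MATCH_THRESHOLD) -> bool:
    """Declare a match when the faces' feature vectors are close enough."""
    return euclidean_distance(a, b) < threshold

print(is_match(face_a, face_b))
```

The design choice here mirrors what the article describes: matching is not a yes/no lookup but a probability-like distance measure, which is why accuracy depends so heavily on image quality.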
Technology Enhancements for Facial Recognition Rendering
Two-dimensional facial recognition systems have been in use for more than three decades. Mature 2D systems achieve low error rates in controlled environments but are quite sensitive to illumination, pose variation, make-up, and facial expressions. Today there are more sophisticated 3D systems, as well as systems that convert 2D images into 3D for more accurate comparisons. The advent of 3D face recognition has made the technology accurate, fast, and as simple to use as fingerprint recognition.
Image capture takes place using very high-resolution cameras – usually 16 MP. The cameras also support Wide Dynamic Range (WDR), allowing high-quality imaging even in low-light and extreme-light conditions. Many image-processing functions are executed on the edge device itself (i.e., the camera). However, facial recognition solutions still require significant processing at the back-end edge data center; a good rule of thumb is 20 cameras per server. The images from the surveillance cameras feed into the facial recognition application suite in the edge data center, which then searches them against a reference database. The size of that reference database can vary greatly, but 30,000 images is a good median number.
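As a rough capacity-planning sketch based on the rule of thumb above (the cameras-per-server ratio, reference-database size, and per-image storage range are the figures quoted in this article; the 130-camera deployment is a hypothetical example):

```python
import math

CAMERAS_PER_SERVER = 20    # rule of thumb from the text
REFERENCE_IMAGES = 30_000  # typical reference-database size from the text

def servers_needed(cameras: int) -> int:
    """Back-end edge servers required for a given camera count."""
    return math.ceil(cameras / CAMERAS_PER_SERVER)

# Storage for the reference database, assuming 200 MB to 1 GB per
# stored image (the per-image sizes quoted earlier in this article).
storage_tb_low = REFERENCE_IMAGES * 0.2 / 1000   # 200 MB each -> TB
storage_tb_high = REFERENCE_IMAGES * 1.0 / 1000  # 1 GB each -> TB

print(servers_needed(130))                               # servers for 130 cameras
print(f"{storage_tb_low:.0f}-{storage_tb_high:.0f} TB")  # reference DB storage
```

Even this crude estimate shows why the back end is sized in racks rather than boxes: a modest city deployment quickly reaches multiple servers and tens of terabytes of nearby storage.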
Given that the face-recognition engine runs at the edge, and the process needs to work in real time, the flow of data from the cameras to the facial recognition application must have the lowest latency possible. The fabric can be either wired or wireless, but the edge data center hosting the facial recognition engine must be located in close proximity to the camera network. Operating this system in a 5G cluster would be ideal.
Technology Advancements Continue – As Do Cultural Impacts – for Facial Recognition
Facial recognition, as a security application, has reached a level of maturity that allows it to be deployed in critical situations. There are still issues with high volumes overwhelming the system or its reference databases, but the use of edge data centers in a 5G cluster can advance the performance and accuracy of facial recognition systems.
Maybe it shouldn’t be surprising that San Francisco has banned facial recognition technology. Even George Orwell’s 1984 has been banned and challenged for its controversial content. But I believe that 5G will power facial recognition in a much more accurate and unbiased way, breaking down barriers to wider acceptance. To anticipate that acceptance, Schneider Electric is taking strides to partner with tech companies like IBM on integrated approaches – like this reference design – to managing physical resources, ultimately enabling industry-leading solutions like 5G to power facial recognition innovations.