Essentially, the process of face recognition is performed in two steps: the first involves feature extraction and selection, and the second is the classification of objects. Later developments introduced new technologies to the procedure. Some of the most notable include the following techniques:
Traditional
Some face recognition algorithms identify facial features by extracting landmarks, or features, from an image of the subject's face. For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images with matching features.
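For illustration, this landmark-based idea can be sketched in a few lines of Python. This is a hypothetical toy example; the landmark coordinates and the normalization scheme are invented for the sketch and do not correspond to any deployed algorithm:

```python
from itertools import combinations
from math import dist

def geometric_signature(landmarks):
    """Turn (x, y) landmark coordinates into a scale-invariant
    feature vector: every pairwise distance, divided by the
    largest distance so overall image scale cancels out."""
    distances = [dist(a, b) for a, b in combinations(landmarks, 2)]
    largest = max(distances)
    return [d / largest for d in distances]

def match_score(sig_a, sig_b):
    """Sum of absolute differences between two signatures;
    smaller scores indicate a closer match."""
    return sum(abs(a - b) for a, b in zip(sig_a, sig_b))

# Hypothetical landmarks: two eyes, nose tip, and two mouth corners.
probe = [(30, 40), (70, 40), (50, 60), (35, 80), (65, 80)]
# The same face photographed at a larger scale.
gallery = [(x * 1.1, y * 1.1) for x, y in probe]
score = match_score(geometric_signature(probe), geometric_signature(gallery))
```

Because the signature is normalized by the largest distance, the scaled copy scores as a near-exact match, while a face with different proportions would produce a noticeably larger score.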
Other algorithms normalize a gallery of face images and then compress the face data, only saving the data in the image that is useful for face recognition. A probe image is then compared with the face data. One of the earliest successful systems is based on template matching techniques applied to a set of salient facial features, providing a sort of compressed face representation.
Recognition algorithms can be divided into two main approaches: geometric, which looks at distinguishing features, and photometric, which is a statistical approach that distills an image into values and compares those values with templates to eliminate variances. Some classify these algorithms into two broad categories: holistic and feature-based models. The former attempts to recognize the face in its entirety, while the latter subdivides the face into components (features) and analyzes each one together with its spatial location relative to the other features.
Popular recognition algorithms include principal component analysis using eigenfaces, linear discriminant analysis using Fisherfaces, elastic bunch graph matching, the hidden Markov model, multilinear subspace learning using tensor representations, and neuronal-motivated dynamic link matching.
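The eigenface approach in particular compresses each face to a small set of coefficients. A minimal sketch follows, using random arrays as stand-ins for real face images; the gallery size, image dimensions, and number of components are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(20, 64))   # 20 "faces", each an 8x8 image flattened

# Eigenfaces: principal components of the mean-centred gallery, via SVD.
mean_face = gallery.mean(axis=0)
centred = gallery - mean_face
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:10]                  # keep the 10 strongest components

def project(image):
    """Compress a face image to its eigenface coefficients."""
    return eigenfaces @ (image - mean_face)

# Recognition is nearest-neighbour search in the compressed space.
gallery_codes = centred @ eigenfaces.T
probe = gallery[7] + rng.normal(scale=0.05, size=64)   # noisy copy of face 7
best = int(np.argmin(np.linalg.norm(gallery_codes - project(probe), axis=1)))
```

The probe image, although perturbed by noise, still lands closest to its own gallery entry in the compressed space; this nearest-neighbour step is what "searching for matching features" amounts to in the eigenface setting.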
3-Dimensional recognition
Three-dimensional face recognition uses 3D sensors to capture information about the shape of a face. This information is then used to identify distinctive features on the surface of a face, such as the contour of the eye sockets, nose, and chin.
One advantage of 3D face recognition is that it is not affected by changes in lighting like other techniques. It can also identify a face from a range of viewing angles, including a profile view. Three-dimensional data points from a face vastly improve the precision of face recognition. 3D research is enhanced by the development of sophisticated sensors that do a better job of capturing 3D face imagery. The sensors work by projecting structured light onto the face. Up to a dozen or more of these image sensors can be placed on the same CMOS chip; each sensor captures a different part of the spectrum.
Even a perfect 3D matching technique can be sensitive to expressions. To address this, a group at the Technion applied tools from metric geometry to treat expressions as isometries.
A newer method captures a 3D picture using three tracking cameras pointed at different angles: one camera pointing at the front of the subject, a second to the side, and a third at an angle. The cameras work together to track a subject's face in real time and to detect and recognize it.
Skin texture analysis
Another emerging trend uses the visual details of the skin, as captured in standard digital or scanned images. This technique, called Skin Texture Analysis, turns the unique lines, patterns, and spots apparent in a person's skin into a mathematical space.
Surface texture analysis works much the same way facial recognition does. A picture is taken of a patch of skin, which is broken into smaller blocks and turned into a mathematical, measurable space in order to distinguish lines, pores, and the actual skin texture. It can identify differences between identical twins, which is not yet possible using facial recognition software alone.
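The idea of mapping a skin patch into a measurable space can be sketched crudely as follows; the per-block statistics here are an invented stand-in for proprietary texture descriptors:

```python
import numpy as np

def skin_texture_signature(patch, block=4):
    """Toy texture descriptor: split a grayscale skin patch into
    small blocks and record each block's mean and variance, giving
    a vector that can be compared between two patches."""
    h, w = patch.shape
    sig = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = patch[i:i + block, j:j + block]
            sig.extend([b.mean(), b.var()])
    return np.array(sig)

rng = np.random.default_rng(1)
patch_a = rng.uniform(size=(16, 16))
patch_b = rng.uniform(size=(16, 16))   # a different skin patch
same = np.linalg.norm(skin_texture_signature(patch_a) - skin_texture_signature(patch_a))
diff = np.linalg.norm(skin_texture_signature(patch_a) - skin_texture_signature(patch_b))
```

Identical patches yield identical signatures (distance zero), while distinct patches separate cleanly; a real system would use far richer descriptors of lines and pores, but the comparison-in-feature-space structure is the same.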
Tests have shown that with the addition of skin texture analysis, performance in recognizing faces can increase 20 to 25 percent.
Facial recognition combining different techniques
As every method has its advantages and disadvantages, technology companies have combined traditional recognition, 3D recognition, and skin texture analysis to create recognition systems with higher rates of success.
Combined techniques have an advantage over other systems: they are relatively insensitive to changes in expression, including blinking, frowning, or smiling, and can compensate for mustache or beard growth and the appearance of eyeglasses. Such systems are also uniform with respect to race and gender.
Thermal cameras
A different way of capturing input data for face recognition is to use thermal cameras, which detect only the shape of the head and ignore accessories such as glasses, hats, or makeup. Unlike conventional cameras, thermal cameras can capture facial imagery even in low-light and nighttime conditions without using a flash and exposing the position of the camera. However, a problem with using thermal images for face recognition is that the available databases are limited. Diego Socolinsky and Andrea Selinger (2004) researched the use of thermal face recognition in real-life and operational scenarios, and at the same time built a new database of thermal face images. The research used low-sensitivity, low-resolution ferroelectric sensors capable of acquiring long-wave thermal infrared (LWIR) imagery. The results show that a fusion of LWIR and regular visual cameras produces better results for outdoor probes. Indoor results show that the visual camera achieved 97.05% accuracy, LWIR 93.93%, and the fusion 98.40%; on outdoor probes, the visual camera achieved 67.06%, LWIR 83.03%, and the fusion 89.02%. The study used 240 subjects over a period of 10 weeks to create the new database, with data collected on sunny, rainy, and cloudy days.
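The benefit of fusing the two modalities can be illustrated with a simple weighted score-level combination. The weights and threshold below are invented for the sketch and are not Socolinsky and Selinger's actual method:

```python
def fuse_scores(visual_score, lwir_score, w_visual=0.5):
    """Weighted score-level fusion of two matchers' similarity
    scores, each assumed to lie in [0, 1]; higher means a more
    confident match."""
    return w_visual * visual_score + (1 - w_visual) * lwir_score

def accept(visual_score, lwir_score, threshold=0.6):
    """Accept the match only if the fused score clears a threshold."""
    return fuse_scores(visual_score, lwir_score) >= threshold

# Outdoors the visual matcher degrades (glare, shadows) while the
# LWIR matcher stays stable; fusion can still accept a true match
# that the visual score alone would reject.
accepted = accept(visual_score=0.45, lwir_score=0.85)
```

This mirrors the pattern in the reported numbers: where one modality is weak (visual outdoors), the fused decision still benefits from the stronger one.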
In 2018, researchers from the U.S. Army Research Laboratory (ARL) developed a technique that would allow them to match facial imagery obtained using a thermal camera with images in databases that were captured using a conventional camera. This approach utilized artificial intelligence and machine learning to allow researchers to visibly compare conventional and thermal facial imagery. Known as a cross-spectrum synthesis method due to how it bridges facial recognition across two different imaging modalities, this method synthesizes a single image by analyzing multiple facial regions and details. It consists of a non-linear regression model that maps a specific thermal image to a corresponding visible facial image, and an optimization problem that projects the latent representation back into the image space.
ARL scientists have noted that the approach works by combining global information (i.e. features across the entire face) with local information (i.e. features regarding the eyes, nose, and mouth). In addition to enhancing the discriminability of the synthesized image, the facial recognition system can be used to transform a thermal face signature into a refined visible image of a face. According to performance tests conducted at ARL, researchers found that the multi-region cross-spectrum synthesis model demonstrated a performance improvement of about 30% over baseline methods and about 5% over state-of-the-art methods. It has also been tested for landmark detection for thermal images.
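The general idea of combining global and local information can be sketched as concatenating normalized feature vectors; this is a generic illustration with invented weights and random stand-in features, not ARL's actual model:

```python
import numpy as np

def combine_features(global_feat, region_feats, w_global=0.5):
    """Build one embedding from a whole-face feature vector plus
    per-region vectors (e.g. eyes, nose, mouth): L2-normalize each
    part, weight it, and concatenate."""
    w_local = (1.0 - w_global) / len(region_feats)
    parts = [w_global * global_feat / np.linalg.norm(global_feat)]
    parts += [w_local * f / np.linalg.norm(f) for f in region_feats]
    return np.concatenate(parts)

def cosine(a, b):
    """Cosine similarity between two embeddings; 1.0 is identical."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
face_a = [rng.normal(size=32) for _ in range(4)]   # global + 3 regions
face_b = [rng.normal(size=32) for _ in range(4)]   # a different face
same = cosine(combine_features(face_a[0], face_a[1:]),
              combine_features(face_a[0], face_a[1:]))
other = cosine(combine_features(face_a[0], face_a[1:]),
               combine_features(face_b[0], face_b[1:]))
```

An embedding compared against itself scores a perfect 1.0, while a different face scores lower; weighting the global part against the regions is the design lever the combined approach exposes.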
Application
Mobile platforms
Social media
Social media platforms have adopted facial recognition capabilities to diversify their functionalities in order to attract a wider user base amidst stiff competition from different applications.
Founded in 2013, Looksery went on to raise money for its face modification app on Kickstarter. After successful crowdfunding, Looksery launched in October 2014. The application allows video chat with others through a special filter that modifies the look of users' faces. While there are image-augmenting applications such as FaceTune and Perfect365, they are limited to static images, whereas Looksery applied augmented reality to live video. In late 2015, SnapChat purchased Looksery, which became the basis of its landmark lenses function.
SnapChat's animated lenses, which used facial recognition technology, revolutionized and redefined the selfie by allowing users to add filters that change the way they look. The selection of filters changes every day; examples include one that makes users look like an old and wrinkled version of themselves, one that airbrushes their skin, and one that places a virtual flower crown on top of their head. The dog filter is the most popular and helped propel the continual success of SnapChat, with celebrities such as Gigi Hadid and Kim Kardashian regularly posting videos of themselves with it.
DeepFace is a deep learning facial recognition system created by a research group at Facebook. It identifies human faces in digital images. It employs a nine-layer neural net with over 120 million connection weights, and was trained on four million images uploaded by Facebook users. The system is said to be 97% accurate, compared to 85% for the FBI's Next Generation Identification system. One of the creators of the software, Yaniv Taigman, came to Facebook via their acquisition of Face.com.
ID verification
An emerging use of facial recognition is in ID verification services. Many companies are now working in this market to provide such services to banks, ICOs, and other e-businesses.
Face ID
Apple introduced Face ID on the flagship iPhone X as a biometric authentication successor to Touch ID, a fingerprint-based system. Face ID has a facial recognition sensor that consists of two parts: a "Romeo" module that projects more than 30,000 infrared dots onto the user's face, and a "Juliet" module that reads the pattern. The pattern is sent to a local "Secure Enclave" in the device's central processing unit (CPU) to confirm a match with the phone owner's face. The facial pattern is not accessible by Apple. The system will not work with eyes closed, in an effort to prevent unauthorized access.
The technology learns from changes in a user's appearance, and therefore works with hats, scarves, glasses, many sunglasses, beards, and makeup.
It also works in the dark. This is done by using a "Flood Illuminator", which is a dedicated infrared flash that throws out invisible infrared light onto the user's face to properly read the 30,000 facial points.
Deployment in security services
The Australian Border Force and New Zealand Customs Service have set up an automated border processing system called SmartGate that uses face recognition, comparing the face of the traveller with the data in the e-passport microchip. All Canadian international airports use facial recognition as part of the Primary Inspection Kiosk program, which compares a traveller's face to their photo stored on the ePassport. This program first came to Vancouver International Airport in early 2017 and was rolled out to all remaining international airports in 2018–2019. The Tocumen International Airport in Panama operates an airport-wide surveillance system using hundreds of live face recognition cameras to identify wanted individuals passing through the airport.
Police forces in the United Kingdom have been trialling live facial recognition technology at public events since 2015. However, a recent report and investigation by Big Brother Watch found that these systems were up to 98% inaccurate.
In May 2017, a man was arrested using an automatic facial recognition (AFR) system mounted on a van operated by the South Wales Police. Ars Technica reported that "this appears to be the first time [AFR] has led to an arrest".
Live facial recognition has been trialled in the streets of London since 2016. It will be used on a regular basis by the Metropolitan Police from the beginning of 2020.
United States
Flight boarding gate with "biometric face scanners" developed by U.S. Customs and Border Protection at Hartsfield–Jackson Atlanta International Airport.
The U.S. Department of State operates one of the largest face recognition systems in the world, with a database of 117 million American adults whose photos are typically drawn from driver's license records. Although it is still far from completion, it is being put to use in certain cities to give clues as to who was in a photo. The FBI uses the photos as an investigative tool, not for positive identification. As of 2016, facial recognition was being used to identify people in photos taken by police in San Diego and Los Angeles (not on real-time video, and only against booking photos), and use was planned in West Virginia and Dallas.
In recent years Maryland has used face recognition by comparing people's faces to their driver's license photos. The system drew controversy when it was used in Baltimore to arrest unruly protesters after the death of Freddie Gray in police custody. Many other states are using or developing a similar system; however, some states have laws prohibiting its use.
The FBI has also instituted its Next Generation Identification program to include face recognition, as well as more traditional biometrics like fingerprints and iris scans, which can pull from both criminal and civil databases. The federal Government Accountability Office criticized the FBI for not addressing various concerns related to privacy and accuracy.
Starting in 2018, U.S. Customs and Border Protection deployed "biometric face scanners" at U.S. airports. Passengers taking outbound international flights can complete the check-in, security, and boarding process after having facial images captured and verified by matching against ID photos stored in CBP's database. Images captured for travelers with U.S. citizenship will be deleted within 12 hours. The TSA has expressed its intention to adopt a similar program for domestic air travel during the security check process in the future. The American Civil Liberties Union is one of the organizations opposing the program, expressing concern that it will be used for surveillance purposes.
In 2019, researchers reported that Immigration and Customs Enforcement uses facial recognition software against state driver's license databases, including for some states that provide licenses to undocumented immigrants.
China
Boarding gates with facial recognition technology at Beijing West Railway Station
Many public places in China are equipped with facial recognition systems, including railway stations, airports, tourist attractions, expos, and office buildings.
As of late 2017, China has deployed facial recognition and artificial intelligence technology in Xinjiang. Reporters visiting the region found surveillance cameras installed every hundred meters or so in several cities, as well as facial recognition checkpoints at areas like gas stations, shopping centers, and mosque entrances. In 2020, China provided a grant to develop facial recognition technology to identify people wearing surgical or dust masks by matching solely to eyes and foreheads.
In October 2019, a professor at Zhejiang Sci-Tech University sued Hangzhou Safari Park for abusing private biometric information of customers. The park uses facial recognition technology to verify the identities of its Year Card holders. It is viewed as the first lawsuit regarding facial recognition systems in China.