Human face recognition: description of methods

Human face recognition is a difficult problem in computer vision. Early work in the field tended to focus on toy problems in which the observed world was carefully controlled and constructed: identifying boxes shaped like regular polygons, say, or simple objects such as scissors. In most cases the image background was carefully controlled to provide excellent contrast between the analyzed objects and their surroundings. Facial recognition clearly does not fall into this category; it is challenging precisely because it is a real-world problem. The human face is a natural, complex object whose contours and characteristics are not easily identified automatically. For this reason it is difficult to develop a mathematical model of the face that can serve as prior knowledge in the analysis of a particular image.

Facial recognition applications are widespread. Perhaps the most obvious is human-computer interaction: computers could become easier to use if, when you sat down at a terminal, the machine identified you by name and automatically loaded your personal preferences. Such identification could also improve other technologies such as voice recognition, since a computer that knows who is speaking can match the observed voice patterns against that individual's known voice profile. Facial recognition could also be used in security, as one of several mechanisms for identifying an individual. As a security measure it has the advantage that it can be performed quickly, perhaps even in real time, and does not require extensive equipment to implement.
Furthermore, it causes no particular inconvenience to the subject being identified, unlike, say, retinal scans. It has the disadvantage, however, of not being an infallible authentication method, since the appearance of the human face is subject to sporadic day-to-day changes (shaving, hairstyle, acne) as well as gradual changes over time (ageing). For this reason, facial recognition is perhaps best used to augment other identification techniques. A final domain where facial recognition could be useful is search engine technology. Combined with face detection systems, it could let users search for specific people in images, either by supplying a picture of the person to be found or, for known individuals, simply by name. One specific application is criminal mugshot databases, an environment perfectly suited to automatic face recognition since poses are standardized and lighting and scale are kept constant. Clearly, this type of technology could extend online search beyond the textual clues typically used when indexing information.

Facial recognition: development through history

Facial recognition is one of the most relevant applications of image analysis. Building an automated system that matches the human ability to recognize faces is a real challenge. Although humans are quite good at identifying familiar faces, we are not very good when confronted with a large number of unknown faces; computers, with almost unlimited memory and computing speed, are expected to surpass this human limit. Facial recognition remains an unsolved problem and a much-needed technology. A simple search for the phrase "facial recognition" in the IEEE Digital Library returns 9,422 results, 1,332 of them from 2009 alone. Many industrial sectors are interested in what it can offer.
Some examples include video surveillance, human-computer interaction, cameras, virtual reality and law enforcement. This multidisciplinary interest drives research and attracts attention from different disciplines, so it is not a problem confined to computer vision research. Facial recognition is a relevant topic in pattern recognition, neural networks, computer graphics, image processing and psychology. In fact, the first work on the topic dates back to the 1950s in psychology [21], where it was intertwined with other issues such as facial expression, the interpretation of emotions and the perception of gestures. Engineering began to show interest in facial recognition in the 1960s. One of the first researchers in the area was Woodrow W. Bledsoe. In 1960, Bledsoe and other researchers founded Panoramic Research, Inc., in Palo Alto, California. Most of the work done by this company involved AI-related contracts from the US Department of Defense and various intelligence agencies [4]. During 1964 and 1965, Bledsoe, together with Helen Chan and Charles Bisson, worked on using computers to recognize human faces [14, 15]. Because funding for this research was provided by an unnamed intelligence agency, little of the work was published. He later continued his research at the Stanford Research Institute. Bledsoe designed and implemented a semi-automatic system: a human operator selected coordinates on the face, and the computer then used this information for recognition. He described most of the problems that, even fifty years later, facial recognition still suffers from: variations in lighting, head rotation, facial expression and ageing. Research along these lines continued, attempting to measure subjective facial characteristics such as ear size or the distance between the eyes. A. Jay Goldstein, Leon D. Harmon and Ann B. Lesk, for example, used this approach at Bell Laboratories.
They described a vector of 21 subjective features, such as ear protrusion, eyebrow weight or nose length, as the basis for recognizing faces using pattern classification techniques. In 1973, Fischler and Elschlager attempted to measure similar features automatically [34]. Their algorithm used local template matching and a global fit measure to find and measure facial features. Other approaches also existed in the 1970s; some defined a face as a set of geometric parameters and then performed pattern recognition on those parameters. But the first fully automated facial recognition system was developed by Kanade in 1973. He designed and implemented a facial recognition program that ran on a computer system built for the purpose. The algorithm automatically extracted sixteen facial parameters. In his work, Kanade compared this automated extraction to manual extraction by a human, finding only a small difference. He obtained a correct-identification rate of 45-75%, and demonstrated that better results were obtained when irrelevant features were left out. In the 1980s several approaches were actively pursued, most of them continuing earlier trends. Some work attempted to improve the methods used to measure subjective characteristics; Mark Nixon, for example, presented a geometric measurement for eye spacing [5]. The template matching approach was improved with strategies such as deformable templates. The decade also brought new approaches: some researchers built facial recognition algorithms using artificial neural networks [1]. The first mention of eigenfaces in image processing, a technique that would become the dominant approach in subsequent years, was made by L. Sirovich and M. Kirby in 1986 [10]. Their method was based on principal component analysis: the goal was to represent an image in a lower dimension without losing much information, and then reconstruct it [6].
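A minimal sketch of this idea, assuming NumPy and using random vectors in place of real face images, projects each face onto a small principal-component basis and reconstructs it from a handful of coefficients:

```python
import numpy as np

def eigenface_basis(images, k):
    """Compute a k-dimensional principal-component basis from flattened faces.

    images: (n_samples, n_pixels) array, one flattened face per row.
    Returns the mean face and the top-k components (the "eigenfaces").
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data yields the principal components directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]          # each row of vt[:k] is one eigenface

def project(image, mean, basis):
    """Represent a face by its k coefficients in the eigenface basis."""
    return basis @ (image - mean)

def reconstruct(coeffs, mean, basis):
    """Rebuild an approximate face image from its k coefficients."""
    return mean + basis.T @ coeffs

# Toy demonstration: eight random "faces" of 100 pixels each.
rng = np.random.default_rng(0)
faces = rng.normal(size=(8, 100))
mean, basis = eigenface_basis(faces, k=4)
coeffs = project(faces[0], mean, basis)    # 4 numbers instead of 100
approx = reconstruct(coeffs, mean, basis)  # low-dimensional reconstruction
print(coeffs.shape, approx.shape)          # prints: (4,) (100,)
```

With k components, each 100-pixel face is summarized by just k numbers. In practice the basis is computed from a large training set of real face images, which is essentially what Sirovich and Kirby proposed.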
Their work later became the foundation of many new facial recognition algorithms. The 1990s saw the widespread adoption of the eigenface approach as the basis of the state of the art, along with the first industrial applications. In 1992, Matthew Turk and Alex Pentland of MIT presented work that used eigenfaces for recognition [11]; their algorithm was able to locate, track and classify a subject's head. Since the 1990s the area of facial recognition has received a great deal of attention, with a notable increase in the number of publications. Many approaches have been adopted, leading to different algorithms; some of the most relevant are PCA, ICA, LDA and their derivatives. These approaches and algorithms are discussed later in this work.

Recognition Algorithm Design Viewpoints

The most obvious facial features were used at the beginning of face recognition, a sensible approach that mimicked the human ability to recognize faces. Researchers tried to measure the importance of intuitive features [2] (mouth, eyes, cheeks) and geometric measures (the distance between the eyes [8], the width-to-length ratio). This remains a relevant issue today, especially because discarding some facial features or parts of a face can lead to better performance [4]. In other words, it is crucial to decide which facial features contribute to good recognition and which are no better than added noise. The introduction of abstract mathematical tools such as eigenfaces, however, created another approach to facial recognition: similarities between faces could be computed while bypassing the features relevant to humans. This new point of view allowed a higher level of abstraction, leaving the anthropocentric approach behind. Some human-relevant characteristics are nevertheless still taken into account; skin color [9, 3], for example, is an important feature for face detection.
The position of features such as the mouth or eyes is also used to normalize the face before the feature extraction phase [12]. To summarize, a designer can apply knowledge provided by psychology, neurology, or simple observation to these algorithms; on the other hand, it is essential to perform abstractions and approach the problem from a purely mathematical or computational point of view.

Structure of face recognition system

Face recognition is a term that covers several sub-problems. Various classifications of these problems can be found in the bibliography; some of them are explained in this section. Finally, a general, unified classification will be proposed.

Generic facial recognition system

The input of a facial recognition system is always an image or video stream; the output is an identification or verification of the subject or subjects appearing in the image or video. Some approaches [15] define a facial recognition system as a three-step process; see Figure 1.1. From this point of view, the face detection and feature extraction phases could be performed simultaneously.

Figure 1.1: A generic facial recognition system.

Face detection is defined as the process of extracting faces from scenes: the system positively identifies a certain region of the image as a face. This procedure has many applications, such as face tracking, pose estimation or compression. The next step, feature extraction, obtains relevant facial features from the data. These features could be certain regions, variations, angles or measurements of the face, which may or may not be meaningful to humans (e.g., eye spacing). This stage has other applications too, such as facial feature detection or emotion recognition. Finally, the system recognizes the face: in an identification task, it reports an identity from a database. This phase involves a comparison method, a classification algorithm and an accuracy measure.
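The three-step structure of Figure 1.1 can be sketched as a minimal pipeline. Everything below is a hypothetical placeholder: a real system would put an actual detector, feature extractor (for instance an eigenface projection) and classifier behind these function names, and the enrolled "subjects" here are toy vectors, not real face data.

```python
import numpy as np

def detect_face(image):
    """Face detection: return the region of the image containing a face.
    Placeholder: this sketch simply assumes the whole image is the face."""
    return image

def extract_features(face):
    """Feature extraction: map a face region to a feature vector.
    Placeholder: real systems use eigenface coefficients, geometric
    measurements, etc.; here we just flatten the pixels."""
    return face.ravel().astype(float)

def recognize(features, database):
    """Recognition: compare the feature vector against enrolled subjects
    and report the closest identity (nearest-neighbour comparison)."""
    names = list(database)
    dists = [np.linalg.norm(features - database[n]) for n in names]
    return names[int(np.argmin(dists))]

# Enrol two hypothetical subjects, then identify a probe "image".
db = {"alice": np.zeros(16), "bob": np.ones(16)}
probe = np.full((4, 4), 0.9)  # a 4x4 image whose pixels resemble bob's
identity = recognize(extract_features(detect_face(probe)), db)
print(identity)               # prints: bob
```

The three stages are deliberately separate functions: merging or reordering them, as the surrounding text notes, yields the different engineering variants found in practice.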
This phase uses methods common to many other areas that also perform classification: sound engineering, data mining and so on. These phases can be merged, or new ones can be added, so many different engineering approaches to the face recognition problem can be found: face detection and recognition could be performed in tandem, for example, or an expression analysis could precede face normalization [10].

Face detection problem: structure

Face detection is a concept that includes many sub-problems. Some systems detect and localize faces at the same time; others first perform a detection routine and then, if it is positive, try to localize the face. Tracking algorithms may also be needed; see Figure 1.2.

Figure 1.2: Face detection processes.

Face detection algorithms usually share common steps. First, the data are reduced in size in order to achieve a feasible response time, and preprocessing may adapt the input image to the algorithm's prerequisites. Then some algorithms analyze the image as it is, while others first try to extract relevant facial regions. The next step usually involves extracting facial features or measurements, which are then weighted, evaluated or compared to decide whether a face exists and where it is located. Finally, some algorithms have a learning routine and incorporate new data into their models. Face detection is therefore a two-class problem in which we must decide whether or not there is a face in an image. This approach can be seen as a simplified face recognition problem: facial recognition must classify a given face, and there are as many classes as there are candidates. As a result, many face detection methods are very similar to face recognition algorithms, and techniques used in face detection are often used in face recognition as well.

Feature Extraction Methods

There are many feature extraction algorithms; they are discussed later in this document [10].