Confronting Flaws Within Rising Facial Recognition Technology
Understanding the processes behind accurate facial recognition helps in comprehending the flaws of AI and how it can be improved, especially in law enforcement.
Reading Time: 4 minutes
In 2017, Apple introduced the iPhone X, replacing Touch ID with the more innovative Face ID. This technology, popularized by Apple, has been incorporated into many devices since its release. The use of facial recognition has grown not only in personal devices but also in many other fields like agriculture, medicine, and, in particular, law enforcement. However, the question of its true worth arises when the artificial intelligence (AI) powering facial recognition begins to exhibit strong racial biases and raises ethical concerns regarding its practical applications.
The development of facial recognition is undeniably complex. AI-enhanced data collection and feature approximation rely on mathematical algorithms that match and compare facial features against datasets using geometric and photometric features (angles between landmarks, specific nodal points on the face, etc.). Creating a single computational model for facial detection is extremely difficult, since each face is a meaningful visual stimulus and is highly multi-dimensional. Techniques like eigenfaces, which use statistical patterns to translate faces into mathematical weights for comparison, are therefore popular in computer-vision work because they do not depend on full three-dimensional models for analysis.
With eigenfaces, facial recognition becomes more efficient. Eigenfaces are a mathematical way to represent facial features, summarizing and breaking down the most common and important characteristics of a face. Each eigenface is a vector: a direction in the high-dimensional space of face images, usually capturing the weights or distances of the most prominent features of the face. A new image is first projected onto the eigenfaces to compute its weights, then processed to verify whether it contains a human face by comparing those weights to the weights of other people. If the image is classified as a human face, it is analyzed further to determine whether the person's face is known or unknown in the database by comparing its weights to those of existing images.
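The eigenface pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration, not any vendor's implementation: random arrays stand in for a real dataset of flattened grayscale face images, and the choice of ten eigenfaces is arbitrary.

```python
import numpy as np

# Random arrays stand in for a real face dataset; each row is one
# flattened 32x32 grayscale face image.
rng = np.random.default_rng(0)
n_faces, h, w = 50, 32, 32
faces = rng.random((n_faces, h * w))

# 1. Center the data around the mean face.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# 2. The eigenfaces are the top principal directions of the centered
#    data, obtained here via SVD; each row of Vt is one eigenface.
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
k = 10
eigenfaces = Vt[:k]  # keep the k most significant directions

# 3. A face is summarized by its weights: projections onto each eigenface.
def weights(face):
    return eigenfaces @ (face - mean_face)

# 4. Recognition: compare a probe's weights to every stored face's weights
#    and pick the nearest one.
probe = faces[3] + 0.01 * rng.random(h * w)  # slightly noisy copy of face 3
stored = np.array([weights(f) for f in faces])
dists = np.linalg.norm(stored - weights(probe), axis=1)
print(int(dists.argmin()))  # → 3: the probe matches stored face 3
```

The key design point is dimensionality reduction: each 1,024-pixel image is reduced to just ten weights, so comparing two faces means comparing two short vectors rather than two full images.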
Additionally, mechanical tools like RGB-D scanners combine color and depth information within a single camera, capturing different colors and their proportions in order to match faces quickly, making data acquisition more accurate and efficient. Combining such scanners, which can contour the important features of the face, with eigenfaces, which extract these features and match them to other faces, greatly increases the accuracy and reliability of facial recognition.
Delving deeper into the process of data analysis, sophisticated algorithms and formulas are necessary to search extensive databases and computer memory so that faces can be compared quickly. Algorithms based on the Euclidean distance formula, cosine similarity, and ranges of coefficient vectors are used both to encode facial features and to calculate their statistical similarity across thousands of people. However, different facial recognition systems use different algorithms, so their accuracy fluctuates between systems. For example, Clearview AI, a platform known for its vast store of faces, generates an embedding vector from an image of a face; the similarity score between two embedding vectors ranges from negative one to one. Kairos, another facial recognition company, instead converts individual pixels within a picture into decimal values in code and declares a match at a 60 percent comparison threshold. Because they use different measurements and methods, the accuracy of facial recognition systems varies, but many can still match faces quickly, speeding up tasks that previously could only be done manually, like verifying identities at border crossings, unlocking devices, and more.
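The two similarity measures named above can be demonstrated directly. The embedding vectors and the 0.6 threshold here are made-up illustrative values, not Clearview's or Kairos's actual parameters; the point is only how each measure scores a pair of faces.

```python
import numpy as np

# Two hypothetical face embeddings (made-up values for illustration).
a = np.array([0.12, -0.45, 0.83, 0.31])
b = np.array([0.10, -0.40, 0.80, 0.35])

# Euclidean distance: smaller means more similar.
euclidean = np.linalg.norm(a - b)

# Cosine similarity: ranges from -1 to 1, larger means more similar.
cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(f"euclidean distance: {euclidean:.3f}")
print(f"cosine similarity:  {cosine:.3f}")

# A system declares a match when the score clears its chosen threshold
# (0.6 here, echoing the kind of threshold mentioned above).
THRESHOLD = 0.6
print("match" if cosine >= THRESHOLD else "no match")
```

Because the two vectors point in nearly the same direction, the cosine similarity comes out close to 1 and the pair is declared a match; a different system using Euclidean distance would need its own, separately tuned threshold.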
Additionally, with the rise of facial recognition technology, law enforcement agencies have an easier time conducting criminal investigations, finding missing persons, issuing Amber Alerts, managing surveillance systems, and more. Facial recognition connects camera footage and analyzes it with the aid of AI to match suspects with their identification. Facial recognition can also be applied retrospectively, helping to solve cold cases by identifying individuals previously unrecognized or absent from criminal databases.
Though facial recognition algorithms claim accuracy rates higher than 90 percent, relying solely on facial recognition in criminal investigations poses a perilous threat to the justice system. According to a study by the National Institute of Standards and Technology, Black and Asian people are 10 to 100 times more likely to receive false positives than Caucasian people. Facial recognition systems are more likely to misidentify people of color and women, making them a major factor in wrongly convicting suspects. Multiple explanations have been offered, but the main one appears to be a bias built up over time: AI models are fed training data that leads them to generalize certain minorities as criminals. For instance, the NYPD maintains databases of over 42,000 Black and Latinx people suspected of 'gang' affiliation with no concrete evidence. In addition, Black people are overrepresented in mugshots. Both are examples of stored data that facial recognition might use to make predictions, which can produce an extremely harmful bias when AI starts to process these faces for crimes. There are also few government regulations determining whether facial recognition can be used as solid evidence in court. Relying solely on this technology can be damaging to people of color, especially when the risk is as high as facing legal penalties.
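The "10 to 100 times more likely" figure is a ratio of false positive rates between demographic groups. A short sketch shows how such a disparity is computed; the counts below are entirely made up for illustration and are not NIST's audit data.

```python
# A false positive occurs when the system claims two different people match.
# Made-up counts for two hypothetical demographic groups:
#   group: (false positives, total comparisons between different people)
groups = {
    "group_a": (2, 10_000),
    "group_b": (80, 10_000),
}

# False positive rate = false positives / total non-matching comparisons.
rates = {g: fp / total for g, (fp, total) in groups.items()}
for g, r in rates.items():
    print(f"{g}: false positive rate = {r:.4%}")

# The disparity is the ratio between the two groups' rates.
ratio = rates["group_b"] / rates["group_a"]
print(f"group_b is falsely matched about {ratio:.0f}x as often as group_a")
```

Even when both rates look small in absolute terms, the ratio between them is what matters for fairness: a 40x gap means the burden of misidentification, and its legal consequences, falls overwhelmingly on one group.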
Though new techniques are continuously being introduced to improve the technology, it is clear that facial recognition cannot be relied on entirely for matters as serious as law enforcement. The ethical drawbacks of this technology make it all the more important that procedures incorporating AI be 100 percent accurate before being brought into such serious settings. That level of precision may not be attainable at this time. Its uses in simpler contexts, such as phone verification, can be a convenient step forward for society, but its implementation in more consequential areas like criminal justice requires more time and development in order to be truly successful.