Facial Recognition Is Spreading Everywhere

Facial recognition technology (FRT) dates back 60 years. Just over a decade ago, deep-learning methods tipped the technology into more useful—and menacing—territory. Now, retailers, your neighbors, and law enforcement are all storing your face and building up a fragmentary photo album of your life.

Yet the story those photos can tell inevitably has errors. FRT makers, like those of any diagnostic technology, must balance two types of errors: false positives and false negatives. There are three possible outcomes.

Three Possible Outcomes

a) The software correctly identifies the suspect, judging the two images to be of the same person. Success!

b) The software matches another person in the footage to the suspect's probe image. A false positive, coupled with sloppy verification, could put the wrong person behind bars and let the real criminal escape justice. (Brandon Palacio)

c) The software fails to find a match at all. The suspect may be evading cameras, but if the cameras have only low-light or bad-angle images, the result is a false negative. This type of error might let a suspect off and raise the cost of the manhunt. (Brandon Palacio)
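To make the three outcomes concrete, here is a minimal sketch of how 1:N identification works under the hood: a probe embedding is scored against every face in a gallery, and the best score counts as a match only above a threshold. The embeddings, the gallery, and the 0.6 threshold are hypothetical illustrations, not any vendor's actual pipeline.

```python
# A minimal 1:N identification sketch. All values are hypothetical.
import numpy as np

def identify(probe, gallery, threshold=0.6):
    """Return the gallery index of the best match, or None.

    A correct index is outcome (a); a wrong index is outcome (b),
    a false positive; None is outcome (c), a possible false negative.
    """
    # Normalize so the dot product is cosine similarity.
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

# Toy usage: a 5-face gallery of random 128-dimensional "embeddings".
rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 128))
probe = gallery[3] + rng.normal(scale=0.1, size=128)  # noisy shot of person 3
print(identify(probe, gallery))  # prints 3: outcome (a)
```

Raising the threshold trades false positives for false negatives, which is exactly the balance FRT makers must strike.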

In best-case scenarios, such as comparing someone's passport photo to a photo taken by a border agent, false-negative rates are around two in 1,000 and false-positive rates are below one in 1 million.
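To see what those rates mean at scale, here is a back-of-the-envelope calculation. The volume of one million crossings, and treating every crossing as a chance for either error, are simplifying assumptions for illustration.

```python
# Scaling the best-case rates quoted above to a hypothetical volume.
crossings = 1_000_000
false_negative_rate = 2 / 1_000      # ~2 in 1,000 genuine travelers rejected
false_positive_rate = 1 / 1_000_000  # <1 in 1 million impostors accepted

print(round(crossings * false_negative_rate))  # ~2,000 travelers get a second look
print(round(crossings * false_positive_rate))  # ~1 impostor waved through
```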

In the rare event you’re one of those false negatives, a border agent might ask you to show your passport and take a second look at your face. But as people ask more of the technology, more ambitious applications could lead to more catastrophic errors. Let’s say police are searching for a suspect, comparing an image taken with a security camera against a previous “mug shot.”

Training-data composition, differences in how sensors detect faces, and intrinsic differences between groups, such as age, all affect an algorithm’s performance. The United Kingdom estimated that its FRT exposed some groups, such as women and darker-skinned people, to misidentification risks up to two orders of magnitude higher than those faced by others.

Five faces, arranged from easy to hard to recognize: less clear photographs are harder for FRT to process. (iStock)

What happens with photos of people who aren’t cooperating, or vendors that train algorithms on biased datasets, or field agents who demand a swift match from a huge dataset? Here, things get murky.

Facial Recognition Gone Wrong

THE NEGATIVES OF FALSE POSITIVES

2020: Robert Williams is wrongfully arrested and detained. The ensuing settlement requires Detroit police to enact policies that recognize FRT’s limits. (iStock)

ALGORITHMIC BIAS

2023: A court bans Rite Aid from using facial recognition for five years over its use of a racially biased algorithm. (iStock)

TOO FAST, TOO FURIOUS?

2026: U.S. immigration agents misidentify a woman they’d detained as two different women. (Victor J. Blue/Bloomberg/Getty Images)

Consider a busy trade fair using FRT to check attendees against a database, or gallery, of images of its 10,000 registrants. Even at 99.9 percent accuracy, you’ll get about a dozen false positives or negatives, which may be a trade-off worth making for the fair’s organizers. But if police deploy something similar across a city of 1 million people, the number of potential victims of mistaken identity rises a hundredfold, as do the stakes.
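Here is that arithmetic as a sketch; treating 99.9 percent accuracy as a single combined per-person error rate is a simplification.

```python
# Expected mistaken identities at a given accuracy, a rough model.
def expected_errors(population, accuracy):
    return population * (1 - accuracy)

print(round(expected_errors(10_000, 0.999)))     # ~10 errors at the trade fair
print(round(expected_errors(1_000_000, 0.999)))  # ~1,000 errors across a city
```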

What if we ask FRT to tell us if the government has ever recorded and stored an image of a given person? That’s what U.S. Immigration and Customs Enforcement agents have done since June 2025, using the Mobile Fortify app. The agency conducted more than 100,000 FRT searches in the first six months. The size of the potential gallery is at least 1.2 billion images.

At that scale, even assuming best-case images, the system is likely to return around 1 million false matches, and at rates at least 10 times as high for darker-skinned people, depending on the subgroup.
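Unpacking those numbers under a deliberately simple model shows what the estimate implies per search. Real systems rank and filter candidates, so treat this as a rough sketch rather than how Mobile Fortify actually behaves.

```python
# What ~1 million false matches implies per search, a rough model.
gallery_size = 1_200_000_000   # at least 1.2 billion stored images
searches = 100_000             # searches in the first six months
est_false_matches = 1_000_000  # the rough estimate quoted above

print(est_false_matches / searches)  # ~10 spurious candidates per search
# Implied per-comparison false-match rate under this model: ~8e-9.
print(est_false_matches / (searches * gallery_size))
```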

Responsible use of this powerful technology would involve independent identity checks, multiple sources of data, and a clear understanding of the error thresholds, says computer scientist Erik Learned-Miller of the University of Massachusetts Amherst: “The care we take in deploying such systems should be proportional to the stakes.”

📰 Source: Lucas Laursen
