Rite Aid banned from using facial recognition software after falsely identifying shoplifters

Rite Aid has been banned from using facial recognition software for five years, after the Federal Trade Commission (FTC) found that the U.S. drugstore giant’s “reckless use of facial surveillance systems” left customers humiliated and put their “sensitive information at risk.”

The FTC’s order, which is subject to approval from the U.S. Bankruptcy Court because Rite Aid filed for Chapter 11 bankruptcy protection in October, also instructs Rite Aid to delete any images it collected as part of its facial recognition system rollout, as well as any products that were built from those images. The company must also implement a robust data security program to safeguard any personal data it collects.

A Reuters report from 2020 detailed how the drugstore chain had secretly introduced facial recognition systems across some 200 U.S. stores over an eight-year period starting in 2012, with “largely lower-income, non-white neighborhoods” serving as the technology testbed.

With the FTC’s increasing focus on the misuse of biometric surveillance, Rite Aid fell firmly in the agency’s crosshairs. Among the FTC’s allegations is that Rite Aid, in partnership with two contracted companies, created a “watchlist database” containing images of customers who the company said had engaged in criminal activity at one of its stores. These images, which were often of poor quality, were captured from CCTV or employees’ mobile phone cameras.

When a customer who supposedly matched an existing image in the database entered a store, employees would receive an automatic alert instructing them to take action. The majority of the time, this instruction was to “approach and identify,” meaning employees were to verify the customer’s identity and ask them to leave. Often, these “matches” were false positives that led employees to incorrectly accuse customers of wrongdoing, creating “embarrassment, harassment, and other harm,” according to the FTC.

“Employees, acting on false positive alerts, followed consumers around its stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them, sometimes in front of friends or family, of shoplifting or other wrongdoing,” the complaint reads.

Additionally, the FTC said that Rite Aid failed to inform customers that facial recognition technology was in use, and instructed employees specifically not to reveal this information to customers.

Face-off

Facial recognition software has emerged as one of the most controversial facets of the AI-powered surveillance era. In the past few years, cities have issued expansive bans on the technology, while politicians have fought to regulate how police use it. Companies such as Clearview AI, meanwhile, have been hit with lawsuits and fines around the world for major data privacy breaches involving facial recognition technology.

The FTC’s latest findings regarding Rite Aid also shine a light on inherent biases in AI systems. For instance, the FTC says that Rite Aid failed to mitigate risks to certain consumers based on their race: its technology was “more likely to generate false positives in stores located in plurality-Black and Asian communities than in plurality-White communities,” the findings note.

Additionally, the FTC said that Rite Aid failed to test or measure the accuracy of its facial recognition system before or after deployment.

In a press release, Rite Aid said that it was “pleased to reach an agreement with the FTC,” but that it disagreed with the crux of the allegations.

“The allegations relate to a facial recognition technology pilot program the Company deployed in a limited number of stores,” Rite Aid said in its statement. “Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC’s investigation regarding the Company’s use of the technology began.”
