*First in a series
Late last month, the Federal Trade Commission (FTC) filed a complaint against Rite Aid seeking a permanent injunction and sanctions over the pharmacy chain’s widespread use of facial recognition technology, deployed to reduce the risk of shoplifting and other criminal activity in its stores, and its failure to prevent reasonably foreseeable harm to consumers. The complaint comes on the heels of the FTC’s November 21, 2023, adoption of a resolution streamlining FTC staff’s ability to issue civil investigative demands (CIDs) by pre-authorizing the use of compulsory process in AI-related investigations. It is evidence that the FTC is delivering on its pledge to use its authority to protect individual rights when violations arise from “advanced technology,” and it confirms that the FTC continues to take an early and firm leadership position in the evolving legal environment for AI. The Rite Aid complaint makes clear that the FTC views its authority to impose penalties on companies for alleged misuse or inappropriate use of AI as broad and far-reaching.
Rite Aid’s use of facial recognition technology
According to the complaint, the use of facial recognition technology in Rite Aid stores spanned approximately eight years, from October 2012 to July 2020. The technology relied on a “registration database” that contained images of individuals from law enforcement and media reports, as well as images of individuals Rite Aid identified as having engaged in actual or attempted criminal activity in its stores. Cameras installed in Rite Aid stores captured live images of customers as they moved through the store, and the facial recognition technology compared the customer’s live image to registered images in the database to identify possible matches with registered individuals. When the technology detected a match, an alert was generated and sent to store employees with instructions ranging from observing and monitoring the individual in the store to calling police and removing the individual from the premises.
The FTC alleged that throughout Rite Aid’s use of its facial recognition technology, there were thousands of false positives and numerous complaints from customers that they were followed in stores, harassed in person by employees, wrongfully denied service, and even wrongfully arrested. Specifically, the FTC alleged that the harms caused by the inaccurate matching system disproportionately affected racial minorities and women. The FTC concluded that Rite Aid’s alleged poor management of its facial recognition program was egregious enough to constitute an unfair business practice in violation of the Federal Trade Commission Act (FTCA).
Rite Aid’s major failures according to the FTC
The FTC identified several significant deficiencies that led Rite Aid to breach its duty to protect customers from reasonably foreseeable harm. First, Rite Aid allegedly failed to conduct proper due diligence when selecting an AI technology vendor for the program, including failing to inquire about the technology’s accuracy in identifying matches. According to the complaint, neither the third-party vendor nor Rite Aid conducted meaningful pre-deployment accuracy testing, and the enrollment images Rite Aid uploaded to its database largely did not meet the image quality standards the vendor recommended to improve the technology’s accuracy. Rite Aid also failed to establish regular monitoring and review of the technology, and failed to take corrective action to address deficiencies in the database or the technology itself, despite warnings that it was generating high levels of false positives during the first few years of use. Finally, the complaint alleges that Rite Aid failed to adequately train store-level employees to properly use and understand the technology and to respond appropriately to match alerts.
The FTC alleges that Rite Aid’s deployment of facial recognition technology with insufficient controls from top to bottom caused the technology to generate false positives, increasing the risk that employees acting on those false positives would subject customers to undue embarrassment, deprive them of needed medication, or cause them to be wrongfully detained or arrested. Under the settlement agreement reached between the FTC and Rite Aid, Rite Aid cannot use any type of facial recognition technology for five years and cannot resume use of the technology until it has established adequate procedures for testing, data integrity and controls, and vendor and employee oversight.
The FTC’s action against Rite Aid sends a clear message about the risks associated with the expanded use of AI technologies. Companies that fail to build an infrastructure framework for appropriate vendor selection, data governance, and internal training as they develop and deploy AI technologies in their business practices may face private litigation as well as regulatory enforcement. The range of customer harms identified in the FTC complaint provides a roadmap for plaintiffs seeking to state cognizable claims against companies whose customer-facing technologies cause harm ranging from minor to severe.
Key Takeaways
The Rite Aid complaint makes it clear that businesses of all kinds need to put in place appropriate safeguards early and often to use AI technologies safely and effectively. In response to growing demand for guidance on appropriate AI technology use, the National Institute of Standards and Technology published the Artificial Intelligence Risk Management Framework in January 2023. The framework addresses each of the key failures identified in the Rite Aid complaint. As regulators across industries join the scramble to govern and oversee the use of rapidly evolving AI technologies, the magnitude of the potential risks to businesses seeking to integrate AI into their business practices continues to grow. The Rite Aid complaint serves as a cautionary tale for all businesses that the AI field is no longer a lawless place, and there are many new sheriffs in town.
In part two of this article, we present a roadmap to help businesses map, measure, manage, and govern AI risks and uses.