Secret Service Announces Test of Face Recognition System Around White House

Dec 04, 2018
By Jay Stanley

In yet another step toward the normalization of facial recognition as a blanket security measure, last week the Department of Homeland Security published details of a U.S. Secret Service plan to test the use of facial recognition in and around the White House.

According to the document, the Secret Service will test whether its system can identify certain volunteer staff members by scanning video feeds from existing cameras “from two separate locations on the White House Complex, and will include images of individuals passing by on public streets and parks adjacent to the White House Complex.” The ultimate goal seems to be to give the Secret Service the ability to track “subjects of interest” in public spaces.

Physical protection of the president and the White House is not only a legitimate goal but a vital one for protecting the stability of our republic. And while this pilot program seems to be a relatively narrowly defined test that does not in itself pose a significant threat to privacy, it crosses an important line by opening the door to the mass, suspicionless scrutiny of Americans on public sidewalks. That makes it worth pausing to ask how the agency’s use of face recognition is likely to expand — and the constitutional concerns that it raises.

First, it represents yet another example of DHS’s determination to deploy face recognition, despite the fact that Congress has never authorized its use on the public within the United States. Like the technology’s incipient deployment by U.S. Customs and Border Protection at airport gates and its planned rollout by the Transportation Security Administration in airports more broadly, its use by the Secret Service would be a milestone.

Face recognition is one of the most dangerous biometrics from a privacy standpoint because it can so easily be expanded and abused — including by being deployed on a mass scale without people’s knowledge or permission. Unfortunately, there are good reasons to think that could happen. The Secret Service envisions using the technology to provide early warning about “subjects of interest” who are approaching the White House “prior to direct engagement with law enforcement.”

We don’t exactly know how the Secret Service determines if someone is a “subject of interest.” The agency says they could be flagged through a variety of means, including “social media posts made in public forums” as well as suspicious activity reports and media reporting. Unfortunately, our government agencies have a long history of labeling people threats based on their race, religion, or political beliefs. Just last year, for example, a leaked document revealed that the FBI had prepared an intelligence assessment wrongly profiling Black activists as threats based on their race and beliefs, labeling them “Black Identity Extremists.”

The Secret Service’s use of face recognition is of special concern when it comes to protesters. The Trump administration is already attempting to limit protests near the White House, and the Secret Service has a problematic history in its handling of protests — including mistreating protesters because of their political opinions. The addition of face recognition to the mix does not bode well in light of this history.

Then there’s the question of where this leads. Exactly how wide a radius does the Secret Service want to monitor? Is there any reason to think it wouldn’t want to follow its “subjects of interest” 24/7 and nationwide if technology makes that easy enough? Let’s also keep in mind that the agency’s mission includes protecting not only the White House but also presidents and vice presidents when they travel; presidents’ and vice presidents’ immediate families; former presidents, their spouses, and their minor children; major presidential and vice presidential candidates and their spouses; and foreign heads of state. The agency’s authority also includes investigation of certain financial crimes. If it begins using face recognition as a principal tool, that’s not going to be an issue just for people in downtown Washington, D.C.

Nor is the relative narrowness of the Secret Service’s mission necessarily going to limit the expansion of this technology. The record of military intelligence agencies charged with protecting the security of military bases on U.S. soil provides a good example of this. Those agencies have used their narrow mission as a rationale to engage in very broad surveillance — for example collecting data on millions of domestic airline passengers; creating a database logging “raw, non-validated” reports of “anomalous activities” anywhere within the United States; monitoring peaceful political protests by pacifist Quakers far from any military base; and even deploying undercover agents to infiltrate such groups.

How far-ranging does the Secret Service believe its monitoring efforts need to be to fulfill its mission? Whatever the answer is today, there is good reason to be concerned about what that answer might be in the future — especially if unregulated face recognition and other technologies make it cheap and easy to extend the tentacles of its surveillance outwards.

The deployment announced in this document is just a test, and for now, the agency promises not to retain images except those that match its volunteer employees. But thousands of people going about their business in the busy urban area around the White House are still having their faces scanned, and some of them will likely be falsely matched to target subjects. (The agency none-too-helpfully notes that “individuals who do not wish to be captured by … cameras involved in this pilot may choose to avoid the area.”) And there’s no promise that privacy protections will survive an expansion and normalization of public face surveillance.

The program is another blinking red light for policymakers in the face of powerful surveillance technologies that will present enormous temptations for abuse and overuse. Congress should demand answers about this new program and the government’s other uses of face recognition. And it should intercede to stop the use of this technology unless it can be deployed without compromising fundamental liberties.

Jay Stanley is the Senior Policy Analyst at the ACLU Speech, Privacy, and Technology Project.