“I have all of you on camera, I’m going to come to your house!” Those words, broadcast from an LAPD helicopter over recent anti-ICE protests in Los Angeles, sent a shiver through one crowd and rang out across digital rights communities. The declaration, quoted by the Los Angeles Times and echoed by privacy rights advocates, distilled a new kind of confrontation: not merely between police and protesters, but between communities and the technology used to monitor, recognize, and sometimes harass them. Into this charged environment has arrived a new tool, FuckLAPD.com, which seeks to turn the spotlight back on law enforcement itself.

Created by artist Kyle McDonald, the site lets any user upload a picture of an LAPD officer and search for a match among more than 9,000 headshots collected through public records. All processing happens locally on the user’s device; no photos or data are transmitted to or stored on the site. The tool’s purpose is plain: to help identify officers who cover their badges at protests or in other interactions, providing a form of grassroots oversight. As McDonald put it, “We deserve to know who is shooting us in the face even when they have their badge covered up.”
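The underlying technique is standard one-to-many face matching: embed each headshot once, embed the query photo on the user’s own machine, and rank gallery entries by distance. The sketch below is not McDonald’s code (the site itself runs in the browser); it is a conceptual illustration in Python using the open-source DeepFace library’s ArcFace embeddings, with hypothetical file paths and a cosine-distance threshold that mirrors DeepFace’s published default for ArcFace.

```python
# Conceptual sketch of local one-to-many face matching, not the site's actual
# (browser-based) implementation. Uses the open-source DeepFace library with
# ArcFace embeddings; paths and threshold are illustrative assumptions.
import numpy as np
from deepface import DeepFace

def embed(path: str) -> np.ndarray:
    """Return an L2-normalized ArcFace embedding for the face in `path`."""
    rep = DeepFace.represent(img_path=path, model_name="ArcFace")[0]["embedding"]
    vec = np.asarray(rep, dtype=np.float32)
    return vec / np.linalg.norm(vec)

# In a deployed tool, gallery embeddings would be computed once, offline, and
# shipped with the app, so only the query photo is processed at match time.
gallery = {path: embed(path) for path in ["headshots/officer_0001.jpg",
                                          "headshots/officer_0002.jpg"]}

def best_match(query_path: str, threshold: float = 0.68):
    """Rank headshots by cosine distance to the query; None if below threshold."""
    q = embed(query_path)
    dist, name = min((1.0 - float(vec @ q), name) for name, vec in gallery.items())
    return (name, dist) if dist <= threshold else (None, dist)
```

Keeping the match entirely client-side is what allows the no-data-retention promise to hold: the gallery is already public, and the query photo never has to leave the device.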
The timing is not coincidental. Los Angeles has become a flashpoint in controversies over police authority, surveillance, and protest rights. During the recent anti-ICE protests, the LAPD ramped up its surveillance, deploying helicopters, drones, and, most alarmingly to privacy advocates, facial recognition to track protesters for potential reprisals. The Surveillance Technology Oversight Project characterized the scenario as “authoritarianism, plain and simple,” warning that “federal immigration authorities are using military-grade spyware to chill Angelenos’ constitutional right to protest, with the help of LAPD’s own sweeping surveillance capabilities.”
But the technological reality is more complicated than the political language suggests. Though the LAPD has repeatedly insisted that it does not use facial recognition, or that its use is minimal, records indicate that officers have run such searches nearly 30,000 times since 2009, tapping a county-operated database that cross-references investigative images against a registry of roughly 9 million mugshots (“In spite of previous denials, LAPD has employed facial recognition software 30,000 times in past decade, records indicate”). More than 300 officers have access to the tool, which has been used to generate investigative leads in cases ranging from gang crimes to protest-related offenses. Still, the department maintains that “no individuals are arrested by the LAPD based solely on facial recognition results,” and that the technology is not used for live crowd scanning or streaming.
Despite these assurances, oversight remains patchy. The LAPD’s inspector general recently found that the department lacks a robust process for tracking the outcomes of facial recognition searches, so it cannot confirm how often misidentifications occur or whether state and internal policies are being followed (LAPD Doesn’t Fully Track Its Use of Facial Recognition). The danger of a “false positive,” in which a match leads to the misidentification and wrongful arrest of an innocent person, remains a “paramount concern.” Studies have consistently found that facial recognition technology is far more likely to misidentify people of color, women, and older individuals; one federal study found that these systems falsely identified Black and Asian faces 10 to 100 times more often than white faces.
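The arithmetic behind that concern is worth making explicit. A back-of-the-envelope calculation, using an assumed per-comparison false-match rate (the real rate varies by algorithm and image quality, so the figure below is purely illustrative), shows how one-to-many searches against a database of millions can surface spurious candidates even when each individual comparison is highly accurate:

```python
# Back-of-the-envelope base-rate illustration. GALLERY_SIZE reflects the
# reported county database; FALSE_MATCH_RATE is an assumed, illustrative
# per-comparison rate, not a measured LAPD figure.
GALLERY_SIZE = 9_000_000
FALSE_MATCH_RATE = 1e-5  # i.e., 99.999% accurate on each comparison

expected_false_matches = GALLERY_SIZE * FALSE_MATCH_RATE
print(f"Expected spurious candidates per search: {expected_false_matches:.0f}")
# -> Expected spurious candidates per search: 90
```

Without systematic tracking of outcomes, there is no way to know how many of those spurious candidates turn into investigative leads.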
The technical constraints are not theoretical. A recent experiment using synthetic faces and the DeepFace/ArcFace algorithm showed that image quality is a critical factor in facial recognition accuracy: as brightness, motion blur, pose, contrast, and resolution degrade, false negatives skyrocket, particularly for Black women and Asian women, while false positives, though less frequent, can surge even in images close to baseline quality (Accuracy and Fairness of Facial Recognition Technology in Low-Quality Police Images: An Experiment With Synthetic Faces). These disparities are not hypothetical; facial recognition mistakes have led to at least seven documented wrongful arrests, six of them of Black individuals. And the LAPD’s mugshot database, like most law enforcement datasets, disproportionately represents marginalized groups because of historic policing patterns, compounding the risk of bias.
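It is straightforward to reproduce the shape of that experiment, if not its rigor. The sketch below, which assumes the open-source DeepFace library with its ArcFace model (the Pillow-based degradations, file names, and sweep values are illustrative choices, not the study’s protocol), degrades a probe photo along several quality axes and reports when verification against a reference headshot flips from match to non-match:

```python
# Hedged re-creation of the quality-degradation idea from the cited study:
# degrade one probe image and watch when ArcFace verification stops matching.
# File names and degradation levels are assumptions for illustration.
from PIL import Image, ImageEnhance, ImageFilter
from deepface import DeepFace

REFERENCE = "subject_reference.jpg"  # enrolled headshot (assumed file)
PROBE = "subject_probe.jpg"          # same person, different photo (assumed file)

def degraded(path: str, blur: float, brightness: float, scale: float) -> str:
    """Write a blurred, dimmed, downscaled copy of `path`; return its path."""
    img = Image.open(path).convert("RGB")
    img = img.filter(ImageFilter.GaussianBlur(radius=blur))
    img = ImageEnhance.Brightness(img).enhance(brightness)
    w, h = img.size
    img = img.resize((max(1, int(w * scale)), max(1, int(h * scale))))
    out = f"degraded_b{blur}_l{brightness}_s{scale}.jpg"
    img.save(out)
    return out

# Sweep from near-baseline quality to heavy degradation.
for blur, brightness, scale in [(0, 1.0, 1.0), (2, 0.7, 0.5), (5, 0.4, 0.25)]:
    probe = degraded(PROBE, blur, brightness, scale)
    result = DeepFace.verify(img1_path=REFERENCE, img2_path=probe,
                             model_name="ArcFace", enforce_detection=False)
    print(blur, brightness, scale,
          result["verified"], round(result["distance"], 3))
```

Note the `enforce_detection=False` flag: at heavy degradation the detector may fail to find a face at all, which is itself one of the failure modes the study documents.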
FuckLAPD.com’s approach, processing images locally and neither storing nor transmitting user data, could not be more at odds with commercial facial recognition providers such as Clearview AI, which scrape billions of photos from social media platforms without permission and sell access to law enforcement agencies across the country. The LAPD, after a brief trial, banned the use of Clearview and other third-party software, citing “public trust” concerns (LAPD Bans Facial Recognition, Citing Privacy Concerns). Other local police departments and federal agencies, however, still employ such tools, often with little transparency or regulation.
The moral landscape is treacherous. Controlled testing of facial recognition accuracy does not translate into safety or justice in practice. As the ACLU has argued, “there is no laboratory test that represents the conditions and reality of how police use face recognition in real-world scenarios,” particularly when low-quality images and biased databases are involved (When it Comes to Facial Recognition, There is No Such Thing as a Magic Number). Performance scores touted by vendors often obscure these deeper problems, and the absence of federal privacy legislation leaves only a patchwork of protections and remedies.
Across the world, regulatory approaches vary. The European Union’s GDPR enshrines “privacy by design” and mandates impact assessments for high-risk uses such as law enforcement facial recognition, with proactive data protection agencies empowered to investigate and penalize violators (The ethics of facial recognition technologies, surveillance, and accountability in an age of artificial intelligence). In the United States, by contrast, oversight is decentralized, and citizens must often resort to expensive, protracted litigation to contest abuses.
Seen in this light, FuckLAPD.com is more than a technical gadget; it is a statement about power, transparency, and the contested future of surveillance. By turning the same technology used to track communities back on the police, it demands an accounting of the dangers, the biases, and the obligations that come with facial recognition, whether in the hands of police or of the public. The site’s own disclaimer, that blurry, low-resolution photos will not match, is a reminder that the science is imperfect, the stakes are high, and the argument over who gets to look at whom is far from resolved.

