Mother Jones

How Your Data Is Powering Trump’s Surveillance State

In the past 24 hours, I’ve checked Google Maps for directions, logged a sunset run on Strava, booked a yoga class with ClassPass, swiped into the New York City subway using a card linked to a digital profile, and sent and received hundreds of emails, texts, Slack messages, and phone calls. Meanwhile, my iPhone passively logged each of my 13,444 steps, while my face and gait were recorded by countless discreet cameras—both those installed by the city and by my neighbors. Then, of course, there are the Google searches, the Instagram likes, the re-posts on X and Bluesky.

If somebody—a police officer, a prosecutor, an FBI agent—were to get ahold of all this data, they’d be able to sketch out a pretty complete picture of my daily life. In most cases, I probably wouldn’t even know that my data had been obtained.

“We should have a conversation, both individually and collectively, about what the stakes are when we build these networks of digital surveillance all around us without thinking about the consequences.”

For decades, civil libertarians and privacy experts have warned about the surveillance threat posed by digital technologies. But if those risks ever felt abstract, they’ve become all too real during Donald Trump’s second term: To aid its mass deportation agenda—and, at times, intimidate those who protest against it—Immigration and Customs Enforcement (ICE) has gone on a $300 million spending spree, buying up surveillance technology powered by a mix of federal, state, and commercial data systems.

In his new book Your Data Will Be Used Against You: Policing in the Age of Self-Surveillance, George Washington University law professor Andrew Guthrie Ferguson details how the rise of digital technology and smart devices has led to massive amounts of data being created, aggregated, and shared with law enforcement—and how our own consumer choices have ensnared us in this web of digital surveillance. Ferguson surveys various court cases to show how our legal system offers few protections if a police officer—or a tyrannical government—wants to weaponize your data against you. I spoke with him about the current legal landscape, how the second Trump administration has turbocharged state surveillance, and what lawmakers and citizens can do to protect their digital privacy.

This interview has been edited for length and clarity.

What inspired you to write this book?

I wrote a book called The Rise of Big Data Policing, which was about how police departments were buying technology directed at their citizens and what that meant. And this book was more of an internal sense of, “What are we doing? What’s our complicity in building these networks of self-surveillance?” Because I think we all think that we are buying this for our own consumer benefit—*I’m protecting my home by buying this Ring doorbell camera*—and not necessarily thinking, oh, wait a minute, that means there’s now footage of every time any family member has left the house and what they’re doing. And that data is available, at least with a warrant, if police want to get access to it. We should have a conversation, both individually and collectively, about what the stakes are when we build these networks of digital surveillance all around us without thinking about the consequences.

When talking about surveillance, there’s often this mindset of, “If I’m not doing anything wrong, why should I worry?” But your book shows how consumer data can be used to power racially biased predictive policing algorithms or criminalize things like obtaining an abortion, seeking gender-affirming care, or engaging in First Amendment-protected activity. What do you say to someone who isn’t concerned about buying into self-surveillance?

First, we’ve seen in the last few months that the federal government seems willing to weaponize the criminal justice system for politicized prosecutions. That changes who is at risk for criminality and who can say, “But I’m not doing anything wrong.” Eight million Americans went out and protested at No Kings. It’s pretty easy to accuse some of them of supporting some level of sedition or treason or being “part of antifa,” and we’ve already seen that kind of dissent criminalized in certain ways. If your Ring doorbell camera caught you walking out with a No Kings sign, or your cellphone revealed that you were at a protest, or you sent some tweets, or you Googled, “Where is the nearest No Kings protest?”: Your data is now available to a government that doesn’t like that speech and doesn’t like that level of dissent. That opens you up to vulnerability.

Same thing if you’re a woman who’s trying to get abortion services in a state that’s trying to criminalize them. What is a crime can change—it has changed, because we’ve seen social norms and mores turn into criminal laws with new targets. The last couple of months have changed, I think, everyone’s recognition of how vulnerable we all are to a government that can identify people that it does not agree with and turn that into criminal prosecutions.

What constitutional protections are in place? The Fourth Amendment is supposed to safeguard our right to privacy, prohibiting unreasonable government searches and seizures. But how does it govern how law enforcement obtains Americans’ digital data?

The general reality is that neither constitutional law nor statutory law has actually caught up with the reality of our smart technologies. For Fourth Amendment purposes, we are told by the Supreme Court that a warrant is required when there is a reasonable expectation of privacy in our persons, our homes, our papers, and our things. For example, the Supreme Court has said police need a warrant to get into your smartphone, but it’s an open question whether law enforcement can go buy that exact same data from the data broker that has collected it. The Supreme Court has said that data you have given to a third party does not require a warrant because you knowingly provided it to a third party. That theory, called the third-party doctrine, is a huge, huge loophole, because everything we do in the digital age involves a third party. You don’t run your own server, your own email system, your own search engine. So if the third-party doctrine applies in the digital age, it means that almost everything we touch is exposed to police without a warrant. Kash Patel recently said in front of Congress that the FBI was buying some kind of data from these data brokers outside of the warrant requirement.

“We have created a lot more vectors of information about ourselves that are now available to law enforcement, both with a warrant and sometimes without it.”

Your Google searches reveal a whole lot of private information about what’s going on in your head and the questions you have, and the fact that the government might be able to just access your Google searches on a whim, without a warrant, without requirements, should be troubling to you. The thing that struck me the most in writing the book, from all the research I did, was that there is nothing too secret or too private that cannot be obtained with a warrant. That really privileges prosecution over privacy. And I think it’s kind of a wake-up call, because we have created a lot more vectors of information about ourselves that are now available to law enforcement, both with a warrant and sometimes without it.

Your book mainly focuses on how local law enforcement has weaponized data, but it struck me how ICE under the second Trump administration has made the warnings of your book manifest at a remarkable speed and scale. How did digital surveillance tools at the local level set the stage for what’s now happening in the federal government?

What we’ve seen in the last few months is a federal government that essentially is moving full steam ahead in embracing the surveillance technologies that exist without any of the cautions or restrictions that were put in place by a Justice Department that was worried about the technology and how it would be perceived. We’ve now seen the federal government embrace a fusion of new surveillance technologies and power, primarily focused on immigration via DHS and ICE, while also giving new life to some of the Fusion Centers that have been sitting around in search of a mission after 9/11. I think those are now finding a new mission: the federalization of surveillance.

At the same time, local law enforcement is actually perhaps moving a bit slower, although technology companies are investing a lot in building real-time crime centers, automated license plate reader systems, and trying to centralize the video streams and sensor streams that exist in cities. So perhaps there’ll be the same story told in different ways—where we are accelerating in both local and federal policing surveillance technologies, but I think they have taken different paths to get there.

You mentioned Fusion Centers, which were founded after 9/11 as counterterrorism hubs for sharing information between federal, state, local, and private partners. How did the post-9/11 national security state plant the seeds for everything we’re seeing now?

Maybe I’m naive about this, but I sort of see that in the local policing surveillance context, there is still a recognition that some normal rules apply—like Fourth Amendment warrants and other things are at least considered, because we’re dealing with traditional policing powers. But when we move to the national security context, things change. The rules about surveillance have changed, and that’s in part because of how Fusion Centers have avoided any scrutiny because they played in this world of: This is for national security. It’s not for ordinary policing, so you don’t have to worry about the niceties of the Fourth Amendment or warrants.

“AI will supercharge police power.”

Under the Trump administration, by invoking national security and this idea that there’s some non-traditional policing power at stake, they’ve been able to avoid some of the traditional Fourth Amendment scrutiny. They’re kind of acting outside the normal bounds that have limited police power. When you view immigration as a national security issue, the Fourth Amendment rules seem to fall away. And we’re watching ICE agents and Border Patrol agents who seem not to be following traditional Fourth Amendment rules, entering homes that require judicial warrants.

Your book makes several references to artificial intelligence, but it struck me how a lot of the data-driven policing you talk about doesn’t necessarily rely on AI. How does the rapid rise of AI change the picture of what you described in your book?

AI will supercharge police power. The easiest way to see that is video analytics in police systems. It’s pretty common to see cameras on the streets. But the ability to fuse all those camera streams and video streams and then run video analytics on them is a great concrete use case of AI. We can now take the video feeds of a city street—including city cameras, commercial cameras, Ring doorbell cameras, and other things—and run object recognition software to identify every object that you could see. Foreground and background, car, person, bike, van, building, door, window, briefcase, colors, objects. The ability to do pattern matching to identify objects and track the same objects over time is a great concrete use case for artificial intelligence.

It also gives police a superpower, where all those otherwise disparate streams of video can now be used to track people, surveil people, do anomaly alerts if something unusual is happening, do virtual patrols to watch the streets in real time. And so what you can see in that example is a transformation from the analog world of cameras to the digital world of cameras, and then on top of that, AI gives you the ability to use this data that was probably otherwise not very productive or helpful. You were able to watch the city streets, but there was almost too much information to process. But when you can actually analyze everything that’s happening within those video feeds, it suddenly becomes useful. It allows you to isolate actions or patterns that you’re interested in—even without facial recognition, which is another form of AI. The idea is being able to use and make useful overwhelming amounts of data in a new way that enhances police power.

What are some of the things you’d like to see legislators do to update our laws for the digital age?

I use the “tyrant test” as a metaphor, but also a practical plan of action that says we should begin with the assumption that our data will be misused—that the tyrant will be reading your most embarrassing Google search—and then go from there. What would you do to protect yourself?

In the book, I propose two legislative solutions. One is simply to enhance the warrant requirement for smart device surveillance to something akin to the Wiretap Act. Since the late ’60s, the police have been able to put a microphone in your living room and listen to everything you say. The Wiretap Act basically requires police to tell a judge, Look, there’s no other way we can get this information unless you use this wiretap. The standard is basically higher than probable cause—it requires a minimization of other information, like other people’s voices that are being captured in the house, and you have to report back to the judge about what you did and why you did it. And even though this is an incredibly invasive technology, as a society we’ve been okay with it because we have relatively transparent and relatively high procedural protections around the use of it.

I think we should transport that sort of idea into these smart devices in our world. You should have to abide by this higher standard that I call in the book the WALL Act. It still allows police to get access to data when they really, really need it, but it doesn’t allow them to simply get it because the data is available, and it doesn’t allow them to fish or use it for politicized prosecutions.

“You need to realize that every smart device is a surveillance device. You’re paying for the privilege of being surveilled because you think it has a convenience.”

There’s another idea in the book where I talk about digital privileges. If you admit to a murder to your lawyer, your lawyer can’t say that. They can’t be called to testify against you, because we think there’s a value in having the attorney–client privilege. There’s also a spousal privilege. And we should think about that with our digital information. We probably trust our digital devices with our most intimate information. We ask questions like, Am I pregnant? What is this ailment? You go to the digital device probably before your best friend. And that idea of intimate information could be privileged by legislative rule. The current privileges that exist in law are state made. And they basically say, *There’s another value here we think is more important than prosecution*. So the book makes a somewhat radical proposal that we should think about this idea of digital privileges and say, There are ways that our digital lives are connected to very intimate, very personal things, and it might be more important to privilege those things than to prosecute a crime. There’s a lot of pushback to that, but I think it’s worth a debate.

I enjoyed the part of the book where you talked about ways to resist surveillance by sabotaging your data—you don’t need to give every single app your real birthday or the same email address, for example. What are the first steps you’d recommend to someone who is concerned about digital privacy and wants to fight back against surveillance?

I think the key is to be intentional. The first step is to educate yourself about the risks of digital surveillance and make your own balance about whether you really need the cat cam to watch your kitten while you’re away at work. Cats have done fine without you watching them for years. Are you making an intentional decision about the choice of self-surveillance? You need to realize that every smart device is a surveillance device. You’re paying for the privilege of being surveilled because you think it has a convenience. No one is judging you for making that choice, but you need to make the informed choice about whether that is something that is worth the cost and benefits of what it reveals about you and your family and your community.

The second thing to do is to recognize that you are a target of surveillance—that there are companies that are trying to monetize you. Every time you get something that seems to be free in the digital age, it just means you’re the target. You’re the product, and they’re selling your data. You can make informed choices about what kind of devices you purchase.

And third is to recognize that this isn’t an individual decision. You and I can’t negotiate with Amazon or the FBI about the terms of service or how we’re being surveilled. This has to be a community pushback where we collectively say that we don’t want to have this kind of surveillance in our communities. And we’re seeing that. We’re seeing communities push back on automated license plate readers. We’re seeing communities recognize that these surveillance systems are also potentially being used for immigration enforcement. We’re seeing a growing concern about the collection and centralization of data by the federal government, and I think that those are collective concerns that come from individual education about the issues but then lead to collective action.
