Privacy in 2026: Will AI further supercharge surveillance?
The tug of war between widespread data collection and data privacy has only intensified, with disruptive technologies like artificial intelligence supercharging both. Going into 2026, experts say that trends in privacy are at once reassuring and alarming. The increase in data collection, often in the form of domestic surveillance, has sparked debate across the country. But the pushback does not appear to have kept pace with the surveillance rollout.
Experts expect many of the privacy trends witnessed in 2025 to expand in the coming year. Jake Laperruque, deputy director of the Security and Surveillance Project at the Center for Democracy & Technology, believes the hasty adoption of AI by law enforcement agencies will prove even more controversial in 2026.
“The ramp-up of immigration surveillance in a variety of ways has been alarming, with the government vacuuming up data from a huge range of sources and pushing for more reckless deployment of surveillance technologies in the field,” Laperruque told Straight Arrow News. “The rapid integration of AI into surveillance and policing is also a big concern — lots of AI technologies have not proven their efficacy or at best only work under very precise and controlled scenarios, yet unvetted and unregulated technologies are being built into surveillance and policing in ways that could lead to errors with extremely serious consequences for individuals.”
Criminals put AI to use
AI is also being utilized in cybercrime. A hacker known as Deth Veggie — a member of Cult of the Dead Cow, one of the oldest computer hacking organizations — told SAN that AI has allowed criminals to carry out more advanced attacks.
Social engineering, a highly effective technique that uses psychological influence to trick people into carrying out certain actions or divulging confidential information, has become especially powerful thanks to AI-generated audio and video, commonly categorized as deepfakes.
“Right now, you don’t have to be that on-the-ball to smell something fishy when your ‘CEO’ texts you asking you to quickly buy her $14,000 in BASS PRO SHOPS gift cards,” Deth Veggie said. “But what about when it’s your CEO calling you, live, on the phone, and the voice at the other end is essentially indistinguishable from the real thing?”
Cybersecurity researcher Jeremiah Fowler also believes AI will continue to fuel the rapid growth of cybercrime in 2026.
“I’m seeing a lot more use of AI in cybercrime and social engineering,” Fowler told SAN. “In the old days we could read grammatical errors or see other identifiers that looked suspicious, but now AI has given every criminal the tools they need even if they are not technical.”
Fowler, who is well known for discovering large-scale data leaks, also feels that “AI-generated malware and mass data harvesting will be a continuing issue with no easy fix in sight.”
‘Irreversible’ surveillance trend?
The promises AI companies initially made about how the technology would be used also raise questions about the future.
Gus Hosein, executive director at Privacy International, says that just last year, tech companies were still claiming they wouldn’t build AI software for surveillance. Yet the past year has shown that a wide range of private and public entities, including U.S. Immigration and Customs Enforcement, are deploying those very AI-powered tools.
Hosein warns that the surveillance apparatus being put in place could easily begin targeting any group deemed undesirable by the federal government.
“If we blindly continue to allow governments to build immigration enforcement into our public infrastructure, then with a soft switch of code, the targeting will expand to every undesirable community and category, and then to generalized policing,” Hosein said. “We can only hope all the stories of abuse will motivate everyone to say this is not what we want for our futures together. Before it becomes irreversible.”
Polls show that Americans are much more concerned than excited about AI’s rapid expansion into society. But as with any emerging technology, the genie can’t be put back in the bottle.
“As AI gets baked into everything from your browser to your toaster oven,” the hacker Deth Veggie said, “I can’t say I’m overly optimistic about the outlook for data security and privacy in the near future.”
Hope for the future
Not all the privacy trends are negative. Experts say there is also room to be hopeful about the coming year.
Hosein believes that while the public will undoubtedly increase its adoption of AI, more people will demand that their data be kept private and under their control.
“If we truly live our lives seamlessly with AI assistants, then three follow-ons will inevitably happen: people will want this data kept private and under their control, companies will want to monetize our interactions by exploiting that data, and governments will want access to that data,” he said. “Companies can’t deliver all three of these, so by the end of 2026 these will all come to a huge struggle, like an ugly barfight.”
“With the web, it took 20 years for privacy and security to be taken seriously, after Snowden and Cambridge Analytica; with end-to-end encryption and real privacy rules,” Hosein continued. “This will all happen with the AI experience in the next two years. Companies will have to decide who they are building the future for.”
Laperruque also sees positives in the future. Despite the political divide, bipartisan coalitions in Congress are fighting for surveillance reform. Attempts are also being made to close the “data broker loophole,” which allows the government to sidestep the Fourth Amendment’s protection against unreasonable searches and seizures by purchasing data on Americans from private companies.
“We’ve also seen steady progress at the state level on surveillance issues,” Laperruque said. “More and more states are limiting surveillance technologies like facial recognition, states have begun to stop police data purchases and communities are pushing back against tech like automated license plate readers.”
The desire for privacy is also apparent in the widespread adoption of privacy tools, many of which refuse to integrate AI. Signal, the app widely considered the gold standard for end-to-end encryption, saw a major spike in downloads in 2025. Signal’s president, Meredith Whittaker, has been a leading voice warning against the rush to adopt AI because of its privacy implications.
“I think there’s a real danger that we’re facing, in part because what we’re doing is giving so much control to these systems that are going to need access to data,” Whittaker said in March.
Fowler likewise sees positive trends, particularly in regard to defending against cyberattacks.
“Companies are finally investing in cybersecurity and data protection,” he said. “In previous years they were more focused on revenue only and viewed security as an afterthought.”
