A crying ‘soldier’ and an ‘Army girl’ show how AI manipulates our emotions
It’s the end of a long day. You’re on the couch, dogs piled around you as you mindlessly scroll through social media. Your algorithm appears to be behaving — stand-up comedy, a political take, dachshund videos. Then a video of a sobbing U.S. soldier stops you mid-scroll.
But something seems off before you can quite say what. The audio has a strange muddiness that keeps it from matching the video perfectly. You look closer, and the labeling on the uniform seems wrong, as if the rank insignia doesn't belong.
Then it hits you: There is no soldier, and there never was.
That video is one of many AI-generated clips depicting distressed U.S. service members that have flooded social media in recent months. They are likely not the work of foreign spy agencies but rather were created to support a business model, researchers told Straight Arrow.
These videos are engineered to trigger an emotional response before critical thinking has a chance to catch up — and someone gets paid every time it works.
Experts who study the proliferation of AI-generated media say there are dozens of accounts posting such content. The fact-checking site PolitiFact previously reported on the topic, saying that reporters found nearly a dozen accounts associated with this content. Social media platforms took down those accounts after PolitiFact’s inquiry, but Straight Arrow was easily able to find just as many other accounts posting the same content.
The videos often show a person in uniform, in tears, talking to the camera, sometimes even yelling. They look authentic on the surface, but users with a keen eye can spot problems: incorrect markings on uniforms or telltale AI-generated artifacts typically give them away.
An emotional business model
Enough people have fallen for these videos to make the business model behind them both successful and lucrative.
In late 2025 and early 2026, an Instagram account belonging to “Jessica Foster” popped up online. It depicted a young woman in the military who was a vocal supporter of President Donald Trump’s “America First” movement. She called herself “an American Army girl.”
Posts showed her in realistic settings, in photos that appeared to be genuine. In one, she posed with an F-22 Raptor. In another, the young woman — dressed in her Army uniform but wearing a short skirt and high heels — walked alongside Trump on an airport tarmac.
The account quickly built a following of 1 million.
But reports later uncovered that the account was fake. Its real purpose: to funnel users to paid adult content.
“If you get dozens or hundreds of people to go over to an app or to an OnlyFans page,” said Daniel Schiff, an associate professor at Purdue University and the co-director of the Governance and Responsible AI Lab, “I mean, you could be talking hundreds or thousands of dollars, maybe in the tens of thousands.”
The videos of crying soldiers work a little differently. Instead of serving as billboards for external revenue streams, these accounts are farms for direct platform payouts. This means that the more followers they have, the more ad revenue they can receive directly from social media companies. Schiff said these accounts can quickly gain followers.
“We can see the handful of posts, up to dozens of posts, within a matter of days or weeks,” he told Straight Arrow. “Some of them are posting their monetization content pretty quickly.”
Why does this work on people?
The high-impact emotions portrayed in the videos aren’t just incidental; they’re the main reason the videos work so well. It’s a psychological mechanism that works better than most people want to admit.
“We’re at a point where humans absolutely can’t tell the difference,” Sarah Barrington, an AI researcher and Ph.D. candidate at the University of California, Berkeley, told Straight Arrow.
Barrington has conducted studies on people's ability to detect AI-powered voice clones and found that they perform little better than a coin flip.
“We gave a bunch of members of the public real and fake audios — hundreds of listeners, hundreds of speakers — and we found 60% of the time they couldn’t tell what was real and what was fake,” Barrington said.
When they added a real voice to the mix, she said, 80% couldn’t tell the difference between the generated voice and the real one. Detection gets harder when creators make content designed to make a person feel something first.
“When people are elevated into this emotional state, they don’t make logical decisions,” Barrington said. “People who had a traumatic incident in the last year were two times more likely to be deceived.”
The obvious solution to this is more robust media and AI literacy training — but researchers don’t know if that’s enough.
Why aren’t current measures working?
Companies do have tools to catch these videos before they take off online. But they’re not working.
Meta, Google, Microsoft and OpenAI formed the Coalition for Content Provenance and Authenticity, or C2PA. The coalition offers a way for platforms to automatically detect and label AI-generated content.
“When you generate an AI video, largely now it will have a watermark in it,” Barrington said. “When these get shared, it flags up to the platform. But most of these platforms don’t reliably use that very easy watermark.”
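In practice, the C2PA approach embeds a signed "manifest" of provenance assertions in the media file; generative tools can declare the IPTC `trainedAlgorithmicMedia` digital source type inside a `c2pa.actions` assertion, and a platform can flag anything carrying that declaration. The sketch below is a minimal illustration in plain Python with a deliberately simplified, hypothetical manifest layout — real manifests are binary, cryptographically signed structures read via the C2PA toolkits, not raw dictionaries.

```python
# Sketch: flag media whose C2PA-style manifest declares AI generation.
# The dict layout here is a simplified, hypothetical stand-in for the
# real C2PA manifest format; actual parsing requires the C2PA toolkits.

AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def is_ai_generated(manifest: dict) -> bool:
    """Return True if any action assertion declares an AI source type."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == AI_SOURCE_TYPE:
                return True
    return False

# A manifest as an AI video generator might emit it (simplified).
sample = {
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": AI_SOURCE_TYPE,
                    }
                ]
            },
        }
    ]
}

print(is_ai_generated(sample))               # prints True
print(is_ai_generated({"assertions": []}))   # prints False
```

The point of Barrington's criticism is that this check is cheap once the manifest survives upload — the failure is that many platforms strip the metadata or simply never run the check.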
Even if platforms note that posts are fake, that doesn’t mean users will necessarily believe it.
“Just because we put up a label and say, ‘Oh, this is manipulated’ doesn’t mean people detect it or that it really keeps them from believing in false claims,” Schiff said.
But issues with companies implementing their own rules may be less about technical limitations and more about financial ones.
“Content drives traffic, traffic drives money,” Schiff said. “These bad actors, the spreaders, are leveraging the same kinds of strategies that social media platforms built their own engagement methods on.”
Straight Arrow provided Meta with a list of AI-generated videos. The company said it’s reviewing those accounts.
“Identifying AI-generated content can be challenging as the technology evolves,” the company wrote in an email. “We’re continuously working to improve our systems and ability to label this content.”
Nearly all social media platforms have an AI content policy similar to Meta's. Straight Arrow also found numerous AI-generated videos of apparently fake soldiers crying on TikTok.
What can users do?
Despite the ever-evolving nature of AI, there is still hope for users trying to distinguish between fact and fiction.
“I totally just come back to that who, what, where, when,” Barrington told Straight Arrow. “Can you answer these basic questions about this thing you’re sharing? I just come back to that basic media literacy.”
But she stressed that this isn't about media forensics; it's about basic habits. The stakes of being wrong about AI-generated content are more immediate than some might realize.
“You just look a bit silly,” Barrington said. “You get it wrong if you share this thing — you just lose credibility in your own social circle.”
The responsibility of identifying AI content was never meant to just fall on users’ shoulders as they scroll at night. The deeper issue, researchers say, predates any individual video.
“We unleashed a lot of these capabilities, certainly before we had the detection capacities, or before we had the social resilience. So we’re now reactively trying to address these issues,” Schiff said. “This is probably where I would place the bulk of the responsibility.”
