Some AI companies have ethics, but those ethics might not matter 

Almost a decade ago, Google employees protested the company’s role in Project Maven, a program that used artificial intelligence to assist with military targeting. Initially, it appeared that the protest worked, as Google cut ties with the drone initiative.

Google employees were ecstatic, and their enthusiasm only grew when the company released new guidelines stating that its AI would not be used for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

But things have changed in the last few years. Palantir, a company that has faced multiple controversies regarding its technology and ethics, took over Project Maven shortly after Google pulled out. Then, early last year, Google removed its pledge not to build AI systems for weapons and expanded its partnership with the Department of Defense. The partnership has led to new disputes with employees, including a unionization attempt by employees in the U.K. working on Google’s DeepMind project. 

Google’s return to the Pentagon has reignited a debate employees thought was settled: do ethical tech companies do more good by refusing military work, or by staying at the table? The question carries more weight now that Palantir, which has so far weathered controversy without comparable internal revolt, has spent years running the program Google abandoned.

The case for staying

Diane Greene is the former CEO of Google Cloud and was with the company when the Maven controversy began. She called it a massive misunderstanding and said that employees were angry over something that wasn’t happening. 

Greene said the project used AI to analyze non-real-time drone footage for landmine detection, disaster recovery and object identification, and that the Pentagon had explicitly excluded fully automated offensive work from Google’s contract. That didn’t stop the rumor mill.

Employees began circulating claims that Google was allowing the military to use its AI to target and fire weapons autonomously, she said. The uproar led Google to leave the project and Palantir to step in.

“Under the contractors that followed us, the Maven program expanded to include the offensive targeting capabilities that had been explicitly excluded from Google’s contract,” Greene wrote in an op-ed with The San Francisco Standard. “The original misperception hardened into conventional wisdom.”

She said that most people in the tech industry are concerned about AI weapons and mass surveillance, but leaving the table instead of pushing for change is the wrong choice. 

“Refusing to engage doesn’t change the outcome. It removes you from it,” Greene wrote. “And the people who decide to engage might operate with fewer principles and constraints than you would.”

Microsoft President Brad Smith echoed these sentiments in a blog post regarding a similar 2019 protest at his company. Smith argued that if an ethical firm abandons a project over principled objections, it simply creates a vacuum for another company to fill — one that might not share those same ethical concerns.

“To withdraw from this market is to reduce our opportunity to engage in the public debate about how new technologies can best be used in a responsible way,” Smith wrote. “We are not going to withdraw from the future. In the most positive way possible, we are going to work to help shape it.”

The case against staying

Anthropic, maker of the AI chatbot Claude, learned the hard way that standing up for ethical AI use in the military didn’t work with the Defense Department. Rather than negotiate, the DOD dropped the company, blacklisted it at a cost of hundreds of millions of dollars, and called it “woke,” all while the military continued to use the company’s AI.

CEO Dario Amodei said the government retaliated against his company’s push. Anthropic has launched two lawsuits against the government but lost an appeal to temporarily prevent the blacklisting.

In the recent dispute between Google and its employees, the military contract that employees oppose does include language on ethical use. But that language is more a gentle push than a legal chokehold.

The contract states that “The parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control.” But the provision is nonbinding, and Google says it has no right to control or prevent the “lawful” governmental use of its technology. This echoes what Defense Secretary Pete Hegseth said during his attacks on Anthropic.

What Google employees are asking for now

Google employees are again pushing the company toward a more ethical approach to AI: no military use of its AI, restoration of the previous AI weapons pledge, establishment of an independent ethics oversight body, and an individual’s right to refuse morally objectionable projects.

These aren’t typical union demands. There is no mention of pay or working conditions; it’s all about ethics and who controls the direction of the technology. If their demands aren’t met, workers have threatened to continue quietly withholding work that would improve the company’s AI, abstaining discreetly enough to avoid detection by Google’s higher-ups.

“We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways,” employees wrote to Google CEO Sundar Pichai. “Making the wrong call right now would cause irreparable damage to Google’s reputation, business, and role in the world.”

As Google’s internal fight continues, it’s easy to see how difficult the road ahead is for the employees. Anthropic was the only AI company to hold its ground on its ethics, and now it’s the only major AI company not at the negotiating table with the U.S. military. If retaliation is what responsible engagement produces, the workers might be asking for something that the market has already decided against.



Ella Rae Greene, Editor In Chief
