States look to tell AI what jobs it can’t take, as UBI calls get louder


Artificial intelligence has become the personal assistant, doctor, lawyer and even psychologist for a growing number of Americans, so much so that mass layoffs are now being blamed on AI. The question becomes: how many workers must this new technology displace before calls for universal basic income become too loud to ignore?

Several states are considering or have passed legislation to regulate the use of artificial intelligence for health, legal and other purposes. As AI continues to become a larger part of American life and take over jobs, former presidential candidate Andrew Yang has renewed his calls for a universal basic income.

State regulations

In New York, lawmakers are moving legislation forward that would ban AI chatbots from acting as a licensed professional, like a lawyer, psychiatrist or doctor.

“People get advice from these systems, not being aware that these are not professionals in any serious sense of the term,” Hamid Ekbia, professor at Syracuse University and founding director of the Academic Alliance for AI Policy, told Straight Arrow News. “So overall, I think this is a nice move.”

That New York legislation would also allow users to file civil lawsuits against the companies that own the chatbots.

“If a human were to engage in some of this conduct, they might be thought to be engaging in the unauthorized practice of a profession,” Cary Coglianese, professor of law and political science at the University of Pennsylvania and the director of the Penn Program on Regulation, told SAN.

New York is not alone in this. Last year, Illinois enacted legislation that bans the use of AI for mental health or therapeutic decisions without oversight by a licensed clinician.

Nevada lawmakers enacted a similar law last year, banning AI companies from marketing their systems as mental health providers and prohibiting schools from using AI to perform the mental health duties typically done by psychologists or counselors.

In Oregon, lawmakers sent legislation to the governor that would require AI chatbots to implement safeguards if users express ideas of suicide or self-harm.

Lawmakers in Arizona, Georgia, Iowa, Tennessee and other states have proposed similar legislation.

“There’s been a long history of regulating professions and along with regulating professions has come state laws that ban the unauthorized practice of a profession,” Coglianese said. “And if one views what an AI chatbot might be doing as the unauthorized practice of medicine, psychology, social work, law, etcetera, then by that analogy, this is not anything that should be outside the purview of states to address through laws and regulations.”

One tech founder had a unique take on this kind of legislation.

“Affirmative action for humans,” Lulu Cheng Meservey wrote on X. “It was only a matter of time.”

Ekbia called that kind of take “madness.”

“It comes from this thinking that we are in a race with our own technology,” he said. “That, to me, is absurd.”

AI usage

According to OpenAI, at least 40 million people use ChatGPT to get answers to their health care questions. Meanwhile, a Gallup poll found roughly 70% of Americans believe the health care system has major problems or is in a state of crisis.

There are numerous reasons people turn to AI for help, including its much lower cost.

Lawyers are typically expensive, and the cost of health care continues to rise.

“If somebody can afford, let’s say, professional legal service, or even medical service or health-related issues, they can go to the real professionals,” Ekbia said. “Somebody who doesn’t, this would have provided them with some advice.”

That’s where some of the concern about this type of legislation comes into play.

“It deprives consumers of a cheap or inexpensive source of information, right?” Ekbia said. “And that’s a concern that many consumer advocates have raised about the bill and what it does. It creates a gap, as usual, between the haves and have-nots.”

Coglianese said it may be tough to draft legislation that creates guardrails keeping AI from venturing over any lines.

“I would say the better way to think about this is ways to encourage and require, even AI developers, to keep their technologies on a leash and find ways that they can keep monitoring what it’s doing, how it’s performing, and adapting it so that it can avoid issues where it’s maybe venturing into a territory that’s risky to the public,” he said.

Universal basic income?

While running for president in the 2020 election, one of Yang’s major promises was what he called the “Freedom Dividend.” The idea was a universal basic income, or UBI, of $1,000 a month for every American.

Yang cited predicted mass job loss due to AI and other automation as part of the justification for UBI.

A recent report from the Senate Health, Education, Labor and Pensions Committee found the U.S. could lose 100 million jobs to AI and automation over the next ten years.

So, as AI continues to put more people out of work, has the time for a universal basic income arrived?

“Where does this AI come from?” Scott Santens, founder and CEO of the Income to Support All Foundation, told SAN. “The data that has trained it is all of our data. It’s all the books and articles and songs and movies and tweets and Facebook posts and just everything that we have created as society has gone into these pioneer models. And I think people should see UBI as being their share of the productivity growth that we can expect from this.”

Naturally, the biggest concern about UBI is its cost.

“We haven’t seen one because it’s so expensive, right?” Eva Vivalt, an assistant professor in the department of economics at the University of Toronto, told SAN. “People may be divided on how to finance it, and if you are substituting from existing social programs, then that could be difficult, because you would have constituencies who are in support of those other programs.”

There’s already been significant pushback on funding other social programs like SNAP.

“I would argue that converting SNAP from a voucher to cash would be better so that people could actually buy things like hot food,” Santens said. “They could actually use it on electricity, utility bills, water, rent. You could use it for all kinds of things.”

Santens acknowledged skepticism about giving people cash comes from what they’ll actually spend it on.

“You’re just limiting people’s choices by not trusting them, and people just don’t behave that way,” he said.

While there is no non-pilot UBI in the U.S., the most comparable program currently in place is the Alaska Permanent Fund dividend. That’s a state-owned investment fund created from Alaska’s oil revenues that pays out annual dividends to Alaskans from a portion of its earnings.

“The amount, though, is maybe not enough that people would really like to call it a basic income,” Vivalt said.

The fund was established in 1976, and it has paid annual dividends to Alaskans since 1982.

“On average, it’s around $1,500,” Santens said.

Giving every American a fixed amount of money each month would certainly be expensive. At Yang’s proposed $1,000 a month, roughly 330 million Americans receiving $12,000 a year comes to about $4 trillion annually. For comparison, the U.S. government spends roughly $7 trillion per year in total.

“I think it’s important to recognize that this should be seen as an investment in individuals and society, instead of just purely in expenditure with the cost,” Santens said. “We know that from basic income studies, that crime reduces significantly, that health improves significantly, and those two things alone are very costly.”

Ella Rae Greene, Editor In Chief
