New State Law Will Restrict Health Insurers’ Use of AI
California is taking aim at algorithms used by insurers to make prior authorization and other coverage decisions with a new law that will put limitations on how artificial intelligence (AI)–generated formulas are employed.
The state also will start requiring providers to inform consumers when patient communications are generated by AI.
The laws reflect a growing trend among state lawmakers to more strictly regulate the use of AI in healthcare and other arenas in the absence of federal action.
The Physicians Make Decisions Act (SB 1120) takes effect on January 1. It was supported by dozens of physician organizations and medical groups, the California Hospital Association, and several patient advocacy groups. Insurance industry groups opposed the bill.
“As physicians, we recognize that AI can be an important tool for improving healthcare, but it should not replace physicians’ decision-making,” California Medical Association (CMA) President Tanya W. Spirtos, MD, said in a statement.
The new law ensures that the human element will always determine quality medical treatments for patients, said State Senator Josh Becker (D-Menlo Park), who sponsored the legislation.
“An algorithm does not fully know and understand a patient’s medical history and needs and can lead to erroneous or biased decisions on medical treatment,” he said.
Law Imposes Guardrails
The new law requires that any use of AI or other algorithms be based on a patient’s medical history and individual clinical situation. A decision can’t be based solely on a group dataset, can’t supplant a clinician’s decision-making, and must be approved by a human physician.
The algorithm is required to be “fairly and equitably applied,” according to the law.
Algorithms have the potential to be biased, Sara Murray, MD, vice president and chief health AI officer for UCSF Health, told Medscape Medical News. She cited a recent paper in Science that found that a widely used algorithm (employed by health systems, not insurers) led to Black patients who were sicker than White patients receiving less care.
The law attempts to address the data used to train insurers’ algorithms. “AI tools are only as accurate as the data and algorithm inputs going into them,” wrote Carmel Shachar, JD, MPH, Amy Killelea, and Sara Gerke in Health Affairs.
“It’s really important to have transparency about what data is used as the training set, as well as to make sure that it matches what population the algorithm is actually being used on,” Shachar, assistant clinical professor of law at Harvard Law School, Cambridge, Massachusetts, told Medscape Medical News.
Having a human sign off on AI-generated decisions is important, but “also has risks,” Murray said. “We can become over-reliant on these tools, and we’re also biased and maybe not prone to seeing bias if an algorithm is giving us biased output.”
An investigation by ProPublica in 2023 alleged that a Cigna algorithm allowed doctors to quickly reject claims on medical grounds, without reviewing the patients’ files. The publication reported that Cigna-employed physicians denied more than 300,000 claims in a 2-month period, spending an average of 1.2 seconds on each.
California is “reacting to real fears,” Murray said.
Federal Oversight Lacking
While AI used to detect disease and improve diagnosis and treatment is regulated by the US Food and Drug Administration, the AI tools targeted by lawmakers in SB 1120 “are not subjected to the same scrutiny and have little independent oversight,” said Anna Yap, MD, a Sacramento emergency medicine physician, when she testified earlier in 2024 in favor of SB 1120 on behalf of the CMA.
The California law “is a good first step,” Shachar said. Algorithms have “been sort of a blind spot in our regulatory system,” she said. The new law “empowers state regulators to act, and it provides some sort of accountability and requirements for how insurers are implementing their AI,” she said.
Shachar and colleagues noted that AI had the potential to streamline and speed up prior authorization decision-making.
Neil Busis, MD, a neurologist with the New York University Grossman School of Medicine in New York City, agreed in a paper in JAMA Neurology. “If it can be trained with the proper data, AI can potentially improve prior authorization by reducing administrative burdens, improving efficiency, and enhancing the overall experience for patients, clinicians, and payers,” he wrote.
In a 2022 report, McKinsey & Company touted AI’s potential to make prior authorization more efficient. But the authors noted that the AI would need to be monitored to ensure that it did not learn from biased datasets that “could result in unintended or inappropriate decisions,” especially for patients of lower socioeconomic status. The report concluded that “highly experienced clinicians will remain the ultimate PA decision-makers.”
The American Medical Association (AMA) did not take a position on SB 1120, but in 2023 the organization adopted a similar policy calling for AI-based algorithms to use clinical criteria and to include reviews by physicians and other health professionals who have expertise in the service under review and no incentive to deny care.
AMA Board Member Marilyn Heine, MD, said at the time that even if AI streamlines prior authorization, the volume of requests is growing. “The bottom line remains the same: We must reduce the number of things that are subject to prior authorization,” she said.
Shachar and colleagues cautioned that AI could incentivize even more reviews. “We may see ‘review creep,’” they wrote.
Lawsuits Mount Against Insurers Over AI Use
In the absence of regulation, several lawsuits have been filed against insurers for their use of AI-based algorithms.
The families of two deceased Medicare Advantage recipients who lived in Minnesota sued UnitedHealth in 2023, alleging that the company’s algorithm had a 90% error rate and was illegally employed, according to a CBS News report.
The US Senate Permanent Subcommittee on Investigations reported in October that its in-depth inquiry had found that insurers were using automated prior authorization algorithms to systematically deny post-acute care services for Medicare Advantage enrollees at far higher rates than other types of care.
In March, an individual filed a class action suit against Cigna over its use of the algorithm to deny claims, citing the information reported by ProPublica.
Shachar said that lawsuits are not a satisfactory way to get a handle on the algorithms, in part because “you have to wait for the harm.” The tort system is still working out how various aspects of the law will apply to insurers’ use of AI, she added.
More states are likely to follow in California’s footsteps, said Shachar.
An AMA spokesman agreed. “The AMA anticipates future legislative activity in 2025 as we are seeing an increased number of reports about health plans using AI to systematically deny claims,” AMA’s R.J. Mills told Medscape Medical News.
New Rules for AI-Generated Provider Communications
The California governor also signed AB 3030, which requires AI-generated patient communications to disclose that they were generated by AI, unless the communication is first read and reviewed by a human licensed or certified healthcare provider.
Murray said UCSF Health is already doing just that.
The health system has been testing the use of AI to help draft physicians’ responses to patient messages, with the goal of helping them answer more quickly. The messages include text informing patients that AI was used to assist the doctor and that the physician still reviews every communication.
“We just wanted to be very transparent with patients,” said Murray.
AI is “going to be very good for healthcare,” she said. But the new California laws were necessary to provide “guardrails.”
Shachar and Murray reported no relevant financial relationships.
Alicia Ault is a Saint Petersburg, Florida-based freelance journalist whose work has appeared in publications including JAMA and Smithsonian.com. You can find her on X @aliciaault.