Addressing Potential Perils In Algorithmic Health Tech
Originally published by Law360
Algorithmic tools, including machine learning and other artificial intelligence technologies, are becoming more common in the health care sector for predicting health outcomes and influencing clinicians’ decision making.
For all the potential benefits these data-driven innovations offer, critics say they could also have unintended consequences. Recent legislative and enforcement activity in California reflects mounting scrutiny of algorithmic technology in health care and a continuing industry focus on health equity.
The Promises and Perils of Algorithmic Technology in Health Care
Hospitals and other health care providers increasingly use systems, software, and processes based on algorithmic computations to diagnose diseases, forecast costs of care, and recommend treatment options to clinicians.
These tools and technologies have the potential to improve the quality of care by preventing or detecting human errors resulting from subjective decision making. They can also foster gains in the efficiency and cost effectiveness of care through the rapid dissemination and processing of large quantities of health data.
The transformative potential of algorithmic tools is evident in myriad studies and applications showing how these tools can improve decision making in such areas as predicting risk for heart disease and stroke, pneumonia in the emergency department,[1] and hospital inpatient mortality.[2]
On a broader scale, algorithmic technology is one of the major foundational elements that could propel the health care system into the metaverse.[3]
Yet, critics contend that algorithmic tools are not without risks.
One predominant concern is that algorithms may embed racial, socioeconomic, and other biases that exacerbate existing health inequities or even create new ones. In one frequently cited 2019 study published in the journal Science,[4] for example, researchers concluded that a commercial prediction algorithm widely used by many health systems perpetuated racial bias by using health care costs as a proxy for health care needs.
Because Black patients historically did not consume health care services to the same degree as comparably ill white patients, the researchers found, the algorithm assigned them relatively lower predicted health care costs.
Consequently, the algorithm recommended fewer services for Black patients, even when they were as sick as, or sicker than, the white patients it identified.
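The mechanism the researchers described can be made concrete with a toy simulation. The Python sketch below is purely illustrative: the data, the group labels, and the 30% cost gap are all assumptions, and the "model" is simplified to a perfect cost predictor rather than the study's actual algorithm. It shows how a score that targets costs rather than needs flags fewer members of a group that incurs lower costs at the same level of illness.

```python
# Hypothetical sketch (assumed data, not the study's actual model or data):
# when a risk score targets cost instead of need, a group that incurs
# lower cost at the same level of need is systematically under-flagged.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

group = rng.integers(0, 2, n)   # two groups, equally sick by construction
need = rng.gamma(2.0, 1.0, n)   # true health need (illness severity)

# Assumption: group 1 incurs ~30% lower cost at the same level of need,
# e.g., because of historical barriers to accessing care.
cost = need * np.where(group == 1, 0.7, 1.0)

# Stand-in for a trained model: suppose the algorithm predicts cost
# perfectly, so its "risk score" simply reproduces the cost.
risk_score = cost

# Enroll the top 10% of risk scores in a care-management program.
flagged = risk_score >= np.quantile(risk_score, 0.9)

# Among the sickest decile (ranked by true need), compare flag rates.
sickest = need >= np.quantile(need, 0.9)
for g in (0, 1):
    rate = flagged[sickest & (group == g)].mean()
    print(f"group {g}: flag rate among sickest decile = {rate:.1%}")
```

In this toy setup, both groups are equally sick by construction, yet the group with suppressed costs is flagged for care management at a fraction of the other group's rate, even though the cost predictor itself is flawless.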
As algorithmic tools automate more aspects of care delivery, some critics emphasize that such risks of racial bias and other unintended consequences heighten the need to preserve clinicians’ professional clinical judgment and patient advocacy capabilities.
California Governor Vetoes Bill Giving Hospital Workers the Right to Override Health Information Technology
A.B. 858 reflects many California legislators’ skepticism toward algorithmic technology in health care.[5]
The bill purported to “safeguard the professional judgment of workers providing direct patient care [from] a deeply flawed medical technological system, that among many issues, has shown their commercial algorithms exhibit significant racial bias.”
Although A.B. 858 passed both the California Assembly and Senate, Gov. Gavin Newsom vetoed the legislation on Sept. 23. Without elaboration, his veto message[6] says the author and sponsor requested the veto.
A.B. 858 would have permitted a hospital worker who provides direct patient care to “override health information technology and clinical practice guidelines if, in their professional judgment, and in accordance with their scope of practice … it is in the best interest of the patient to do so.”
The legislation also would have prohibited a hospital employer from retaliating or discriminating against a worker for engaging in such activity or discussing it with others.
Reflecting additional concerns about job displacement, the bill would have provided hospital workers with rights to receive prior notification of new technology that “materially affects the job of the workers or their patients,” to receive education and training on new technology, and to provide input on new technology implementation processes.
The anti-retaliation provisions of A.B. 858 are notably similar to those under existing law in Health & Safety Code, Section 1278.5.
That statute prohibits a hospital or other health facility from retaliating or discriminating against a health care worker for engaging in various protected activities, including filing a complaint and participating in any “investigation or administrative proceeding related to the quality of care, services, or conditions at the facility.”
Given the breadth of the statute, a health care worker conceivably might invoke its protections to complain about and challenge a health facility’s usage of algorithmic technology, in much the same way as A.B. 858 contemplated.
California Attorney General Investigates Hospitals’ Use of Algorithmic Technology
As A.B. 858 moved through the legislative process, California Attorney General Rob Bonta informed 30 hospitals and health systems by letter dated Aug. 31 that his office was evaluating “how healthcare facilities and other providers are addressing racial and ethnic disparities in commercial decision-making tools and algorithms.”[7]
As examples of how bias in algorithmic tools and technologies may occur, the letter noted that “the data used to construct the tool may not accurately represent the patient population to which the tool is applied” and that “tools may be trained to predict outcomes (e.g., healthcare costs) that are not the same as their objectives (e.g., healthcare needs).”
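The first failure mode the letter describes can likewise be illustrated with a small simulation. In the hypothetical Python sketch below, all data and parameters are assumptions for illustration only: a biomarker cutoff is calibrated on one patient population and then applied to a second population that carries higher underlying risk at the same biomarker values, so the cutoff silently misses a larger share of the second population's true cases.

```python
# Hypothetical sketch (assumed data) of the letter's first failure mode:
# a risk cutoff tuned on one patient population, applied to another.
import numpy as np

rng = np.random.default_rng(1)

def simulate(n, baseline):
    """Simulated patients with a biomarker x; outcome risk rises with x,
    but the baseline risk at any given biomarker level differs by
    population."""
    x = rng.normal(0, 1, n)
    p = 1 / (1 + np.exp(-(x + baseline)))  # probability of adverse outcome
    y = rng.random(n) < p                  # True = adverse outcome occurs
    return x, y

# Calibrate a biomarker cutoff on population A to catch ~90% of its cases.
x_a, y_a = simulate(20_000, baseline=-1.0)
cutoff = np.quantile(x_a[y_a], 0.10)

# Population B: same biomarker distribution, higher risk at the same values.
x_b, y_b = simulate(20_000, baseline=0.5)

# The A-derived cutoff misses far more of B's true cases.
for name, x, y in [("A", x_a, y_a), ("B", x_b, y_b)]:
    missed = ((x < cutoff) & y).sum() / y.sum()
    print(f"population {name}: share of true cases missed = {missed:.1%}")
```

The design point is that the tool is not "wrong" on the population it was built from; the harm appears only when it is deployed on a population the training data did not represent, which is exactly why the letter asks who is responsible for evaluating these tools in use.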
The letter requested the following information by Oct. 15:
- A list of all commercially available or purchased decision-making tools, products, software systems, or algorithmic methodologies the selected hospitals use for various aspects of facility operations, such as clinical decision support and billing;
- The purposes of these tools, how they inform decision making, and related policies, procedures, and training practices; and
- The individuals responsible for evaluating the purposes and uses of these tools and for “ensuring that they do not have a disparate impact based on race or other protected characteristics.”
Had A.B. 858 become law, it would have provided clear legal authority to support this inquiry, but the governor’s veto forecloses that. Hospitals that received the letter may therefore ask: On what legal basis may the attorney general conduct this probe?
To support its request, the letter cites various statutes the attorney general enforces. For example, the letter references Health & Safety Code, Section 1317, which prohibits hospitals from discriminating based on various protected characteristics in the provision of emergency medical treatment. The letter also notes other anti-discrimination laws that apply broadly across business sectors.
Should the attorney general take enforcement action against a hospital based on the investigation’s findings, the hospital may contest the extent to which these generally applicable laws regulate its use of rapidly developing algorithmic technology.
Hospitals that received the letter should also be aware that, in a press release announcing the inquiry, the attorney general described this information request as a first step.[8] The letter suggests that investigative subpoenas for pertinent documents and data may follow, and it instructs the hospitals to take immediate action to preserve those materials.
Key Takeaways
As best practices continue to develop, health care providers, their technology vendors, and other stakeholders should consider the downstream impacts of algorithmic tools and technologies. If ignored, these impacts could expose a party to liability.
Questions stakeholders should ask when adopting a new technology, or evaluating an existing one, include:
- Will the technology manifest bias against any vulnerable or disadvantaged patient population, potentially resulting in the denial of services or rendering of improper care? Could this occur, for example, because the technology operates on underlying data sources and assumptions that may be germane to one population but not another?
- Will the technology replace any workers and result in layoffs? If so, will a transparent, clearly communicated decision-making process be followed in adopting the technology?
- To what extent does the technology facilitate the exercise of professional judgment by a clinician who believes deviation from the technology is in a patient’s best interests?
A.B. 858 and the attorney general’s pending investigation show that policymakers and regulators are poised to act to address these questions. Health care providers should stay alert to further regulatory developments that lie ahead.
[1] https://bjo.bmj.com/content/early/2022/08/23/bjo-2022-321842.
[2] https://www.nature.com/articles/s41746-018-0029-1.
[3] https://www.afslaw.com/perspectives/health-care-counsel-blog/the-metave….
[4] https://www.science.org/doi/10.1126/science.aax2342.
[5] https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=2….
[6] https://www.gov.ca.gov/wp-content/uploads/2022/09/AB-858-VETO.pdf?emrc=….
[7] https://oag.ca.gov/system/files/attachments/press-docs/8-31-22%20HRA%20….
[8] https://oag.ca.gov/news/press-releases/attorney-general-bonta-launches-….