The Generative AI Revolution: Key Legal Considerations for the Long Term Care and Senior Living Industry

For better or worse, generative artificial intelligence (AI) is already transforming the way we live and work. Within two months of its initial release to the public, ChatGPT reached 100 million monthly active users, making it at the time the fastest-growing consumer application in history.


ChatGPT and other popular generative AI tools such as GitHub Copilot, DALL-E, HarmonAI, and Runway offer powerful instruments that can generate computer code, audio, and videos with limited human involvement. The implications are immense and have sparked calls for new federal regulatory agencies and a pause on AI development. Some have even raised concerns about existential risk.

This alert analyzes how AI is already affecting the long term care and senior living industry, as well as some of the key legal considerations that may shape the future of generative AI tools.

While the impact artificial intelligence will have on the long term care and senior living industry is tied closely to its impact on the health care industry as a whole, certain niche areas specifically affect both institutional and home care settings. With respect to care, generative AI technology can help provide medication and meal reminders for seniors. Certain wearables can help care staff or caregivers monitor signs of a potential fall, changes in vital signs, and disrupted sleep patterns. Yet, the benefits go beyond residents or patients. Whether providing care or not, family members will benefit greatly from generative AI because they can stay better informed about the care of their loved ones. As with all new technology, the benefits must be weighed against concerns surrounding accuracy and privacy, as addressed in more detail below.

1. Accuracy and Reliability

For all their well-deserved accolades and hype, generative AI tools remain a work in progress, and errors in the long term care and senior living space can have significant consequences. Operators that utilize generative AI to assist in operations need to ensure that electronic health records are accurate and reliable. For example, an inaccurate assessment completed for a potential new resident can lead to significant gaps in that resident’s care. Additionally, documentation errors for items such as medications can not only harm the resident but also result in state survey violations.

2. Data Security and Confidentiality

Before utilizing generative AI tools, companies should consider whether the specific tools adhere to internal data security and confidentiality standards and, for some operators, whether the tools adhere to federal and state health privacy and security standards. Like any third-party software, these tools’ security and data processing practices vary. Some tools may store and use prompts and other information submitted by users. Others offer assurances that prompts and other information will be deleted or anonymized. Enterprise AI solutions, such as the Azure OpenAI Service, can also potentially help reduce privacy and data security risks by offering access to popular tools like ChatGPT, DALL-E, Codex, and more within the data security and confidentiality parameters required by the enterprise.

Before authorizing the use of generative AI tools, owners and operators, along with their legal counsel, should (i) carefully review the applicable terms of use, (ii) inquire about access to tools or features that may offer enhanced privacy, security, or confidentiality, and (iii) consider whether to limit or restrict access on company networks to any tools that do not satisfy company data security or confidentiality requirements.

3. Intellectual Property Protection and Enforcement

Content produced without significant human control and involvement is not protectable by US copyright or patent laws, creating a new orphan class of works with no human author and potentially no usage restrictions. That said, one key principle can go a long way to mitigating IP risk: generative AI tools should aid human creation, not replace it. Provided that generative AI tools are used merely to help with drafting or the creative process, then it is more likely that the resulting work product will be protectable under copyright or patent laws. In contrast, asking generative AI tools to create a finished work product, such as asking it to draft an entire legal brief, will likely deprive the final work product of protection under IP laws, not to mention the professional responsibility and ethical implications.

For long term care and senior living, it is easy to imagine a new operator using generative AI tools to develop policies and procedures, assessment forms, or even Residency Agreements in accordance with state requirements. To qualify for IP protection, however, these materials require meaningful human creation.

4. Labor and Employment

When Hollywood writers went on strike recently, one issue in particular generated headlines: a demand by the union to regulate the use of artificial intelligence on union projects, including prohibiting AI from writing or re-writing literary material; prohibiting its use as source material; and prohibiting the use of union content to train AI large language models. These demands are likely to presage future battles to maintain the primacy of human labor over cheaper or more efficient AI alternatives. Meanwhile, the Equal Employment Opportunity Commission is warning companies about the potential adverse impacts of using AI in employment decisions.

For long term care and senior living, the rise of generative AI may actually be seen as a benefit to combat staffing shortages. There are many tools already available that cut down on the time spent on administrative tasks. This “extra” time will allow care staff to spend more time providing care and monitoring patients and can help facilities overcome staffing shortages that have been impacting the industry for some time now.

5. Future Regulation

Earlier this year, Italy became the first Western country to ban ChatGPT, but it may not be the last. In the US, legislators and prominent industry voices have called for proactive federal regulation, including creating a new federal agency responsible for evaluating and licensing new AI technology. Others have suggested creating a federal private right of action that would make it easier for consumers to sue AI developers for harm they create. It seems unlikely that US legislators and regulators can overcome partisan divisions and enact a comprehensive framework, but as is becoming increasingly clear, these are unprecedented times.

For long term care and senior living, one potential challenge will be whether state licensing agencies are comfortable not only with the technology that is used, but also the policies and procedures that are developed regarding the use of generative AI. While many states do encourage innovative methods of care, operators must be sure the new technology that they intend to implement does not run afoul of any state-specific restrictions.

If you have questions about any of these issues or want to plan ahead, contact one of the authors or a member of our AI, Metaverse & Blockchain industry team.

And click here to watch our latest Fox Forum, in which we talk with Mike Pell, a visionary innovation leader at Microsoft, a principal investor in OpenAI, the trailblazing company behind the creation of ChatGPT.
