The Generative AI Revolution: Key Legal Considerations for the Consumer Products Industry
For better or worse, generative Artificial Intelligence (AI) is already transforming the way we live and work. Within two months of its initial public release, ChatGPT reached 100 million monthly active users, making it the fastest-growing consumer application in history. Other popular generative AI tools, such as GitHub Copilot, DALL-E, HarmonAI, and Runway, can generate computer code, images, songs, and videos, respectively, with limited human involvement. The implications are immense and have already sparked calls for new federal regulatory agencies, a pause on AI development, and even concerns about human extinction.
This alert analyzes how AI is already affecting the consumer products industry, as well as some of the key legal considerations that may shape the future of generative AI tools. You can also watch our latest Fox Forum, in which we talk with Mike Pell, the innovation leader at Microsoft, a principal investor in OpenAI, the company behind ChatGPT.
The role of AI in the consumer products industry is multifaceted: although it poses risks for companies, AI's potential to revolutionize the industry is already being realized and will continue to evolve rapidly.
Of significant concern are generative AI's ability to produce new or improved products and the ownership issues this raises for users of the technology. As discussed below, the US Court of Appeals for the Federal Circuit recently held that, according to the plain text of the Patent Act, AI cannot be deemed an inventor.[1] Additionally, the US Copyright Office Review Board denied copyright protection to a work that was wholly generated by AI. Where creators of consumer products integrate generative AI into the design and development process, the resulting products may lack IP protection given the current direction of the law.
Despite the risks posed by generative AI, the technology has enhanced consumer experiences while simultaneously optimizing business development and resources. For example, AI chatbots acting as online representatives can improve the customer experience if proper guardrails are employed. These services help users navigate websites and find the products they are looking for, helping to eliminate or minimize friction. Generative AI can also personalize marketing: AI algorithms analyze consumer patterns and advertise products to the segment of the market most likely to be interested in them. Similar algorithms can help companies optimize their supply chains by predicting demand and trends, a practice that can lead to less waste and increased sustainability. Additionally, generative AI is being incorporated directly into consumer products; these "smart" devices use AI to adapt to the preferences and habits of the individual consumer. Despite the beneficial effects on user experience, key issues remain for consumers, including privacy concerns and skepticism regarding AI-generated content.
AI’s potential to transform the consumer products industry is already evident, but as the technology continues to advance, companies and consumers should consider the legal issues we outline below.
1. Accuracy and Reliability
For all their well-deserved accolades and hype, generative AI tools remain a work in progress. Users, especially commercial enterprises, should never assume that AI-created works are accurate, non-infringing, or fit for commercial use. In fact, there have been numerous recorded instances in which generative AI tools have created works that arguably infringe the copyrights of existing works, invent facts, or cite phantom sources. It is also important to note that works created by generative AI may incorporate or display third-party trademarks or celebrity likenesses, which generally cannot be used for commercial purposes without appropriate rights or permissions. As with any third-party content, companies should carefully vet anything produced by generative AI before using it for commercial purposes.
2. Data Security and Confidentiality
Before utilizing generative AI tools, companies should consider whether the specific tools adhere to internal data security and confidentiality standards. As with any third-party software, the security and data processing practices of these tools vary. Some tools may store and use prompts and other information submitted by users, while others offer assurances that such information will be deleted or anonymized. Enterprise AI solutions, such as Microsoft's Azure OpenAI Service, can also help reduce privacy and data security risks by offering access to popular tools like ChatGPT, DALL-E, and Codex within the data security and confidentiality parameters required by the enterprise.
Before authorizing the use of generative AI tools, organizations and their legal counsel should (i) carefully review the applicable terms of use, (ii) inquire about access to tools or features that may offer enhanced privacy, security, or confidentiality, and (iii) consider whether to limit or restrict access on company networks to any tools that do not satisfy company data security or confidentiality requirements.
3. Software Development and Open Source Software
One of the most popular use cases for generative AI has been computer coding and software development. But the proliferation of AI tools like GitHub Copilot, as well as a pending lawsuit against its developers, has raised a number of questions for legal counsel about whether use of such tools could expose companies to legal claims or license obligations.
These concerns stem in part from the use of open source code libraries in the training data sets for Copilot and similar tools. While open source code is generally freely available for use, that does not mean it may be used without condition or limitation. In fact, open source licenses typically impose a variety of obligations on individuals and entities that incorporate open source code into their works. These may include requiring an attribution notice in the derivative work, providing access to source code, and/or requiring that the derivative work be made available on the same terms as the open source code.
Many companies, particularly those that develop valuable software products, cannot risk having open source code inadvertently included in their proprietary products or inadvertently disclosing proprietary code through insecure generative AI coding tools. That said, some AI developers now provide settings that allow coders to exclude AI-generated code that matches code in large public repositories (in other words, ensuring the AI assistant is not directly copying other public code), which can reduce the likelihood of an infringement claim or the inclusion of open source code. As with other AI-generated content, users should proceed cautiously, carefully reviewing and testing any AI-contributed code.
4. Content Creation and Fair Compensation
In a recent interview, Billy Corgan, the lead singer of The Smashing Pumpkins, predicted that “AI will change music forever” because once young artists figure out they can use generative AI tools to create new music, they won’t spend 10,000 hours in a basement the way he did. The same could be said for photography, visual art, writing, and other forms of creative expression.
This challenge to the notion of human authorship has ethical and legal implications. For example, generative AI tools have the potential to significantly undermine the IP royalty and licensing regimes that are intended to ensure human creators are fairly compensated for their work. Consider the recent example of the viral song, “Heart on My Sleeve,” which sounded like a collaboration between Drake and the Weeknd, but was in fact created entirely by AI. Before being removed from streaming services, the song racked up millions of plays—potentially depriving the real artists of royalties they would otherwise have earned from plays of their copyrighted songs. In response, some have suggested that human artists should be compensated when generative AI tools create works that mimic or are closely inspired by copyrighted works and/or that artists should be compensated if their works are used to train the large language models that make generative AI possible. Others have suggested that works should be clearly labeled if they are created by generative AI, so as to distinguish works created by humans from those created by machine.
5. Intellectual Property Protection and Enforcement
Content produced without significant human control and involvement is not protectable under US copyright or patent laws, creating a new orphan class of works with no human author and potentially no usage restrictions. That said, one key principle can go a long way toward mitigating IP risk: generative AI tools should aid human creation, not replace it. Provided that generative AI tools are used merely to assist with drafting or the creative process, the resulting work product is more likely to be protectable under copyright or patent laws. In contrast, asking a generative AI tool to create a finished work product, such as an entire legal brief, will likely deprive the final work of IP protection, to say nothing of the professional responsibility and ethical implications.
6. Labor and Employment
When Hollywood writers went on strike recently, one issue in particular generated headlines: a demand by the union to regulate the use of artificial intelligence on union projects, including prohibiting AI from writing or re-writing literary material; prohibiting its use as source material; and prohibiting the use of union content to train AI large language models. These demands are likely to presage future battles to maintain the primacy of human labor over cheaper or more efficient AI alternatives. Meanwhile, the Equal Employment Opportunity Commission is warning companies about the potential adverse impacts of using AI in employment decisions.
7. Future Regulation
Earlier this year, Italy became the first Western country to ban ChatGPT, but it may not be the last. In the US, legislators and prominent industry voices have called for proactive federal regulation, including the creation of a new federal agency that would be responsible for evaluating and licensing new AI technology. Others have suggested creating a federal private right of action that would make it easier for consumers to sue AI developers for harms their technologies cause. It seems unlikely that US legislators and regulators will overcome partisan divisions to enact a comprehensive framework anytime soon, but as is becoming increasingly clear, these are unprecedented times.
If you have questions about any of these issues or want to plan ahead, contact one of the authors or a member of our AI, Metaverse & Blockchain industry team.
Additional research and writing from Natasha Weis, a 2023 summer associate in ArentFox Schiff’s New York office and a law student at Fordham University.