AI Risks and Rewards for Private Companies

Beware not only of “hallucinations,” but also of inadvertent disclosure of proprietary information and limits on copyright protection.

The rapid expansion of AI capabilities has reshaped a wide range of industries, prompting businesses and individuals to reconsider their operational strategies. While AI can be a powerful tool, it also carries significant risks of which private company boards and shareholders must be aware, particularly in the realm of generative AI.

Benefits of AI for Private Companies

Companies can harness the power of AI to manage their books and records, streamlining financial operations and enhancing overall efficiency. By employing AI-powered tools, businesses can automate tasks such as data entry, invoicing and expense tracking, reducing the risk of human error and saving valuable time. AI-driven analytics can also support proactive risk management by flagging potential fraud or discrepancies in financial records. AI can likewise contribute to companies’ ESG efforts by optimizing resource allocation, streamlining production processes and enabling predictive analytics for demand forecasting, thereby reducing overproduction and material usage. Taken together, these technologies can help businesses transform their financial management processes, improving accuracy, compliance and decision-making in an increasingly complex financial landscape.

Private companies can also reap significant corporate governance benefits from AI if proper caution is exercised. For example, large language models (LLMs) can assist in drafting legal documents, such as bylaws, formation documents and ancillary agreements. However, LLMs are flawed works in progress and should be used as a starting point rather than a final draft. Users must carefully edit and review any AI-generated content.

Inadvertent Disclosure of Proprietary Information

While the benefits of AI integration are undeniably transformative, it is crucial to acknowledge and address the risks that accompany its implementation. The potential for inadvertent disclosure of trade secrets and confidential information through AI tools is an increasingly critical concern for businesses. Confidentiality can be compromised when sensitive data is entered into public AI models such as ChatGPT. Before adopting generative AI tools, companies must evaluate whether the specific tools comply with their internal data security and confidentiality standards. Security and data processing practices can differ significantly among third-party providers: some tools might store and utilize prompts and other user-submitted information, while others may guarantee the deletion or anonymization of such data.

Enterprise AI solutions can help mitigate privacy and data security risks by providing access to popular tools such as ChatGPT, DALL-E and Codex within the enterprise’s own security and confidentiality boundaries. Before granting permission to use generative AI tools, organizations and their legal advisors should thoroughly examine the relevant terms of use; inquire about the availability of tools or features with enhanced privacy, security or confidentiality; and consider restricting access within company networks to tools that do not meet the organization’s data security or confidentiality requirements.

Protecting Copyright

As AI adoption continues, trademark and copyright disputes have become increasingly prevalent. Private companies that produce copyrighted content should be aware that the U.S. Copyright Office will reject applications listing an AI model as the author of a work. Similarly, in the patent context, the United States Court of Appeals for the Federal Circuit has held that only human beings, not AI systems, can be named as inventors, a ruling the Supreme Court declined to review in April 2023. The situation grows more intricate when generative AI plays only a partial role in the creative process. The Copyright Office has demonstrated a willingness to grant copyright protection to works created by generative AI and subsequently modified by humans, as long as the resulting work displays a sufficient level of original authorship. Determining whether a human-altered generative AI work meets this requirement will necessitate a case-by-case evaluation and likely require thorough documentation of both human and AI-generated contributions. Furthermore, applicants for federal copyright registration must disclose whether the subject work includes AI-generated content; failure to do so could result in cancellation of the registration.

Beware of Hallucinations

Despite the excitement surrounding generative AI tools, they remain a work in progress. Commercial enterprises should not assume that AI-generated works are accurate, non-infringing or suitable for commercial use. Companies should be mindful of AI “hallucinations,” wherein a generative AI model fabricates facts or cites nonexistent sources. There have also been instances of generative AI tools producing content that potentially infringes on existing copyrights. Additionally, AI-generated works may include third-party trademarks or celebrity likenesses, which typically require proper rights or permissions for commercial use. Companies must diligently review any content created by generative AI before employing it for commercial purposes.

While AI offers numerous benefits for private companies, such as streamlining financial operations and enhancing corporate governance, it also presents risks that must be carefully managed. These risks include inadvertent disclosure of confidential information, copyright disputes and potential inaccuracies in AI-generated content. Companies should thoroughly evaluate AI tools for compliance with data security and confidentiality standards, and diligently review AI-generated content before commercial use. By adopting a cautious approach, businesses can harness the power of AI while mitigating associated risks.

*This article was originally published by Private Company Director.
