Generative AI Compliance Is Essential
Generative AI is transforming investment management and data analysis, making compliance an essential area of focus for both investors and regulators. In November 2023, we covered the topic in a workshop titled “Legal & Compliance Considerations for Data, AI, and Large Language Models in Finance,” where we were joined by Sanaea Daruwalla of Zyte and Jessica Margolies of Schulte Roth & Zabel.
Given the popularity of that session, a follow-up panel was organized at our Next Level conference, held in New York on January 18th, 2024. The panel, “Compliance Considerations for Generative AI for Investment Managers and Data Providers,” examined AI’s role in financial services and the related regulatory landscape.
For more information on Eagle Alpha events and to register your interest to attend our upcoming Alternative Data Conference in London on May 16th, please click here.
Expert Views Shared By:
- Jessica Margolies, Special Counsel at Schulte Roth & Zabel: Jessica specializes in non-traditional research methods, including the use of generative AI.
- Emilie Abate, Director at Iron Road Partners: With extensive experience at the SEC and now at Iron Road Partners, Emilie has a comprehensive view of AI and data from a regulatory standpoint.
- Alik Sokolov, CEO of Sibli: Alik’s background in AI consulting and venture capital, along with his academic research in financial machine learning, positions him to understand the practical applications of AI in finance. His work focuses on helping investment firms incorporate AI into their research processes.
Disclaimer: The views and opinions expressed by the panellists are their own and do not necessarily reflect the official policy or position of the organization hosting the event, its affiliates, or any other agency, organization, employer, or company. Statements made by the panellists are intended for discussion purposes only and are not intended as investment advice.
How Generative AI is Being Approached by Funds
This section focuses on the strategic approaches that funds are adopting to harness the potential of Generative AI, with insights from industry experts on risk management, regulatory compliance, and the evolving use cases within the investment environment.
Alik: “Generative AI is really a paradigm for training machine learning models in a certain way that allows them to generate data for us… we’ve recently been able to scale up these models to be much larger than we ever did in the past. And in doing so we realised that they had a lot of interesting emerging capabilities that give them a lot of utility for business use.”
Key Takeaways:
- Alik stressed that understanding the risks involves considering two main factors: the nature of the training data (whether it’s internal or from external providers like Azure, AWS, or OpenAI) and how the model is being utilized, particularly given its propensity for random errors.
- Jessica built on this by categorizing generative AI use into two main types: internal and external. She pointed out the different risks and control needs associated with each type. Key considerations include understanding the scope of use, the nature of data input (such as IP, PII, or confidential information), and contractual aspects, especially when using licensed data from alternative data vendors.
- Emilie highlighted the SEC’s capabilities in cybersecurity and the potential regulatory repercussions of a breach, underscoring the importance of cyber diligence when onboarding AI vendors. The SEC, according to Emilie, views cybersecurity breaches not only as incidents but also as compliance failures, particularly failures to protect PII.
- Jessica shared insights on updating diligence processes in light of generative AI’s nuances. This includes broader questions about the scope of use, data inputs and outputs, intellectual property rights, confidentiality concerns, and bias in AI models. She emphasized that these processes are still evolving and require ongoing collaboration between stakeholders, including investment firms, vendors, and regulatory experts.
Practical Applications and Compliance Challenges
This section provides expert insights on balancing the innovative uses of generative AI with the importance of robust compliance and risk management strategies.
Jessica: “The way we think about it is there’s sort of two buckets… internal LLM generative AI use case that’s hosted on a client… and the external public facing enterprise version… What you’re inputting to each one of those types of tools is subjected to different kinds of risks and needs different kinds of controls.”
Emilie: “Compliance really needs to be in the mix with the business side of things and figure out how these tools are being used… making sure that whatever outputs are coming out… to put a disclaimer on it right, that says, ‘This was generated by AI and has not been verified for accuracy.’”
Key Takeaways:
- Alik addressed the challenges larger enterprises face in moving generative AI applications from experimentation to production. He explained that while it is relatively easy to achieve 95% accuracy with generative AI, attaining higher accuracy levels requires significantly more effort and resources. Alik emphasized the need for rigorous testing and understanding of these models before fully integrating them into operational processes, especially given the low risk tolerance of large organizations (a minimal testing sketch follows this list).
- Emilie discussed the increasing use of tools like ChatGPT in investment research and the compliance implications that arise as these tools become more integral to the investment decision-making process. She advised that compliance departments work closely with investment teams to understand how these AI tools are being used, and suggested best practices such as saving AI-generated outputs and attaching disclaimers about their accuracy to mitigate regulatory risks (a second sketch, of this logging practice, also follows the list).
- Drawing parallels with the evolution of alternative data, Jessica highlighted how the investment sector is adopting a more forward-thinking and collaborative approach towards integrating generative AI. She noted that the policies and procedures being developed are broader and more fluid, to accommodate the rapid changes and unknowns inherent in this technology. Jessica stressed the importance of understanding the application of generative AI and adapting compliance policies accordingly.
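To make Alik’s point about rigorous testing concrete, below is a minimal sketch of a pre-production accuracy gate. It is illustrative only and was not discussed on the panel: the function names, the exact-match scoring, and the 99% threshold are all assumptions.

```python
from typing import Callable

def accuracy(model: Callable[[str], str], test_set: list[tuple[str, str]]) -> float:
    """Fraction of test cases where the model's output matches the expected answer.

    Exact match is used for simplicity; real evaluations of generative output
    would need task-specific scoring (e.g., human review or graded rubrics).
    """
    correct = sum(1 for prompt, expected in test_set if model(prompt) == expected)
    return correct / len(test_set)

def ready_for_production(model: Callable[[str], str],
                         test_set: list[tuple[str, str]],
                         threshold: float = 0.99) -> bool:
    """Gate deployment on measured accuracy against a held-out test set."""
    return accuracy(model, test_set) >= threshold

# Example with a stand-in model: an impressive demo can still fail the gate.
demo_model = lambda prompt: "positive"
cases = [("Earnings beat estimates?", "positive"), ("Guidance cut?", "negative")]
print(ready_for_production(demo_model, cases))  # False: only 1 of 2 correct
```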
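And to illustrate Emilie’s suggested practice of saving AI-generated outputs and attaching a disclaimer, here is a minimal sketch of a logging wrapper. The disclaimer text follows her quote above; the file layout and field names are hypothetical, and a real firm would route this through its books-and-records system.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Disclaimer wording per Emilie's suggestion on the panel.
DISCLAIMER = "This was generated by AI and has not been verified for accuracy."

# Hypothetical audit-log location; a real firm would use its retention system.
AUDIT_LOG = Path("ai_output_audit.jsonl")

def record_ai_output(prompt: str, output: str, model: str, user: str) -> str:
    """Append the disclaimer to an AI-generated output and retain an audit copy.

    Returns the disclaimed text for downstream use (e.g., in a research note).
    """
    disclaimed = f"{output}\n\n[{DISCLAIMER}]"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,        # which tool produced the output
        "user": user,          # who ran the query, for compliance review
        "prompt": prompt,      # what was asked, so the output can be traced
        "output": disclaimed,  # the text as it will appear in research materials
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return disclaimed

# Example: an analyst saves a model-generated summary before citing it.
note = record_ai_output(
    prompt="Summarize Q3 earnings-call sentiment for ACME Corp.",
    output="Management tone was cautiously optimistic...",
    model="internal-llm-v1",
    user="analyst_jdoe",
)
```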
The SEC’s Evolving Focus on AI
This section explores the SEC’s intensified focus and strategic initiatives to oversee AI implementation within the investment industry.
Emilie: “The SEC has really started to focus on AI… they’re going to want to know, do you have a governance committee? Does compliance have a seat on that committee? Have you thought about what your compliance programme’s actually going to look like around this?… And for cybersecurity, they will definitely add an IT person to come look at this as well, if they think that it’s an issue.”
Key Takeaways:
- Emilie noted that SEC Chair Gary Gensler has expressed specific interest in this area and highlighted the predictive analytics rules as a development to watch. Emilie was surprised by the level of AI expertise within the SEC, including staff with PhDs focused on machine learning, which underscores the regulator’s serious approach to understanding and examining AI’s use in the investment sector.
- The SEC has been actively examining and inquiring into how investment advisers use AI, not just for investment decisions but also in their back-office operations. Emilie revealed that the SEC employs AI in its risk analysis and surveillance group, particularly for reviewing Form ADV filings (the Uniform Application for Investment Adviser Registration). She pointed out that when facing emerging technologies, the SEC often starts its examinations with marketing practices, scrutinizing how firms represent their use of AI to investors. This approach can lead to inquiries about the actual support for AI-related claims made in marketing materials.
- Emilie emphasized the importance of compliance departments understanding and overseeing the use of AI. She advised firms to be prepared for SEC examinations that might vary based on the specific AI application, such as the use of ChatGPT, white-label LLMs, or trading algorithms. Key areas of focus for the SEC include data sources, exception reporting, governance, and cybersecurity. She also highlighted the need for firms to have proper vendor due diligence and governance committees in place, with compliance playing a significant role.
Contractual Considerations
This section examines how due diligence and contractual frameworks interact in the context of generative AI, drawing on Jessica’s expertise to cover the IP ownership, confidentiality, and usage rights issues most relevant to investment management.
Jessica: “The diligence and the contract side really do go hand in hand… For generative AI… if you have a private version or an enterprise version, you’re going to be in a position to negotiate some terms… From an input perspective, you want to think about who owns the inputs… We also want to understand how the model is trained… From an output perspective, you want to incorporate… IP-related ownership language that you own, whatever you contributed to, and that it’s not the generative AI tools… From a confidentiality perspective, we want to make sure that all of our inputs are actually confidential, and that our confidential information is in fact, our own confidential information.”
Key Takeaways:
- The conversation shifted to contractual considerations, a critical aspect when integrating generative AI into investment management. Jessica, with her extensive experience in both due diligence and contract law, offered valuable insights into the intersection of these two areas.
- Jessica emphasized that due diligence and contract negotiations are deeply interconnected: learnings from the diligence process inform the contract terms, ensuring client protections are adequately addressed. She outlined key considerations, from negotiating terms for private or enterprise versions of generative AI tools to understanding the limitations of public versions:
- Intellectual Property (IP) Considerations: Jessica stressed the importance of ensuring that clients retain ownership of their inputs and that these inputs do not inadvertently train the generative AI model used by the vendor.
- Confidentiality: It’s crucial to ensure that all inputs are treated as confidential and that the client’s confidential information remains protected.
- Understanding Model Training: Knowing whether the AI model is trained using end-user information or licensed data is vital. This understanding influences the need for additional protections in the contract.
- Output Ownership: Clients should aim to establish ownership over the outputs generated by their contributions to the generative AI tool, avoiding any IP entanglements with the tool’s vendor.
- Usage Restrictions: Jessica pointed out the importance of negotiating terms around usage restrictions, particularly for investment advisory clients who might need to use AI outputs to create derivative works as part of their investment process.
- Jessica noted that while there is room for negotiation in private or enterprise versions of AI tools, public versions often come with fixed terms. In such cases, a heightened understanding of these terms is necessary, and firms must develop internal policies to address any potential gaps in the contract.
Conclusion
The use of AI in finance is growing, and with it the need to ensure the technology is used responsibly and in compliance with regulation. The panellists stressed the importance of knowing where an AI model’s data comes from and how the model is being used, especially given its propensity for errors, and of protecting personal and confidential information when using AI. They also recommended that firms disclose clearly when content is AI-generated and has not been verified for accuracy.
The SEC is paying closer attention to AI, expecting firms to maintain proper governance and oversight of its use, and scrutinizing how firms describe their AI capabilities to ensure investors are not misled. On the contractual side, firms using AI tools should establish clear ownership of both the information that goes into the AI and the outputs it produces, and should ensure that confidential information stays confidential.