This article is the second installment in our series of articles focusing on the emerging issues surrounding the use of artificial intelligence. In this article, we address the use of AI in the financial industry, outline certain risks associated with the use of AI by financial service firms, and provide a few best-practice tips for market participants using AI as part of their activities.
Prepared by Jephte Lanthia, Partner and Co-Founder at Basswood Counsel
From the innovative use of stones as tools, to the wheel, to modern technology that reaches beyond the earth’s atmosphere, human beings have always embraced science and technology to enhance their quality of life. Our desire to embrace technology is often rooted in our basic nature as humans: to perform mundane tasks more efficiently, to benefit economically from the use of that technology—especially in a transactional environment where members of a community barter with one another for goods and services—and to become more competitive as a means of survival in a world often governed by the law of “you eat what you kill.”
Unsurprisingly, technology and finance have long shared a symbiotic relationship: the financial industry has consistently embraced modern technology to maximize efficiency and gain competitive advantages. Artificial intelligence (AI) is the latest such technology. More than the buzzword du jour, AI promises transformative opportunities to market participants such as broker-dealers, investment advisers and managers, lending institutions, and others in the financial industry.
Let’s Level-set
The use of AI goes back to at least the 1950s and has evolved over the decades to the point where machines can perform complex tasks, including learning new concepts, reasoning, and drawing useful, human-like conclusions. Fundamentally, the technology began with predetermined “if…then” algorithms: if Premise A occurs, then the machine spits out Conclusion B, usually faster than the human brain can process. Through machine learning (“ML”), however, computer systems can analyze relationships among several variables and recognize patterns in big data, whether structured (e.g., databases and spreadsheets) or unstructured (e.g., text, images, or social media posts), before rendering a conclusion. In such a case, the process of reasoning is not as linear as “if A, then B.” Beyond machine learning, generative AI (“Gen AI”) enables machines not only to reach conclusions based on learned patterns, but also to solve problems by simulating human intelligence, extrapolating and synthesizing data to predict possible outcomes—thereby achieving what educators and psychologists would consider a high level of cognition on Bloom’s taxonomy (provided, however, that the machine does not hallucinate).
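To make the distinction concrete, the following is a minimal, purely illustrative sketch (in Python, using the widely available scikit-learn library) contrasting a hard-coded “if…then” rule with a model that learns its decision boundary from examples. The scenario, feature names, figures, and labels are hypothetical assumptions, not any firm’s actual system.

```python
# Illustrative only: a fixed "if A, then B" rule versus a model that learns from data.
from sklearn.linear_model import LogisticRegression

# Rule-based logic: the conclusion is hard-coded in advance.
def rule_based_flag(amount_thousands: float) -> bool:
    # "If the transaction exceeds $10,000, then flag it."
    return amount_thousands > 10

# Machine learning: the boundary is inferred from labeled examples.
# Hypothetical training data: [amount in $ thousands, hour of day] -> flagged (1) or not (0)
X = [[0.5, 14], [12, 3], [0.8, 10], [15, 2], [0.3, 16], [11.5, 4]]
y = [0, 1, 0, 1, 0, 1]
model = LogisticRegression().fit(X, y)

print(rule_based_flag(9.5))         # the rule looks only at the amount
print(model.predict([[9.5, 3]]))    # the model weighs the pattern it learned across both features
```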
AI Uses in Certain Financial Sectors
The financial industry, competitive by nature, is primed for the use of AI. Some firms incorporate AI into their activities by accessing cloud-based AI platforms through third-party providers, while others, depending on the cost-benefit analysis, may implement an on-premises, proprietary AI solution, in which the firm can train the platform on data relating to its specific needs and, more importantly, process and maintain that data within a more controlled environment. Regardless of whether a firm uses a third-party cloud-based solution or an on-premises one, the use of AI offers several benefits, along with potential risks, to market participants such as broker-dealers, investment advisers, and asset managers in their quest to improve operational efficiency, provide an optimal customer experience, and maintain a competitive edge while maximizing profits and minimizing risks.
Broker-dealers
The services offered by broker-dealers (“BDs”) include providing investment advice, making recommendations, and executing trades on behalf of customers. Additionally, some BDs publish investment research for institutional clients or assist companies in capital-raising activities. Operationally, a BD firm is often divided into the front office, middle office, and back office. While the front office is generally responsible for client interaction, the middle- and back-office operations provide administrative and support services related to executing and confirming trades, managing assets, and supporting client transactions, as well as performing compliance and risk management for the firm.
BD firms can benefit from using AI across their operational structure. According to a 2020 FINRA report, a number of BDs have explored the use of AI to target and interact with customers.1 AI can be applied in client segmentation—the practice of dividing clients into discrete groups—to allow the BD to tailor its recommendations and advice to each client’s specific needs. In fact, according to the report, some broker-dealer firms are using AI tools to analyze customers’ investment behaviors, website and mobile app usage, and past client inquiries to provide customized content to those clients.
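As a purely hypothetical illustration of how such segmentation might work under the hood, the sketch below groups clients by usage and trading patterns with a standard clustering algorithm. The feature names and figures are assumptions for illustration, not any firm’s actual data or model.

```python
# Illustrative client segmentation via clustering; all data is hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-client features: [trades_per_month, avg_trade_size_usd, mobile_app_logins]
clients = np.array([
    [2,   5_000,  3],
    [40,  1_200, 55],
    [1,  80_000,  1],
    [35,    900, 60],
    [3,  60_000,  2],
    [45,  1_500, 70],
])

X = StandardScaler().fit_transform(clients)                      # put features on a comparable scale
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for client, segment in zip(clients, segments):
    print(f"client {client} -> segment {segment}")               # tailor content and advice by segment
```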
With respect to the middle or back office of a BD, the use of AI is particularly advantageous for tasks that are repetitive or rule-based. For example, AI enables firms to process numerous regulations and automate aspects of complying with those rules. Further, because the technology can capture and analyze large amounts of structured and unstructured data in various forms from both internal and external sources in order to identify patterns and anomalies, AI can assist firms in performing risk management as well as supervision and training of representatives. For example, a firm can monitor its representatives’ communications, client interactions, trades, recommendations, and other activities to assess whether they are complying with the firm’s policies and regulatory requirements.
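For instance, the anomaly-flagging piece of such a supervisory tool might look something like the minimal sketch below. The monitored metrics, figures, and settings are hypothetical assumptions, and any flag would simply route the activity to a human supervisor for review.

```python
# Illustrative flagging of unusual activity for supervisory review; all data is hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-representative daily metrics: [num_trades, avg_order_size_usd, after_hours_messages]
activity = np.array([
    [12,   4_000,  0],
    [15,   3_500,  1],
    [11,   4_200,  0],
    [14,   3_900,  2],
    [13,   4_100,  1],
    [90, 250_000, 25],   # an outlier a supervisor may want to review
])

detector = IsolationForest(contamination=0.2, random_state=0).fit(activity)
flags = detector.predict(activity)                 # -1 = anomalous, 1 = normal

for row, flag in zip(activity, flags):
    status = "route to supervisor" if flag == -1 else "ok"
    print(f"{row} -> {status}")
```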
Investment advisers and asset managers
Similar to BDs, investment advisers and asset managers may use AI for client communication, regulatory compliance, and operational efficiency. Unlike BDs, however, which primarily make recommendations to clients or execute trades on their behalf, investment advisers and asset managers generally provide investment advisory services, including managing clients’ funds, often over a longer investment horizon, particularly in the private funds space such as private equity.
As often emphasized by the Securities and Exchange Commission (“SEC”), the investment adviser has a duty of care and loyalty, the former of which encompasses the duty to provide advice and monitoring. Depending on the firm and its capabilities, when providing advice and monitoring investments, the firm may also use AI to analyze both structured and unstructured data to improve internal research capabilities, including identifying trends and predicting market movements. For example, when analyzing a particular investment or particular company, in addition to analyzing traditional data sources such as regulatory filings, press releases, news reports, and other large text data, some firms analyze non-traditional data sources such as social media, satellite images, and weather forecasts as a barometer to extrapolate and gauge the economic activity associated with such investment.2 As a result, the manager is able to make informed investment decisions, including evaluating risk factors to mitigate potential losses, which is particularly crucial in the hedge fund industry where managers are analyzing vast volumes of financial data based on real-time information in deciding when to execute trades. Similarly, in the private equity space, the use of AI can assist fund managers in deal sourcing when screening thousands of investment opportunities and analyzing factors such as market trends, financial performance, and growth potential. Finally, as we noted in an earlier article, private equity firms and other market participants may use AI for conducting due diligence on portfolio companies as well as in M&A transactions.
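By way of a purely hypothetical illustration, the sketch below shows how unstructured text (such as news or social media posts) might be reduced to a simple signal and combined with a structured input before a human analyst makes the call. The word lists, headlines, growth figure, and weighting are assumptions for illustration only, not a description of any manager’s model.

```python
# Illustrative scoring of unstructured text alongside structured data; all inputs are hypothetical.
POSITIVE = {"beat", "growth", "record", "expands"}
NEGATIVE = {"miss", "layoffs", "recall", "lawsuit"}

def sentiment_score(text: str) -> int:
    # Count positive words minus negative words in the text.
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

headlines = [
    "Company beat earnings estimates on record growth",
    "Regulator opens lawsuit after product recall",
]
text_signal = sum(sentiment_score(h) for h in headlines)

revenue_growth = 0.08   # hypothetical structured input drawn from filings
composite = 0.7 * revenue_growth + 0.3 * (text_signal / max(len(headlines), 1))
print(f"text signal: {text_signal}, composite score: {composite:.2f}")  # feeds human review, not automated trading
```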
Lending institutions
Lending institutions, like BDs and asset managers, use AI for operational efficiency, risk management, and regulatory compliance, including AML and KYC compliance. However, the use of AI may offer unique opportunities to lending institutions when assessing the risk that prospective borrowers will default on loan repayment. Traditionally, when deciding whether to issue loans and at what interest rates, lending institutions use scoring models based on payment history, unpaid debts, outstanding loans, accounts, and other financial records. Recently, however, some financial companies have considered using AI to analyze nontraditional data (structured and unstructured) to assess creditworthiness. For example, some firms may use big data—including social media—to assess a potential borrower’s spending habits, lifestyle choices, and overall financial responsibility. Further, to the extent that such media posts and other public data indicate unusual spending patterns, employment instability, or signs of financial distress, the lending institution could treat such instances as potential risk factors in deciding whether to issue a loan.
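The sketch below is a purely hypothetical example of how such a blended scoring model might be structured. The features, figures, labels, and applicant are assumptions for illustration, and any output would be one input into a human-reviewed, fair-lending-compliant decision.

```python
# Illustrative credit-risk scoring that mixes traditional and nontraditional inputs; all data is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical applicant features:
# [payment_delinquencies, debt_to_income, months_at_current_job, negative_public_posts]
X = np.array([
    [0, 0.20, 48, 0],
    [3, 0.65,  6, 4],
    [1, 0.30, 30, 1],
    [4, 0.70,  3, 5],
    [0, 0.25, 60, 0],
    [2, 0.55, 10, 3],
])
y = np.array([0, 1, 0, 1, 0, 1])   # 1 = defaulted, 0 = repaid (hypothetical labels)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
applicant = np.array([[1, 0.40, 24, 2]])
default_probability = model.predict_proba(applicant)[0, 1]
print(f"estimated default probability: {default_probability:.2f}")  # one input to a human-reviewed decision
```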
Jephte Lanthia offers his insights on the practical applications, advantages, risks, and challenges of integrating AI into the legal field during Basswood Counsel’s official launch and the AI Integration in Legal Practice webinar.
Certain Risks Associated with the use of AI
While new discoveries and technologies often hold promise, they are usually accompanied by risks that must be considered in order to harness their power. AI does not escape that fate. If not properly addressed, the use of AI poses, among others, the following risks.
- Data and privacy: Financial institutions may obtain personal information while using AI to communicate with current and prospective clients. As such, if not safeguarded properly, client information may be vulnerable to misuse, particularly when accessible by third-party providers. Financial firms are subject to various data protection and security requirements designed to ensure the privacy and confidentiality of customer information and to protect against threats and unauthorized access to and use of data.
- Cybersecurity: Similarly, the use of AI may expose firms to cybersecurity threats and other related risks that may disrupt the firm’s activities, including endangering clients’ investments. Here again, financial firms are subject to various requirements to adopt and implement written cybersecurity policies and procedures, including business continuity plans.
- Bias: As noted above, AI is based on predetermined algorithms and data inputs. As such, bias may arise from the type or breadth of the data entered or the manner in which the model was trained. Such biases, if not monitored, may skew outcomes against a particular sector, asset class, or investment and, more importantly, against protected classes, potentially rendering investment or credit decisions in ways that illegally perpetuate those biases (an illustrative fairness check appears after this list).
- Conflicts of Interest: As noted above, some financial firms may use AI to segment clients based on shared characteristics and needs. While the practice of client segmentation is generally acceptable, possible conflict may arise if a firm inadvertently prioritizes high-value clients over smaller ones, potentially harming the smaller segment. Similarly, the possibility of conflict of interest may exist if the AI platform makes recommendations or provides investment advice that puts the firm’s interests ahead of those of the investors or clients, thereby violating the adviser’s duty of loyalty. BDs and investment advisers are subject to various rules under the securities laws, including Regulation Best Interest and the fiduciary standards established under the Investment Advisers Act of 1940 (and potentially the SEC’s proposed rule regarding use of predictive data analytics).
- Insider trading: As noted, the use of AI allows firms to analyze voluminous data and provide investment advice and recommendations, including through automated advisors such as robo-advisors. To the extent that a firm has access to material non-public information (“MNPI”) and has not established a proper information barrier between its investment banking activities and its other activities, the firm may inadvertently execute trades based on MNPI.
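As referenced in the bias discussion above, the following is a minimal, purely illustrative check of one commonly cited fairness signal, the “four-fifths” (80 percent) disparate-impact ratio, applied to hypothetical approval counts. Real fair-lending and bias testing is far more involved and fact-specific.

```python
# Illustrative disparate-impact ("four-fifths") screen on approval rates; group labels and counts are hypothetical.
approvals = {
    "group_a": (80, 100),   # (approved, total applicants)
    "group_b": (52, 100),
}

rates = {group: approved / total for group, (approved, total) in approvals.items()}
reference = max(rates.values())                     # compare each group against the highest approval rate

for group, rate in rates.items():
    ratio = rate / reference
    status = "potential disparate impact" if ratio < 0.8 else "within the 80% guideline"
    print(f"{group}: approval rate {rate:.0%}, ratio {ratio:.2f} -> {status}")
```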
Best practices
The number of firms using AI will continue to increase dramatically. And although the law is often slow to catch up with technology, regulators across various sectors have taken an interest in understanding the use of AI in the financial industry. As the regulations develop, firms should consider the following best practices as part of their compliance preparation.
- Know thyself: Firms should evaluate their business practices and capabilities to assess whether and how the use of AI will fit within their existing operations.
- To thine own self be true—well, at least be true to your clients, as far as the SEC is concerned: Without necessarily revealing the “secret sauce,” firms should properly disclose their use of AI, particularly the extent to which they rely on AI technology in performing their core activities. Be careful about making inflated claims regarding the use of AI in the firm’s operations. For example, the SEC has warned against “AI washing” and has brought enforcement actions under the Marketing Rule against two investment advisory firms for making false and misleading statements about the extent of their use of AI.
- Compliance: Establish written policies and procedures to (1) supervise the activities of employees and associated persons; (2) safeguard client information and cybersecurity; and (3) monitor possible conflicts of interest and bias (particularly with respect to lending activities).
- Test and document: Regularly test compliance policies and procedures to assess vulnerabilities and deficiencies. Document the results of the assessment and remedial measures taken to rectify the deficiencies.
- Back to basics: “A” is for “artificial”: Artificial intelligence, particularly Gen AI, has evolved significantly in its ability to mimic human intelligence. But machine-generated intelligence will not (or is unlikely to) replace human intelligence; it is meant to assist humans in decision-making. As such, firms should implement a layer of human review to ensure that AI-generated results are consistent with business goals, internal policies and procedures, and regulatory requirements. Despite their ability to outperform humans at certain tasks, AI models have been known to overreach and hallucinate.
At Basswood Counsel, we are always exploring ways to implement artificial intelligence in our practice to serve our clients more efficiently. Beyond increasing our internal efficiency, we continue to study the practical and regulatory landscapes surrounding the development of artificial intelligence so that we can better understand our clients’ respective business activities and the risks and regulations associated with their practices.