Responsible AI adoption at the world’s most systemically important bank
1 Background
Since the debut of OpenAI’s ChatGPT in November 2022, much of the world has been enamored with the astonishing and—sometimes—frightening capabilities of artificial intelligence. Public and private entities are racing to understand the technology’s implications on society and business.
On May 4, 2023, the Biden administration summoned executives from leading AI companies to the White House to discuss ways to mitigate potential harms of AI[1]. At the same time, the administration’s director of CISA (Jen Easterly) and director of cybersecurity at the NSA (Rob Joyce) both called AI a game-changing and era-defining technology posing new cyber challenges to the world[2]. Director Easterly testified to the House Homeland Security Committee, compared AI to be the same level of threat and challenge as China[3].
In the private sector, while many companies are rapidly adapting to the AI-enabled features and operating models, many others are starting to disclose in SEC filings that AI technology is a fundamental risk factor to their existing business models[4]. On May 2, 2023, the threat of ChatGPT to Chegg’s underlying education-technology business model led to a 38% selloff of their shares[5]. Clearly, the impact of AI technology will not be equal for all parties involved.
Timely with recent banking turmoil due to tightening credit conditions in the U.S. financial system, this document will attempt to analyze the benefits and risks from adopting AI at the world’s most systemically important bank, JPMorgan Chase.
2 JPMorgan Chase
Since the Global Financial Crisis of 2008, the G20's Financial Stability Board has published an annual ranking of systemically important banks. In 2022, JPMorgan Chase remained at the top of that list, alone in the highest capital buffer requirement category[6]. The global financial system is incredibly interconnected, with the banking sector working closely with global capital markets to provide credit and liquidity to the world. Because the world’s banks mostly operate as a fractional reserve banking system[7], large bank failures can have catastrophic contagion effects on the world.
At the end of March 31, 2023, JPMorgan Chase had $3.74 trillion in assets, including $1.13 trillion in issued loans. Its clients have $2.38 trillion of deposits at the bank[8]. By comparison, the United States Department of Defense had $3.52 trillion in assets at the end of Sept 2022[9].
JPMorgan Chase is already investing in leveraging AI technology similarly to ChatGPT, known as large language models (LLM’s)[10].
3 AI: Large language models
To unpack the AI buzzword, it is important to understand the intuition behind the technology powering AI chatbots. These chatbots are powered by one or more large language models (LLMs).
3.1 What are large language models?
Perhaps unsurprisingly, AI is not magic, it is mostly made of many linear calculations[11]. Large language model is a type of neural network machine learning model. It is trained to understand words and their linguistic structure (“language”). By converting words into numbers, it can reduce linguistic patterns into statistics by training on hundreds of billions of words from written documents. These models can capture dense information and linguistic structures in billions of parameters—hence they are “large” language models.
Most importantly, recently advances in LLMs rely primarily on a type of neural network architecture called “Transformers”, which was open sourced by Google in a 2017 paper called “Attention Is All You Need”[12]. For more details on how these models are trained, The Economist recently wrote a wonderful layperson’s explanation of large language models[13].
Intuitively, your interaction with an AI chatbot can be broken down into these steps:
1. User enters a message to the chatbot. [Input: Words]
2. The model converts the words into numbers, which represent the words. [Input: Words; Output: Matrix of numbers]
3. The model uses the matrix of numbers to sequentially predict the “next” most likely number(s) (which are just a representation of words). [Input: Numbers; Output: Numbers]
4. The model converts the predicted numbers back into words for the reply! [Input: Numbers; Output: Words]
3.2 Beyond Chatbots
While consumers are most familiar the chatbot application of language models, language models have been in your lives for many years. Google’s helpful predictive tools in Gmail, Google Docs, Google Sheets are all powered by language models in some form[14]. You simply had no chat interface to the model.
Beyond chatbots, LLMs have great generalizable attribute. Most recently, Columbia Business School professor Dan Wang[15] compared LLMs’ generalizability to electricity. It may be a horizontal layer of generalized compute that can enable many not-yet-created inventions—like how electricity led to the microprocessor.
An early glimpse into the generalizable capabilities of LLMs (beyond text-generation) can be demonstrated in Microsoft’s recent paper called HuggingGPT[16]. The paper demonstrates LLMs like GPT-4 can be used to create and orchestrate tasks to be done by other AI models and computers. Empowered, the model no longer have to solely rely on its own abilities.
4 Beneficial applications in banking
In the case of JPMorgan Chase, there are likely a long list of potential AI use-cases. Instead of an exhaustive review, we will simply focus on use-cases that are intuitively easier to understand.
4.1 Internal usage
Internal usage of AI at a firm like JPMorgan can be framed in two ways: directly applied on core business processes, or indirectly applied by employees responsible for business processes.
Here are examples of direct applications (processing previously too-complex documents):
1. Loan underwriting: LLMs can process complex loan/mortgage applications directly in the underwriting process.
2. Know your customer (KYC): LLMs can discover attributes and generate customer profiles from internal and external records to comply with KYC regulations.
Here are examples of indirect applications (empowering employees):
1. Summarization: Employees can use LLMs to generate regular internal updates and reports.
2. Data entry: Employees can use LLMs to automate their daily data-entry tasks.
4.2 Client service usage
Beyond internal usage, client services usage is probably the most exciting, as banks are primarily financial services firms. JPMorgan Chase can leverage LLMs create new products for its clients and interact with clients in whole new ways.
Examples of new products:
1. AI wealth management product that delivers portfolio-level market reports daily
2. AI private banker that can do complex tasks like managing mortgage application and closing processes (previously only available to HNW clients of the private bank)
Example of new ways to interact with clients:
1. Investment bankers can use 1,000s of previous pitchbooks to generate the most compelling pitchbook for the next debt/equity offering.
2. Client service can rely on AI chatbots as opposed to a call center agent.
5 Firm-level risks
While the potential of LLMs for JPMorgan Chase seem unlimited, there are also important risks to consider at the firm-level. Primarily, two types of risks are at the forefront: data security and global compliance.
5.1 Data security
Trust is the most important attribute for a bank, especially at the world’s most important systemic bank. Clients rely on the firm to, not only protect their personal data, but protect the data that underlies their cash deposits. In the digital age, cash deposits are simply bits of data that are stored in a very trusted and reliable database. If anything were to happen to that database, then everyone’s deposits may be at risk.
Therefore, banking data and client information at the firm are likely frequently targeted by adversaries. These adversaries may be seeking both financial and/or political gain. For a firm like JPMorgan, they likely face adversaries that have enormous capabilities, like state actors.
The usage of LLMs present a new challenge to maintain the firm’s data security standards. While LLMs seem powerful, they are also very nascent. Their technological vulnerabilities are only beginning to be studied by system experts[17].
Deployed pre-maturely without proper safeguards, LLMs can lead to poor performance (at best) or data loss and manipulation (at worst). For example:
1. Poor underwriting performance: It may be possible to manipulate loan documents to inject specific words that can “trick” a LLM processing loan applications to underwrite loans below acceptable credit guidelines.
2. Data loss and manipulation: Criminals may inject malicious code and prompts into known repositories of LLM training and input data. Then, the malicious code and prompt may lead to arbitrary code execution in key banking data processes. This may lead to data loss, manipulation, or theft.
5.2 Global compliance
Because JPMorgan is a global multi-national financial institution. In addition to United States banking regulations, it must comply with the regulations in all regions in which it operates. These regulations may be financial in nature, or they may be related to corporate, commercial, or information regulations.
Already a complex web of compliance requirements, JPMorgan had historically spent billions of dollars annually to keep up with requirements[18]. Most recently, it was hit with a $200 million dollar fine for lapse in compliance for keeping WhatsApp records[19]. Suffice to say, compliance is a major risk factor for the firm.
In the United States, there exist nearly zero guidelines or regulators for the usage of AI. The regulatory bodies are only beginning to catch on[20]. The firm faces regulatory risks of investing in AI technology that will later be banned or severely restricted.
Internationally, the differential regulatory landscape may make it difficult for the firm to keep up with the complexity of regulations. Some European authorities (i.e., Italy) are already whimsical in their authorities and approach[21].
6 Systemic risks
In addition to firm-level risks, JPMorgan will have to contend with systemic risks if it suffers failures or breaches because of its AI usage. With more than 10% of all American banking deposits, a loss of confidence in the firm can and will lead to widespread panic.
Bank runs are primarily caused by panic. A bank run on JPMorgan Chase will lead to bank runs on the worlds’ banks.
This will have unprecedented consequences, so severe the world may never adequately prepare for such an event. The designation of “too big to fail” banks are exactly because it is not possible to prepare for their failures.
7 Closing thoughts
The rapid adoption of AI technologies by private and public actors is likely to fundamentally change society and business. But its impacts will not be equal across the board.
As discussed, in the public sector, authorities continue to be ill-prepared to adequately balance the risk and reward calculus of AI on society. The White House is scrambling to catch up.
In the private sectors, winners (i.e., Microsoft) and losers (i.e., Chegg) are already apparent. There will be many more winners, and no doubt, many more losers.
Systemic actors like JPMorgan Chase have a responsibility to lead with example and caution. Financial services, an industry built on information technology, can easily be disrupted and lead to short-term chaos. Its leaders should balance its desire to continue to lead while its responsibility as the world’s most important bank.
[3] https://www.c-span.org/video/?527701-1/cyber-director-testifies-threats-landscape
[4] https://9fin.com/insights/chatgpt-is-the-new-risk-factor-buzzword
[9] https://comptroller.defense.gov/Portals/45/Documents/afr/fy2022/4-Financial_Section.pdf
[11] Linear calculations are like linear equations (i.e., y = 5x + 6; if x = 5, y = 31).
[12] https://arxiv.org/pdf/1706.03762.pdf
[14] https://ai.googleblog.com/2018/05/smart-compose-using-neural-networks-to.html
[15] https://www8.gsb.columbia.edu/cbs-directory/detail/djw2104
[16] https://github.com/microsoft/JARVIS
[17] https://simonwillison.net/2023/Apr/14/worst-that-can-happen/
[18] https://www.wsj.com/articles/SB10001424127887324755104579071304170686532