The launch of ChatGPT sent the world into a frenzy. Within five days, it had over a million users. Within two months, it broke records as the fastest-growing consumer application in history, reaching 100 million users. For perspective, it took TikTok nine months and Instagram two and a half years to reach that milestone.
Since its release, generative AI has been building to a fever pitch in nearly every sector, including finance. BloombergGPT was announced in late March 2023, and its capabilities include sentiment analysis, risk assessment, fraud detection and document classification, along with other financial NLP tasks.
Now that Pandora's box has been opened, there's no going back. We will see generative AI and LLMs take a more significant role in the financial sector, likely leading to investment experts shifting into new positions emphasizing prompt engineering and contextual analysis.
Since the change is inevitable, the logical next step is to debug the system, so to speak, by looking at the potential risks and considering ways to mitigate them.
Risk: Confirmation Bias and Over-reliance on Machine “Expertise”
Currently, the financial markets are experiencing serious swings that are leaving all but the most iron-stomached investors feeling motion sickness. Now let's consider what could happen if we add a substantial cohort of financial advisors who are heavily reliant on AI to give investment advice.
We all know AI is prone to bias; we also know that human nature makes us far more likely to put too much trust in machines, especially ones that appear highly intelligent. This tendency – called the “machine heuristic” – could all too easily spiral out of control if professionals start relying on AI predictions without checking the outputs against their own knowledge and experience.
The current iteration of ChatGPT essentially agrees with anything you say, so if people start asking ChatGPT about financial markets based on unclear, partial or false information, they’ll get answers that confirm their ideas, even if they’re wrong. It’s easy to see how this could lead to disaster, especially when human biases or a bit of lazy fact-checking are added to the mix.
Reward: Enhanced Efficiency, Productivity, Risk Management and Customer Satisfaction
Hedge funds like Citadel and banking monoliths like Morgan Stanley are already embracing this technology as a knowledge resource because it’s so skilled at completing routine tasks like data organization and risk assessment. When incorporated as a tool in an investment professional’s toolbox, it can help financial managers make better decisions in less time, freeing them up to do the expertise-driven parts of the job they enjoy most.
It’s also able to analyze financial data in real time, identify fraudulent transactions and take immediate action to prevent losses – catching fraud patterns that would be difficult or impossible to spot with traditional methods. Financial institutions in the U.S. alone lost over $4.5 billion to fraud in 2022, so this capability is a huge reward for banks.
Additionally, generative AI allows for smarter virtual assistants to provide personalized and efficient customer service 24/7. For instance, India’s Tata Mutual Fund partnered with conversational AI platform Haptik to create a chatbot to help customers with basic account queries and provide financial advice, leading to a 70% drop in call volume and better customer satisfaction.
Risk: Insufficient Compliance Regulations
It's hard to imagine, but GPT's incredible power is still in relative infancy. The future will undoubtedly see an iteration so sophisticated that we can't yet fully grasp its abilities. Because of this, the global community must establish strict, comprehensive regulatory frameworks that ensure its fair, ethical use. Otherwise, it is likely that we will see discriminatory practices arise as a result of biased data, whether intentional or unintentional.
Right now, consistent controls are sorely lacking, leaving companies and countries scrambling to decide how to handle this technology and how tight their restrictions should be. For instance, in sectors that deal with highly sensitive data, such as finance, healthcare and government, many organizations have outright banned any use of ChatGPT because they don't know how secure their data will be. Amazon, Verizon, JPMorgan Chase, Accenture and Goldman Sachs have all instituted such sweeping bans.
On a larger scale, countries are in the same regulatory limbo, with some, like Germany and Italy, issuing temporary bans until they can ensure the technology will not lead to GDPR violations. This is a serious concern for all EU members, especially in the wake of the data leaks OpenAI has already reported.
Unfortunately, regulators are already pretty far behind the curve when it comes to developing solid legal frameworks for this tech. Still, once they catch up, we can expect to see GPT take its place in every sector of the global community.
Reward: Better Regulation Means Faster Adoption
The lack of controls on GPT tech is a major bottleneck for more widespread adoption. Yes, it's a trendy novelty right now, but it can't be viewed as a serious part of any long-term corporate strategy without comprehensive rules and guidelines about its use.
Once the global community has developed and implemented appropriate frameworks, businesses will feel more comfortable investing in this technology, opening up a whole new wave of use cases across even the most cybersecurity-forward sectors like healthcare and government.
Risk: Flooding Finance Markets With Amateurs
Earlier, I mentioned the problem of generative AI only being able to give outputs based on its inputs. This problem has broader implications than allowing seasoned professionals to be a bit lazy. At least the industry veterans have the background and skills necessary to contextualize the data they're given, which is more than can be said for the amateurs who think they can masquerade as professional advisors by learning how to use ChatGPT.
There's nothing wrong with being a DIY investor, especially if you enjoy exploring financial markets and experimenting with risk at your own expense. The problem is when these relatively unskilled people with a bit of spare cash and a lot of free time decide they're more competent than they really are because of AI and decide to brand themselves as professionals. Their lack of real-world experience and formal training will likely cause a fair amount of short-term chaos and put extra stress on actual professionals.
Reward: ChatGPT Can Give Professionals a Long-Term Reputation Boost and Democratize Financial Advice
The good news here is that the real veterans only need to weather the inconvenience of a temporarily flooded market. People will quickly tire of hearing generic advice they could have read on Yahoo Finance, and the amateurs will drop out of the market as fast as they entered, leaving the seasoned advisors to pick up the now-advisorless clients willing to pay for expert help from someone who can deliver real results.
On the other side of the equation, ChatGPT can also play a role in closing the financial literacy gap and helping those without access to a professional advisor learn some basic strategies for optimizing their money. Its ability to generate useful, basic investment advice means it is now possible to start making financial education more accessible, even to those who have been previously unable to pay for professional financial services.
Lowering the barriers to better financial stability is an extremely important benefit of this technology because, currently, only one in three adults worldwide is financially literate.