Financial Advisors: Beware of FOMO When It Comes to Artificial Intelligence
The Fear of Missing Out, or FOMO, is an emotional response to the notion that the ‘next big thing’ will pass you by. Financial advisors are likely acquainted with this phenomenon, perhaps from having had to temper the enthusiasm of a client who wanted to get in on a hot tip about a can’t-miss investment.
A different form of FOMO is bubbling up around what some have described as the hottest thing since the discovery of fire. That would be generative artificial intelligence (AI), the technology behind ChatGPT and other rival applications. For financial advisors tempted to jump right into this technology, a healthy dose of caution is in order. There may be some ways to utilize the emerging capability, but not without a lot of human intervention.
Generative AI is capable of producing many kinds of content, including text, audio, images, and even synthetic data. Technology called large language models makes this possible by analyzing huge data sets for patterns and trends, then answering questions in human language. The possibilities are somewhat staggering, with productivity benefits a major consideration. A Fast Company article noted that “While increasing someone’s productivity by 20% is called ‘progress,’ increasing someone’s productivity tenfold—as generative tech can—is more of a revolution.”
Healthy Dose of Skepticism: Marketing and Compliance
It also seems that everywhere you turn, there’s an article or media report about generative AI, touting either the world-changing wonders it may bring or the world-changing destruction it may cause. The financial media contains pieces suggesting AI can be leveraged for marketing, with Wealth Management stating that “While still relatively new, AI-supported chatbots have the potential to transform how financial advisory firms generate business leads, engage with clients and turbocharge other marketing efforts.” However, generative AI applications often deliver content that is inaccurate or irrelevant. The same article advises that because AI-generated content often doesn’t resonate with its human audience, using a human editor can head off a number of problems.
Compliance teams may also find some uses for AI, though it’s more likely to help as a supplemental tool, as suggested by Financial Advisor IQ. These models can process and analyze vast amounts of data and provide insights that help firms identify potential compliance issues such as insider trading or unusual patterns of money movement. Using these models can save time and resources, allowing financial firms to focus on other critical areas. Despite these potential benefits, accuracy and reliability remain concerns: the models may generate false positives or miss genuine compliance issues. There is also a risk of bias, as these models are trained on historical data that may not be representative of current or future situations. Furthermore, there are concerns about the ethical implications of using AI in compliance functions, as it may lead to a lack of human oversight and decision-making.
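To make the idea of screening for “unusual patterns of money movement” concrete, here is a deliberately simple sketch. It is a toy statistical filter, not one of the large language model systems the article describes, and every account value and threshold in it is a hypothetical example; real compliance tools are far more sophisticated and still require the human review discussed above.

```python
# Toy illustration only (not a production compliance tool): flag transactions
# whose amounts deviate sharply from an account's historical norm -- the kind
# of "unusual pattern" screening an AI-assisted tool might automate at scale.
from statistics import mean, stdev

def flag_unusual(amounts, z_threshold=2.0):
    """Return indices of transaction amounts more than z_threshold
    standard deviations from the mean. The threshold is an arbitrary
    example value, not a recommendation."""
    if len(amounts) < 2:
        return []
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts) if abs(a - mu) / sigma > z_threshold]

# Hypothetical account: typical activity near $500, then one $25,000 transfer.
history = [480, 510, 495, 505, 490, 500, 515, 25000]
print(flag_unusual(history))  # the $25,000 outlier at index 7 is flagged
```

Even this toy shows why human oversight matters: a single large outlier inflates the standard deviation, so a naive threshold can mask exactly the activity a compliance officer would want to see.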
The use of generative AI technology also presents security risks to personal information, whether through the unintentional exposure of sensitive data or the malicious use of AI-generated data. Because AI algorithms can generate realistic images and text, their output can sometimes include personal or confidential information never intended for public viewing. As a result, financial advisors who use generative AI models to analyze and interpret data risk unintentionally exposing sensitive information.
Another security risk is the potential for malicious use of AI-generated data. Attackers could use AI-generated data to create convincing phishing emails or malware, making it more difficult for financial advisors to detect fraudulent activity. Furthermore, AI-generated data could be used to create fake identities or impersonate individuals, leading to potential identity theft and fraud.
Caution: The Antidote to FOMO
There are valid reasons for financial advisors to exercise caution when using large language models and generative AI in their practice. While these models offer potential to improve marketing, analyze large datasets for business opportunities, and assist with some compliance functions, they are not without risks. One concern is the potential for biases to be amplified or reinforced in these models, leading to unfair or inaccurate decision-making. 
Additionally, the accuracy of these models may be impacted by the quality of the data they are trained on, which can lead to errors. It is essential for financial advisors to carefully evaluate the benefits and risks of using large language models in their practice and ensure that they have appropriate safeguards in place to mitigate potential risks.