Artificial intelligence (AI) has revolutionized various industries, and the financial sector is no exception. However, Securities and Exchange Commission (SEC) Chairman Gary Gensler recently expressed his concerns about the potential risks that AI could pose to financial systems.
Gensler believes that as AI becomes more prevalent and influential, it could eventually contribute to a financial crisis. This is due to the powerful scale and network dynamics inherent in AI technology. He predicts that future business systems will rely heavily on a small number of foundational AI models. The drawback of this concentration is the risk of “herding,” where all companies depend on the same information. If that information turns out to be faulty or misleading, it could have a significant negative impact on the entire financial system.
To address these concerns, the SEC proposed a new rule last month requiring investment advisers to eliminate conflicts of interest in their use of AI technologies. Gensler emphasizes the importance of ensuring that AI algorithms serve investors’ interests rather than prioritizing the interests of advisers or brokers. The proposed rule aims to protect investors by curbing any potential conflicts embedded in AI models.
Gensler also underscores that investment advisers remain responsible for financial advice generated by AI. Even when the advice comes from an algorithm, advisers are accountable for its accuracy and suitability. They have a fiduciary duty to act in the best interests of their clients, and that obligation remains unchanged when AI is involved.
While AI brings undeniable advantages to the financial sector, Gensler’s warning highlights the need for robust regulation and accountability. Striking the right balance between innovation and risk minimization is crucial to harnessing the full potential of AI while safeguarding the stability of financial systems.
FAQ
What are the risks of AI in the financial sector?
AI in the financial sector carries the potential risks of concentration and herding. Concentration occurs when business systems rely heavily on a small number of foundational AI models, increasing the vulnerability of the entire financial system to failures or faulty information. Herding refers to the situation where all companies depend on the same AI-generated information, leading to widespread harm if that information proves to be inaccurate or flawed.
How is the SEC addressing the risks posed by AI in the financial sector?
The SEC has proposed a new rule that aims to eliminate conflicts of interest in the use of AI technologies by investment advisers. The rule seeks to ensure that AI algorithms prioritize the interests of investors rather than those of advisers or brokers. By curbing potential conflicts embedded in AI models, the SEC aims to protect investors and mitigate the risks associated with AI in the financial sector.
Who is responsible for the financial advice provided by AI?
Despite the algorithmic nature of AI, investment advisers bear the responsibility for the financial advice given by AI systems. They have a fiduciary duty to act in the best interests of their clients, ensuring the accuracy and suitability of the advice. This duty remains unchanged even when AI is involved, and investment advisers must be held accountable for any faulty guidance provided by AI algorithms.