AI in Capital Markets

Appropriate use of AI in capital markets can give investors greater access while preserving investor protection and market integrity. With this in mind, market regulators should remain informed and agile in this ever-developing landscape so that they can assess and balance the potential risks and benefits in light of the available facts and evidence.

The use of AI across industries is growing at a rapid pace, and regulators of financial services and capital markets must keep in touch with the ever-changing realities of those industries, as rapid AI development can quickly render regulations obsolete. Within capital markets specifically, AI holds significant potential to enhance the industry, but it also carries risks that can seriously harm investors and operators.

In March 2025, the International Organization of Securities Commissions (“IOSCO”) published a report based on a two-phased approach conducted through its Fintech Task Force (“FTF”). The report seeks to better understand the issues, risks, and challenges emerging from Artificial Intelligence (“AI”) technologies used in financial products and services, and how these affect investor protection, market integrity, and financial stability. Its aim is to develop a shared understanding of AI’s role in financial markets.

This published report was based on information primarily gathered through:

  1. IOSCO member and Self-Regulatory Organization (“SRO”) surveys;
  2. Affiliate Members Consultative Committee (AMCC); and
  3. Stakeholder Engagement Roundtables.

When asked about their most frequent uses of AI, survey participants responded that AI is most commonly used for internal productivity support, market analysis, and internal GPT tools. Less frequent applications include robo-advising and Net Asset Value calculation, indicating a focus on operational efficiency and analysis over direct client-facing financial functions.

Survey participants were also asked to identify the type of AI market participants are using in their respective jurisdictions for specific given uses. Results show that machine learning is the most widely applied AI technology across functions like algorithmic trading, AML, and investment research, while Large Language Models (LLMs) and generative AI are increasingly used for communications, trading insights, and internal productivity. Notably, federated learning is minimally used, appearing only in anti-money laundering applications.

IOSCO further conducted an in-depth analysis of the potential risks, issues, and challenges that have been amplified, and are being faced by the market and/or investors, since its 2021 report. These risks, issues, and challenges are not necessarily exclusive to AI, but they have become the most prevalent as a result of its use.

The identified risks were:

  • Malicious use - Respondents identified cybersecurity threats, fraud, market manipulation, data privacy breaches, and deepfakes as the most significant AI-related risks, particularly in areas like investment advising, algorithmic trading, and AML/CFT;
  • Cyber-attacks - AI systems can introduce or intensify cybersecurity threats, with financial firms facing added challenges;
  • Fraud, scams, and misinformation - Malicious use of GenAI tools can significantly lower the barrier for fraudsters by enabling more automated, cost-effective, and convincing scams, including deepfakes and misinformation that mimic real people or investment opportunities. This growing threat risks eroding investor trust in digital information and financial markets, while existing safeguards remain limited and often ineffective against increasingly sophisticated synthetic content;
  • AI Models and Data Considerations - Survey respondents identified model and data-related risks as significant threats to investor protection and financial markets, second only to malicious AI use. These risks span numerous financial applications, including trading, robo-advising, AML/CFT, and client communications, making them a critical focus area in AI risk assessment;
  • Concentration, outsourcing, and third-party dependency - Survey respondents highlighted concentration, outsourcing, and third-party dependencies as key AI-related risks, particularly in areas like algorithmic trading, robo-advising, and asset management. These risks are difficult to monitor, and the lack of transparency around the types and ownership of AI models adds to the challenge;
  • Interactions between Humans and AI - Survey respondents identified human-AI interaction risks such as lack of accountability, regulatory non-compliance, talent shortages, and overreliance on AI for decision-making, all of which can undermine effective oversight in financial markets.

IOSCO found that financial institutions are increasingly integrating AI governance either within their existing risk frameworks or through dedicated structures, often supported by independent audits and interdisciplinary teams.

As AI becomes more embedded across organizations, firms are expanding training, controls, and acceptable use policies, particularly in light of GenAI’s unique risks. Larger firms are aligning their AI strategies with principles like transparency, accountability, and human oversight, while also implementing controlled environments to safely experiment with emerging tools.

Several IOSCO Member respondents indicated that their jurisdictions have implemented or are developing bespoke laws and regulations to address the use of AI systems and related risks in capital markets. There are already countries that have introduced comprehensive frameworks or guidelines emphasizing principles such as fairness, safety, transparency, and accountability, with the EU AI Act specifically identifying high-risk financial use cases.

Other jurisdictions are actively proposing or considering national legislation to regulate AI across sectors, including finance, while some regulators have issued guidance documents on expected standards covering areas such as AI disclosure, cybersecurity, and data governance.

In addition to developing regulatory frameworks, IOSCO members have actively engaged with market participants to better understand and guide the use of AI in financial markets. Most regulators reported using tools such as surveys, innovation hubs, roundtables, and collaborative projects to assess AI's impact and risks. Notable initiatives include Singapore’s “Project MindForge,” which created a GenAI risk framework grounded in ethical principles, the Netherlands’ study on market manipulation using AI, and the UK’s FCA AI Lab, which fosters collaborative understanding and innovation.

Many regulators have also offered guidance and testing environments, though none have granted regulatory exemptions, emphasizing a supportive but cautious approach to AI integration.

Malta has taken a proactive approach to regulating AI in capital and investment markets by aligning with EU directives, particularly MiFID II and the upcoming EU AI Act. The Malta Financial Services Authority (MFSA) emphasizes that firms using AI in investment services must maintain robust governance, transparency, and client protection.

Thanks to Malta’s strong and collaborative regulatory bodies, such as the MFSA and the Malta Digital Innovation Authority (MDIA), the country supports innovation while effectively managing AI-related risks like algorithmic bias and data quality. These efforts ensure responsible AI adoption that upholds market integrity and investor trust.

The adoption of AI technologies in financial markets has accelerated, offering operational benefits but also introducing new and evolving risks that regulators must closely monitor. With the rapid advancements in AI, IOSCO is entering a second phase of work aimed at developing additional tools, recommendations, and good practices to help regulators and market participants adapt responsibly.

As part of this effort, IOSCO will continue to coordinate with international bodies, support investor education, strengthen supervisory cooperation, and engage with stakeholders across sectors, while welcoming public feedback to shape its ongoing initiatives.

For any further information or assistance, please contact us at info@gtg.com.mt

Author: Dr Neil Gauci

Disclaimer: This article is not intended to impart legal advice and readers are asked to seek verification of statements made before acting on them.