Dentons US LLP


Use of artificial intelligence in Canadian capital markets

November 20, 2024

On September 26, 2024, Dentons was pleased to host a webinar on the use of artificial intelligence (AI) in Canadian capital markets in consultation with the Alberta Securities Commission and Computershare. The webinar was moderated by Kate Stevens (Partner at Dentons) and included Riley Dearden (Partner at Dentons), Mohamed Zohiri (Legal Counsel and FinTech Advisor at the Alberta Securities Commission) and Tara Israelson (General Manager at Computershare). The topics discussed, which are summarized below, included:

  • Key applications of AI in capital markets;
  • Applications of AI in customer experience and shareholder services;
  • AI advancing efficiency and driving revenue growth in capital markets;
  • Regulatory considerations;
  • Current regulatory requirements;
  • Adoption and regulation of AI in Canada compared to other jurisdictions globally;
  • Unique challenges faced by traditional governance methods;
  • Use of AI in managing risks in shareholder services; and
  • Advice for market participants navigating the evolving AI landscape in capital markets.

Key applications of AI in capital markets:

AI is broadly defined as systems capable of performing tasks that typically require human intelligence. The application of AI in capital markets has the potential to assist and transform various aspects of the industry, including:

  1. Risk analysis and management: AI tools analyze historical data and current events to assess market volatility, creditworthiness, and potential downturns.
  2. Sentiment analysis: Issuers use AI to gauge market sentiment by analyzing public opinions from social media and news sources, helping them understand investor behaviour (a simplified sketch of this approach follows this list).
  3. Price forecasting: AI attempts to predict future asset prices by analyzing large datasets, aiding issuers in pricing and structuring offerings.
  4. Portfolio management: For investors, AI automates portfolio management, considering individual risk tolerances and investment goals to maximize returns.
  5. Algorithmic trading: Deep learning models enhance AI's ability to process data on stock movements and customer feedback, allowing for quicker, more informed trading decisions.
  6. Fraud detection and compliance: AI tools help detect market manipulation and ensure compliance more effectively.
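
The sentiment-analysis use case above (item 2) can be made concrete with a small example. The following is a minimal, purely illustrative Python sketch that scores news headlines against a tiny hand-built word list and aggregates an overall reading; the lexicon, headlines and scoring rule are invented for illustration, and production systems rely on trained language models and far richer data sources.

```python
# Purely illustrative: score news headlines against a tiny sentiment lexicon
# and aggregate an overall market-sentiment reading for an issuer.
# Real systems use trained language models, not hand-built word lists.

POSITIVE = {"beat", "growth", "record", "upgrade", "strong", "surge"}
NEGATIVE = {"miss", "downgrade", "lawsuit", "weak", "decline", "probe"}

def headline_score(headline: str) -> int:
    """+1 for each positive word, -1 for each negative word."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def market_sentiment(headlines: list[str]) -> float:
    """Average headline score; above zero leans positive, below zero negative."""
    if not headlines:
        return 0.0
    return sum(headline_score(h) for h in headlines) / len(headlines)

headlines = [
    "Issuer posts record quarterly growth and analyst upgrade",
    "Regulator opens probe into weak disclosure practices",
]
print(f"Aggregate sentiment: {market_sentiment(headlines):+.2f}")
```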

Applications of AI in customer experience and shareholder services:

AI summarization tools improve operational processes and elevate customer service quality, helping financial institutions deliver personalized support and maintain compliance. The greatest impact arises from integrating AI chatbots with other AI systems, creating a cohesive ecosystem that enhances operational efficiency and decision-making in the financial sector. This technology improves efficiency in several ways:

  1. Enhanced response efficiency: Faster and more accurate responses to client inquiries.
  2. Personalized client interactions: Tailored communication based on insights gleaned from interactions.
  3. Trend analysis and insight generation: Identifying patterns and insights from client communications.
  4. Training and quality assurance: Providing a basis for evaluating and improving service quality.
  5. Reduced cognitive load: Summarizing communications eases the burden on relationship managers, allowing them to focus on strategic tasks.

AI advancing efficiency and driving revenue growth in capital markets:

From a regulatory perspective, AI has significant potential to enhance efficiency and drive revenue growth in capital markets. Key use cases include:

  1. Compliance automation: AI can streamline regulatory processes like transaction monitoring and reporting, reducing errors and improving efficiency.
  2. Underwriting and risk assessment: AI models analyze large datasets, enhancing the accuracy and speed of underwriting, especially in insurance and credit.
  3. Predictive analytics in trading: AI is employed in algorithmic trading to analyze data, potentially improving trade timing and accuracy, although effectiveness relies on data quality and market conditions.
  4. Robo-advisors: AI-powered robo-advisors provide personalized investment advice, broadening access to financial services at lower costs for retail investors.
  5. Fraud detection: AI monitors transactions for anomalies, bolstering fraud detection and prevention by identifying suspicious activities in real time.

AI's ability to reduce operational costs and create personalized financial products can drive revenue growth. However, success depends on implementation quality and market adoption.

Regulatory considerations:

  1. Bias and fairness: AI systems may unintentionally produce biased outcomes, necessitating ongoing monitoring and mitigation efforts.
  2. Transparency and accountability: Ensuring that AI systems, particularly complex deep learning models, are transparent and auditable is crucial, especially for decisions impacting clients.
  3. Systemic risk: High-frequency trading using AI may amplify market volatility, requiring firms to implement safeguards and monitoring systems to maintain stability.

Regulatory frameworks are evolving to address these challenges, with a focus on explainability and responsible AI use.

Current regulatory requirements:

We are at a pivotal moment in the regulation of AI in capital markets, as regulatory frameworks are struggling to keep pace with technological advancements.

However, Bill C-27 has been proposed, which aims to establish the Artificial Intelligence and Data Act (AIDA), the first Canadian legislation specifically focused on AI. The bill seeks to ensure that AI systems are safe and non-discriminatory, that they handle personal information lawfully, and that businesses are held accountable for their use of AI.

Additionally, the Canadian Securities Administrators have released preliminary guidance highlighting the importance of governance, oversight, and accountability in AI deployment. Increased regulatory activity in this area is anticipated in the coming months.

Adoption and regulation of AI in Canada compared to other jurisdictions globally:

The regulation and adoption of AI in Canada is developing alongside global trends, and firms may need to comply with both local and international regulations depending on where they operate.

As noted, AIDA, part of Bill C-27, would provide a framework for the responsible use of AI in commercial activities, especially where it can significantly impact individuals, such as in credit decisions or financial advice. Global regulatory comparisons include:

  1. European Union (EU): The EU AI Act takes a risk-based approach, categorizing AI systems by their potential impact and imposing specific requirements on high-risk applications. It may serve as a model for other jurisdictions, much as the GDPR has in privacy regulation.
  2. United States: Regulation at the federal level is driven by an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which encourages secure and ethical AI practices. Further, the Securities and Exchange Commission has proposed a rule specific to the use of Predictive Data Analytics by broker-dealers and investment advisers with a view to eliminating certain conflicts of interest.
  3. Japan: Encourages AI innovation while managing risks related to data and privacy, taking a more flexible approach to support responsible development.

Unique challenges faced by traditional governance methods:

As AI becomes integral to market operations, traditional governance methods are facing unique challenges:

  1. Lack of transparency: AI systems, especially those using deep learning, make decisions in ways that are difficult to trace or understand, complicating governance frameworks that rely on human-readable processes.
  2. Privacy and data governance: AI's dependence on large datasets raises concerns about managing sensitive data, as traditional governance may not effectively address risks such as data breaches or privacy violations.
  3. Unconscious bias: AI systems can unintentionally perpetuate biases found in their training data, leading to unfair practices and discrimination in market operations.
  4. Accountability and liability: Determining responsibility for decisions made by AI can be challenging, as multiple stakeholders (developers, operators, users) may be involved, complicating traditional accountability structures.

To adapt to these challenges, organizations should:

  1. Enhance explainability and transparency: Governance frameworks must prioritize clear, traceable explanations for AI decisions, potentially incorporating AI auditing tools.
  2. Strengthen data governance: Establish stronger regulations on data collection, use, and protection, ensuring compliance with standards like GDPR.
  3. Promote algorithmic fairness: Implement policies for regular testing and validation of AI systems to identify and mitigate biases, using diverse data sources.
  4. Clarify accountability: Update legal frameworks to clearly define responsibility for AI systems, ensuring developers, operators, and firms are accountable for ethical AI design and implementation.

By addressing these areas, governance can better align with the evolving demands of AI in market operations.

Use of AI in managing risks in shareholder services:

AI is transforming risk management in shareholder services, particularly in fraud detection and compliance. It offers advanced, real-time analyses of transactions, allowing for the rapid identification of patterns and anomalies indicative of fraud. This capability includes detecting unusual transaction behaviours, discrepancies in documentation, and suspicious communication. AI's adaptive learning enhances its effectiveness over time, improving accuracy and reducing response times, which is crucial for maintaining customer trust and protecting assets.
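
As a purely illustrative sketch of the anomaly-detection approach described above, the example below flags outlying transactions with an Isolation Forest from scikit-learn. The synthetic features, contamination rate and model choice are assumptions made for illustration only and do not reflect Computershare's or any other provider's actual systems.

```python
# Illustrative only: flag unusual shareholder transactions with an
# Isolation Forest (scikit-learn). A generic sketch of the anomaly-detection
# idea, not any provider's actual fraud-detection system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transaction features: [amount, hours since the last transfer]
normal = rng.normal(loc=[500, 72], scale=[150, 24], size=(500, 2))
suspicious = np.array([[25_000, 0.5], [18_000, 1.0]])  # large, rapid transfers
transactions = np.vstack([normal, suspicious])

# 'contamination' is the assumed share of anomalous records in the data.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(transactions)

flags = model.predict(transactions)  # -1 = flagged as anomalous, 1 = normal
for amount, gap_hours in transactions[flags == -1]:
    print(f"Review transaction: amount={amount:,.0f}, gap since last transfer={gap_hours:.1f}h")
```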

Companies like Computershare are implementing new analytics and data warehousing tools to support this technology adoption. However, as AI tools improve fraud detection, fraudsters are also using AI to analyze data and uncover personal information (e.g., mother's maiden name, high school name) available online, making it easier to impersonate individuals and bypass security measures. To combat these challenges, individuals need to be aware of what personal information about them is available online, while service providers must invest in secure systems and enhanced security protocols to protect against fraud.

Advice for market participants navigating the evolving AI landscape in capital markets:

  1. Stay informed on regulations: Organizations must keep up with the ongoing adoption and upcoming regulations regarding AI. As new laws emerge, it's essential to ensure AI use is responsible, ethical, and compliant.
  2. Educate the board of directors: Boards should prioritize AI education to understand its applications, risks, and rewards. AI requires enterprise-wide oversight, not just IT department management.
  3. Seek professional guidance: Consult knowledgeable professionals when unsure about compliance with regulations. Engaging with regulators proactively can lead to collaborative solutions and insights.
  4. Be bold and optimistic: Embrace the potential of AI and be willing to take calculated risks.

By following these recommendations, market participants can better navigate the complexities of AI in the capital markets.

For more information on this topic, please contact the authors, Kate Stevens or Riley Dearden.

Watch the full webinar recording here.