RBC - Royal Bank of Canada

10/10/2024 | News release

AI in Canada: Leading Innovation, Lagging Adoption

In this episode of Disruptors x CDL: The Innovation Era, hosts John Stackhouse, Senior VP of RBC, and Sonia Sennik, CEO of Creative Destruction Lab, dive into one of the most transformative technologies of our time: Artificial Intelligence. With the potential to revolutionize industries from healthcare to energy, AI is reshaping the global economy - and Canada is both a leader in research and a laggard in adoption.

This week, Geoffrey Hinton, Professor at the University of Toronto, was awarded the Nobel Prize in Physics for his foundational research on artificial neural networks, work he has pursued at the University of Toronto since 1987.

Join John and Sonia as they discuss Canada's AI ecosystem and the country's challenges in keeping pace with global AI adoption. They're joined by three visionary guests: Sheldon Fernandez, CEO of Darwin AI, Kory Mathewson, Senior Research Scientist at Google DeepMind, and Gillian Hadfield, a Schmidt Sciences AI2050 Senior Fellow. Together, they explore the opportunities and barriers in AI adoption, the creative applications of AI, and the role Canada must play in the future of AI.

This episode is packed with insights for business leaders, policymakers, and anyone curious about how AI is changing our world. Whether you're an AI enthusiast or a skeptic, this episode will challenge your thinking on the role of technology in shaping the future.

Tune in to learn how AI is both an opportunity and a responsibility, and how Canada can lead the charge in this new innovation era.

Audio transcript

John Stackhouse: [00:00:00] Hi, it's John here and welcome to Disruptors and CDL: The Innovation Era.

I'm joined by my co-host, Sonia Sennik, who's CEO of Creative Destruction Lab. And in this special series, we'll be exploring the future of Canada's economy through the lens of cutting-edge technologies, and the visionaries who are at the forefront of these breakthroughs.

Sonia Sennik: Today's episode is all about artificial intelligence, a technology that's shaping everything from healthcare, to finance, to energy.

AI has the potential to transform our economy and redefine Canada's role on the global stage. But there are also risks and challenges that come with these advancements.

John Stackhouse: If there's one thing we've learned over the past year, it's that AI is no longer just the stuff of Silicon Valley. It's here, it's now, and it's transforming Canadian industries at a pace we haven't seen before.

I was actually just in Silicon Valley. And the momentum has not let up. In fact, just a year ago when I was [00:01:00] last there, the innovations that many of the companies thought might take a few years are now already here. But another thing I heard in the Valley was that Canadians are not moving at the same pace as many other leading countries.

So how do we change this?

Sonia Sennik: Maybe we can first start with assessing how are we doing in Canada. We have world class research and academic excellence. Canada is home to several prominent AI research hubs, notably MILA, Montreal Institute for Learning Algorithms, Vector Institute in Toronto, and the Alberta Machine Intelligence Institute, or Amii, in Edmonton.

The University of Toronto is another key institution, with Geoffrey Hinton's contributions to AI significantly impacting the global development of deep learning technologies. At Creative Destruction Lab, we launched our AI-focused stream in 2015. We have since seen a huge expansion of AI into every industry and technology area.

But John, as we've talked about before, Canada currently has a productivity challenge. We are a leader in AI research, but a laggard in AI adoption. It is really important for Canada to enable investment in new technologies to maintain [00:02:00] global competitiveness and to improve things like the efficiency of our complicated project approval systems or reduce the complexity of the tax system, for example.

John Stackhouse: So Sonia, that's a lot to figure out in this episode. And fortunately, we're joined by three remarkable leaders. First up will be Sheldon Fernandez. He's the CEO of DarwinAI, a company that's pioneering AI driven solutions for a range of industries. Darwin actually came out of the Creative Destruction Lab in its AI stream in 2017.

And Sheldon is going to give us an inside look at how AI is already reshaping Canadian business and what the future might hold.

Sonia Sennik: We've also got Kory Mathewson, a senior research scientist at Google DeepMind, whose groundbreaking work is pushing the boundaries of what AI can do. Kory's at the forefront of exploring how AI can augment human creativity and decision making.

And I can't wait to hear his perspective on where this technology is taking us.

John Stackhouse: And rounding out our discussion is Gillian Hadfield, who's a professor at the University of Toronto and at Johns Hopkins University in the United [00:03:00] States. She's also been named a Schmidt Sciences AI 2050 Senior Fellow, which in AI circles is a really big deal.

Sonia Sennik: Let's dive in.

John Stackhouse: Sheldon, welcome to Disruptors.

Sheldon Fernandez: Thank you for having me.

Sonia Sennik: So Sheldon, can you tell us a bit about your background and your work at Darwin?

Sheldon Fernandez: So I am first a reluctant entrepreneur and then a serial entrepreneur, if that makes any sense. I went to the University of Waterloo and had the unique privilege as a co op student of doing a couple of my work terms in the United States in Silicon Valley in New York.

And that's where I would say the entrepreneurial spirit of our neighbors down South really made an imprint on me. So I've actually started two companies right out of school. I started a company called Infusion with some classmates and some partners in the US. We grew that from the original six people to a company of 700.

And we were acquired in 2017 by a company called Avanade. They're co-owned by Microsoft and Accenture. My plan was to take a break after [00:04:00] that 17-year journey and watch hockey and eat Tim Hortons and do all the wonderful things that I think Canadians do when they have free time. But of course, the artificial intelligence revolution was happening around that time.

And through a series of chance events, I met a really gifted academic team at the University of Waterloo and just couldn't walk away from this team and the potential of this IP. So in 2017, we started Darwin AI, and it really got going in 2018. About four months after starting Darwin AI, my wife got pregnant with our first child.

So I often joke that for the last five and a half years, I've really had two startups: an artificial intelligence startup called Darwin AI and a biological intelligence startup called Max Fernandez, and they are magical and exhausting in equal measure.

John Stackhouse: Sheldon, I'm sure there's a Darwinian joke in there about survival of the fittest, but maybe we'll leave that to later in the conversation.

One of the things I find fascinating about your background is just the interdisciplinary nature of it. You've studied neuroscience and metaethics. I'm not even sure what metaethics is, but it sounds impressive. [00:05:00] And you've pursued creative writing at Oxford, no less. I'm curious how the combination of those fields, and all that it does to your brain, helps you, and therefore might signal how it can help all of us in this new age of AI.

Sheldon Fernandez: Yeah, I did engineering as an undergrad. Then I did a master's degree in theology and philosophy where my thesis was on basically asking the question, what could the latest neuroscience tell us about our foundational morality? And I did it purely out of interest, not knowing that Almost 10 years later, neuroscience would directly connect to neural networks because of the conceptual overlap.

And of course, the question of morality would become very important as we think about the ethical implications of this pervasive and ubiquitous technology. So I think it just gave me a very holistic appreciation for how technology is not just in a box and how it touches literally so many different things.

And certainly we brought that holistic perspective to Darwin when we thought about the implications of our technology, societal and otherwise.

Sonia Sennik: And Sheldon, [00:06:00] just on the topic of deep tech and AI, Many companies see challenges and barriers on new technology adoption. What are the biggest barriers you're seeing right now for AI adoption and how would you recommend companies overcome them?

Sheldon Fernandez: I think one of them is just being overwhelmed with the different applications of this really powerful technology and the many areas it can be used in your business. And what I often say is when we're advising companies, start with low hanging fruit, start with obvious processes where, basic AI can help you, where you can measure the uptick in performance, where you have a lot of data and use that project as a means to familiarize yourself with this world a bit before tackling more ambitious undertakings.

So it's a combination of fear, conceptual overload, and a little bit of the new-technology syndrome that we see with any transformative technology.

John Stackhouse: Sheldon, you deal with a lot of companies. When I talk to Canadian business people right across the economy, I [00:07:00] sense there's still a bit of hesitation around AI.

Maybe that's wrong, but I'm curious what your perspective is and what you read into a bit of that Canadian mindset when it comes to this new opportunity.

Sheldon Fernandez: Yeah, I can tell you that of the dozen or so clients that we had at Darwin, all but one were outside Canada, which is a real shame to me as a proud Canadian, there's so much innovation around the fundamental IP and technology and deep tech that's happening here, Vector, University of Waterloo, the Creative Destruction Lab.

I mean, we were a child of that program, yet the corporate clients we engaged were just very risk averse. It was much easier to engage in an experimental project with companies and partners in the United States and Europe and even Asia than here in Canada. So that's a culture that we've known about, and I dealt with it in my previous business, but it is something that I think is limiting the aggressive adoption of AI transformation in this country.

John Stackhouse: When you think of those other clients, the non Canadians who are embracing this, what [00:08:00] kind of questions are they asking of you and of the technology that Canadians should be asking?

Sheldon Fernandez: I think they're asking, first of all, what are the areas where this can transform our business? But one question we get a lot of is what are our competitors doing?

Or what are the small startups doing that could be threats to our business? To give you an example, financial institutions in the United States are looking very aggressively at fraud detection, and it was harder to get that conversation going with the companies that we advised here.

John Stackhouse: When you look across the Canadian economy, Sheldon, where do you think the biggest opportunities are?

Sheldon Fernandez: One of the things I've noticed is that there are some seemingly mundane industries that really have been untouched by AI. I think of natural resources, I think of mining, I think of things that we traditionally have strengths in, where there hasn't been a lot of injection of this technology because it doesn't seem exciting at the beginning, right?

So I think it's those kinds of underserved areas, where we have traditional [00:09:00] industrial strengths, where there's a real opportunity for Canada to show leadership.

Sonia Sennik: How is Canada doing? The hook for our conversation here on AI is, what does Canada need to do to stay ahead in the global AI race? What would be your grade or estimation of how Canada is performing right now?

Sheldon Fernandez: I don't want to be too critical of my own country because I'm such a proud Canadian, so I would give us like a B minus. I think there's so much wonderful innovation that happens here. I see it at the University of Waterloo where I'm a regular guest and speaker. I see it, of course, at CDL, I see it at Vector.

So on the innovation side, we're not doing too badly. Where there's a lot of room for improvement is with the larger-scale corporate adoption of AI. The other advice I give a lot of entrepreneurs who are starting the journey and are younger than myself is that the knock on Canadian entrepreneurs is that we don't think ambitiously enough.

Many of the Canadian entrepreneurs that I speak to would be comfortable with a 250 million dollar market. In the United States, it's: I [00:10:00] want a 10 billion dollar market, I want to change the world. I would encourage everybody listening to this, including entrepreneurs, to think bigger. This is going to be a transformative technology, without question.

It reminds me a little bit of the internet in the early 90s when we didn't know that this rudimentary technology that was sending signals over telephone lines would completely transform the world. We had no idea. The same thing is true of artificial intelligence. And so there's an incredible opportunity here that I would really urge a lot of my fellow Canadians to take advantage of.

Sonia Sennik: I love that Sheldon. Think bigger, get creative. Thank you so much for your time and for joining us.

Sheldon Fernandez: My pleasure. Thank you for having me.

John Stackhouse: Kory, welcome to Disruptors.

Kory Mathewson: Happy to be here. Thanks for having me.

John Stackhouse: Kory, I was just saying in the introduction that I was in Silicon Valley recently and got to go back to the Googleplex, hadn't been there in a few years, and saw for the first time the new gigantic building dedicated to DeepMind. It's stunning architecturally, [00:11:00] and pardon the expression, it was mind blowing. It was just a big statement on the ambition when it comes to AI that we see from Google, but also from lots of other companies in the Valley. And I wonder if we can start off with your perspective, because you get to see the world through DeepMind: the world as it is today, but also the world as it is becoming. Where does Canada fit into that picture?

And how do Canadians need to see ourselves in that bigger global picture?

Kory Mathewson: There's a lot to unpack there. So first off, Canada occupies a pretty unique place on the world stage when it comes to the leadership in AI. We have an incredibly open and collaborative ecosystem with layers of academic, non profit, and private sector cooperations, and that's really helped to position Canada as a leading player in this space.

We were, I think, the first nation to have a Pan-Canadian AI Strategy. There has been significant investment at the federal level for a long time: the National Research Council, a lot of the tech incubation, a lot of the universities, a lot of great [00:12:00] faculty and students that come out of Canada, and some fantastic representation in Canada in the private sector, including myself and the amazing colleagues I have at Google DeepMind in Montreal and in Toronto. I think there are a lot of exciting people working in it, a lot of great trainees, and a great system of development in Canada. And that has really driven a lot of knowledge sharing and a lot of innovation.

But as you say, there's always more to do.

Sonia Sennik: Kory, your research focuses on the human machine interaction, most recently in domains of interactive conversational systems or creative applications of AI. As I like to call it, whose line of code is it anyway? So what potential do you see for AI to enhance human creativity further?

Kory Mathewson: I love that. So Colin Mockery loves that bit. I told it to him and he said he was very fond of it. He's seen a lot of my technology brought to the stage and is pretty excited about the I love it. possibilities for these technologies to challenge creative people on the theater stage. So yeah, I like to work at the [00:13:00] intersection of AI and artistic creativity.

I think that there's so much that can be done by listening to, engaging with, and empathizing with creative professionals, because they're really the people that are going to push these models past the frontier. And their opinions are important because we have to do this together. It's really critical that we do this together, because some of these people will be the ones most impacted by the technologies that we're building, right? They're going to make the next era of art. Generative AI is the storytelling technology of our generation, and that means that how we build it has to be done in collaboration with, and alongside, creative people and technical people.

John Stackhouse: How do you apply that to businesses, and I'm thinking of healthcare as well as education, not just big corporations. How do you apply that idea of a technology that is enabling the storytelling of our times? That's a poetic description.

Kory Mathewson: I aim for a bit of a poetic description every once in a while, but [00:14:00] humanity is built around storytelling.

knowledge transfer and a lot of what we would describe as culture is built around storytelling and the way that we share our principles and values comes down to the way that we can communicate and we'll leverage the technology of our time to do that sort of communication. So I'm thinking about Educators, I think about students also, every single student in post secondary education, the TAs, the professors that are trying to communicate these classes and curriculums, they have the capacity to leverage these generative AI technologies so that they can communicate their messages more effectively, but also personalize that learning experience, that learning journey.

In the healthcare industry, there's a lot that Google's doing to understand the medical research domains and assist medical research. I think personalization has a real place here. Everyone is different. And with these powerful tools, we have the capacity to appreciate that context and appreciate the individual, and to really build the future alongside [00:15:00] them as they build the best version of themselves.

Sonia Sennik: Thinking about the creative community, Canada has some incredible musicians. Some of the work you're focused on right now is transforming the future of music creation. And I'd love for you to share a bit about Dream Track or any of the other music AI tools that you're starting to develop. How do they work and how do creators leverage them in their music or art creation process?

Kory Mathewson: Dream Track is powered by our most advanced music generation model. It's called Lyria and Dream Track in YouTube shorts is a technology that's going to allow creators to express themselves in new and interesting and different ways. It's a model. It's a generative AI model that's been trained on a whole bunch of information to generate new original content.

And that content can either be used directly in Dream Track or in a lot of our different Music AI Sandbox tools. And that can be used directly in your own creative workflows if you want to come up with new idea generation, smash two ideas together, play something [00:16:00] on one instrument and hear it in a different voice.

We've seen artists like Wyclef Jean, Dan Deacon, and many more ask for certain things that are doable, and we implement them, and then we see the fruit of the collision of the arts and the technology.

John Stackhouse: Kory, that's so exciting to hear. When I talk to a lot of companies, though, they're doing some pretty rudimentary things with AI.

How do organizations need to think about these big ambitions while also focusing on the plumbing, and using AI as kind of an enhancement tool for efficiency purposes?

Kory Mathewson: So this, I think, will happen slowly, and then progressively quicker, and quicker, and then rather quickly. Google Canada put out an economic report just recently, and it said that this sort of technology has an amazing potential to boost Canada's economy.

230 billion, save the average Canadian worker 175 hours a year. That's not [00:17:00] nothing. You think about a lot of the jobs that are being done and how those jobs can be done more efficiently, more effectively with these generative AI tools. Obviously, there's going to be a lot of re skilling, on ramping that's going to take time and energy and effort.

investment, but the payoff, the dividend that comes once you build out the workflow and how the workflow can be augmented by generative AI is starting to be measured and will pay off as the models get better, which is a bet I'm willing to make.

John Stackhouse: I'm glad you raised the point about skills and we all need to think pretty aggressively and ambitiously about the skills we need.

For that augmentation, as you described it, we also need to think about the talent that is critical to all of our organizations as well as our country. We've seen a lot of that talent leave. It goes to Silicon Valley. How do we do better? Kory, keeping our best talent. Especially when it comes to AI.

Kory Mathewson: So venture investment from previous founders is critical.

Now I do a lot of work with the Creative Destruction Lab, [00:18:00] adjudicating the science behind early-stage startups, not just in Canada, but companies that come to Canada to connect with our venture ecosystem and our science ecosystem. So I think it's not just a matter of reducing the brain drain, but also attracting talent here and saying, hey, we have an incredible amount of scientific expertise and mentorship for these early-stage startups, and that can be fostered through the acceleration of the Creative Destruction Lab, through incubation at Mila or the Vector Institute or the Alberta Machine Intelligence Institute. These ecosystems are inviting, and they want to support people. Canada is a great place to build a company.

Canada is a great place to do your early-stage research, to be a graduate student. There's an incredible amount of funding available provincially, federally, and at each of these institutions. So, would I like to see more? For sure. Am I happy to mentor early-stage researchers and students so that they do consider Canada as a place to build what they want to build?

For sure.

Sonia Sennik: Thank you so much, Kory. This was fantastic. Thanks for your time.

Kory Mathewson: Always a pleasure to talk with you, Sonia, and nice to meet you and to chat with you, John. [00:19:00] Thanks, Kory.

Sonia Sennik: Gillian, welcome to the podcast.

Gillian Hadfield: Hey, very glad to be here.

Sonia Sennik: So, Gillian, we met earlier this year, just before you started as a Schmidt Sciences AI 2050 Senior Fellow. Can you tell us a bit more about your work in this role so far?

Gillian Hadfield: Sure. The theme that's informing that work is how can we build AI systems that what I call normatively competent and how can we build the normative infrastructure for AI alignment?

So this is a different take on how you get computers, AI, to do what you want them to do. One approach, which is the dominant approach today, is: well, just figure out what values and norms you want them to follow and stuff them in there. And I think it's not going to work very well. It's going to be very brittle, not very adaptive.

So I'm working with some fantastic computer scientists on how do you build AI systems that can go into a context or a setting and figure out what they [00:20:00] should be doing in this environment.

Sonia Sennik: The first way you were talking about is very policy driven or rules based, and what you're researching is these AI models that need to do off policy learning to have the ability to innovate on policy.

How do those two things work?

Gillian Hadfield: So I think of normative competence as what describes humans. We don't actually just come like pre programmed with a bunch of rules and norms that we should follow. You could drop any of us down in an unusual new environment, and we could kind of figure out, oh, I know there must be rules around here, and I know what to look at to go and figure out what the rules are, and I know what's expected of me, both in terms of complying with rules, and also helping to enforce rules.

If you're going to get AI systems that follow the rules, it's much more complex than just saying, well, give them the rules and they'll follow the rules. Rules are actually really complex things, and we use all kinds of institutions and ways of signaling and so on to figure out what is the right thing to do here.

Sonia Sennik: So given your experience on advising governments and tech companies on [00:21:00] AI policy, how do you think Canada can develop a regulatory framework that promotes both trust and innovation at the same time?

Gillian Hadfield: A really important place to start is to recognize that we don't really even have the basic legal and regulatory infrastructure in place that would allow us to figure out when and how we want to regulate what AI does.

So everybody's very focused on: we should come up with the rules and standards of behavior. But I think about things like, well, we need to figure out how we're going to register AI systems so that we can learn about them and keep track of them. We need to figure out how we give them durable IDs. Maybe we need to be figuring out how to make them directly accountable, just like we make corporations, which are also artificial entities, directly accountable through the legal system.

Sonia Sennik: So it sounds like Gillian, you're thinking on very much a systems level for these types of innovations and that the recommendation for Canada is to think system wide. So how [00:22:00] can these regulatory or legal frameworks remain flexible? Because it sounds like they need to have some structure and standards, but also flexible at the same time, just like building a building or a bridge, it needs to stay standing tall, but it also needs to survive the weather and sway.

Yes. So what are some recommendations or what are you seeing that's proving effective

Gillian Hadfield: I think the most important move we need to make to be able to get that balance between stability and adaptability between, having reasonable protections against harms, but also allowing innovation and evolution is an idea that I've put forward with Jack Clark called regulatory markets.

Rather than government coming up with very specific requirements (here's the kind of data you should train on, here are the particular tests you need to be able to pass), government should be setting what the outcomes are. How safe does that autonomous vehicle have to be? How fair does your credit approval algorithm have to be?

Then you recruit the private sector to [00:23:00] invest and innovate in designing the systems that will implement those outcomes.

Sonia Sennik: And what are you seeing, Gillian, right now as a barrier for that to flow?

Gillian Hadfield: So I've actually been talking about this idea for close to eight or 10 years. So it's been a long slog to get people to not think this is crazy or this is like turning it over to corporations to regulate.

But we are actually starting to see much more uptake, because I think it's becoming clear to people that governments are not going to have the capacity to respond fast enough, given the level of complexity and technological change here. So I will say I'm actually much more optimistic that this is going to happen than I was even three or four years ago.

I think the release of ChatGPT and the language model explosion has gotten everybody to realize, oh my gosh, we may need to be doing things quite differently. But the kind of obstacle that's there is that it is a really different way of thinking about regulating, and you have [00:24:00] to get past this idea that it's handing it over to the private sector.

Now, I do point out, we've already handed it over to the private sector, because we actually don't have very much AI regulation in place and companies are regulating themselves. So we need to do things like: let's pick some domains and say, okay, how would we know that we're achieving the outcomes we want? Let's identify some companies, startups that are already in the space or could soon be in the space, and license them to be a regulator in this domain. Let's think about how we create the incentive to adopt that regulatory system, like by giving a safe harbor that says you can't get sued in tort law if you've adopted this regulatory regime: like, oh, your algorithm started discriminating against a group of people, or it started behaving in ways that were unpredictable, in the medical space or something like that. So I think I'd really be focused on [00:25:00] helping those companies get a market foothold.

So they've got serious demand and I think governments could do that in a really straightforward way.

Sonia Sennik: There's a lot of companies and enterprises that struggle with adopting new technologies like AI. And we're seeing Canada right now, extremely low on our productivity and AI adoption comparative to the G7 or G20.

What are some of the barriers, Gillian, that we're experiencing, and what are some recommendations on how companies can overcome them?

Gillian Hadfield: I think there are barriers that are coming from risk aversion in companies about, could we get sued? We don't know. I've been reading scary stories about chatbots that go crazy or predictions about what might happen with AI.

So I actually think a lot of it is the lack of sensible regulatory infrastructure that's attuned to the current risks. And, I mean, there are current risks, but it's nothing we couldn't handle. But I think it's terra incognita for a lot of the people who are managing [00:26:00] risk and liability and compliance in organizations.

So I do think that building that regulatory infrastructure, and trying to keep it simple, is key. That's why I keep coming back to the safe harbor idea, right? The idea that, oh, we can actually put a straight line through our risk calculation because somebody said: take these steps, or enter into a regulatory contract with this organization, and you won't face those kinds of downsides.

There are economic barriers, but this is certainly a factor.

Sonia Sennik: So de risking is critical. Thank you so much, Gillian, for your time. And thank you for joining the podcast.

Gillian Hadfield: Yeah, happy to, Sonia. Thank you.

John Stackhouse: That was a fascinating conversation, Sonia. And a really good reminder of the complexities of AI. We can all get excited about the technology and the opportunity for innovation.

As we should, but there's so many more considerations that we're hearing from right around the world.

Sonia Sennik: And what I love is talking to folks who are expanding AI into [00:27:00] areas that you wouldn't typically think about, like Kory talking about the creation of music and the creative process, augmenting that with new innovative tools, and how essential it is to hear back from that community to shape those tools and build things that they're excited to use.

John Stackhouse: That's such an important word, tool. AI is a tool, and I fear that too much of the public debate around it is ascribing superpowers to AI that we may see one day, but right now it's a tool that's in the hands of humanity and we can all use these tools, whether we're music creators or code writers to improve what we do. And how we do it.

Sonia Sennik: To quote Ani DiFranco, every tool is a weapon if you hold it right. And so now is the time to figure out how do we want to manage this tool? How do we want to shape the way we use it, the way we integrate it into our lives? And that's why it was so inspiring to hear Gillian talk about the work she's doing at the Schmidt Foundation to have a global conversation about how this evolves.

This is a moment in time as well, John, where I feel [00:28:00] companies and people can really have profound influence on how we use this technology. So now's the time to get involved, test it out, try it in your company, in your industry, in your life, and make it work for you.

John Stackhouse: What a great message to end the show on.

Make it work for you. Sonia, it's been great sharing the episode with you.

Sonia Sennik: Always a pleasure, John. And a special thank you to Sheldon, Kory, and Gillian for sharing their insights.

John Stackhouse: This has been Disruptors, an RBC podcast. And if you liked what you heard, be sure to subscribe, leave a review, and tell us what topics you want us to explore next.

I'm John Stackhouse.

Sonia Sennik: And I'm Sonia Sennik.

John Stackhouse: Thanks for listening.

Disclaimer

This article is intended as general information only and is not to be relied upon as constituting legal, financial or other professional advice. A professional advisor should be consulted regarding your specific situation. Information presented is believed to be factual and up-to-date but we do not guarantee its accuracy and it should not be regarded as a complete analysis of the subjects discussed. All expressions of opinion reflect the judgment of the authors as of the date of publication and are subject to change. No endorsement of any third parties or their advice, opinions, information, products or services is expressly given or implied by Royal Bank of Canada or any of its affiliates.