
As AI transforms medical education and care, leaders must adapt but be wary of pitfalls


Microsoft executive James Weinstein, DO, tells Learn Serve Lead attendees how the technology is transforming the medical field but shares concerns about transparency and equity.

James Weinstein, DO, MS, talks about the potential of AI to fundamentally change medical education and practice, and the challenges it presents, at Learn Serve Lead: The AAMC Annual Meeting on Nov. 10 in Atlanta.

Credit: Kaveh Sardari

By Patrick Boyle, Senior Staff Writer
Nov. 10, 2024

James Weinstein, DO, MS, senior vice president at Microsoft Health, is both "optimistic" and "concerned" about the potential for artificial intelligence (AI) tools to transform medical care, education, and research.

"I feel very optimistic that artificial intelligence will have a key role to help us transform systems that we all want to make better," Weinstein said during a plenary session on the second day of Learn Serve Lead: The AAMC Annual Meeting on Nov. 10 in Atlanta.

But, the spinal surgeon and researcher added, "I'm concerned" about whether people and the institutions they run will be able to keep up with the technology in order to fully understand and govern how it works.

Weinstein - former CEO and president of Dartmouth Health and former director of the Dartmouth Institute, both in New Hampshire - focuses at Microsoft on innovation and health equity. Microsoft's role in AI includes well-known ventures (such as its collaboration with OpenAI) and an array of AI health tools focused on population health, evidence, imaging analytics, and genomics.

In his presentation and subsequent question-and-answer discussion with Alison J. Whelan, MD, AAMC chief academic officer, Weinstein stressed that the promise of AI is not just in technology substitution - replacing one technology with one that performs a task better - but in ecosystem disruption, whereby systems are fundamentally changed to improve outcomes.

"What we really want is to redefine the system," he said.

For example, AI can consume and analyze data about patients, research outcomes, and patient populations at speeds and with precision far beyond human capacity, thus improving diagnosis and treatment. One result he envisions is building AI networks to bring health consultation and services to people in underserved areas.

"Can AI be used to improve outcomes in well-being across society? I think yes," Weinstein said. "Can we integrate AI into education and care processes? Yes."

But he issued cautions.

"Can we ensure no harm? I'm not sure. Can it be trusted and be equitable and fair in the eyes of those impacted? That really depends on us."

Among the key issues, he said, are ensuring transparency about how the systems collect and share patient data; eliminating bias in the collection and use of data; and fully informing patients about how the systems work so that they can make informed choices about participating in processes driven by AI.

With AI progressing so quickly, Weinstein said, "we need to have oversight" of the tools. "But who's going to do the oversight? Governments are going to struggle with this. [Do] we have enough knowledge to actually do oversight?"

And while such issues remain to be resolved, Weinstein stressed that doctors, teachers, and administrators need to understand how AI works and how it is being used now by students and medical professionals.

"Ecosystem transformation is happening," he said. "You as leaders in academic medicine can let this happen to you, or you can be part of the solution.

"You need to be engaged. Just sitting back is not an answer."

Leading up to Learn Serve Lead, AAMCNews spoke with Weinstein about the potential for AI to transform medical care, and the risks that the technology brings. Below are portions of that interview, edited for length and clarity.

Let's start by talking about why medicine even needs AI. What problems can AI solve beyond relatively simple functions like providing the tools that listen to a doctor-patient visit and produce written notes?

I like to think about AI as actionable intelligence, not artificial intelligence. How do we turn it into actionable things that change the lives of everyday people?

It's not just for cool stuff, like taking notes, which I call technology substitution. Instead of the doctor writing his notes, the note can now be written by AI. I can listen to my patient and do my job as a physician better, and actually use my ability to focus on the problem and make a patient-centric decision.

But I can't know everything that AI can know. Its ability to consume information in so many forms, in such large quantities, and to communicate that back to you in seconds as useful information has changed everything. How do we use that potential to solve everyday health-related problems? These tools can effect tremendous ecosystem changes, which then affect individuals.

Editor's note: Weinstein is talking about generative AI, which creates original content and analysis by learning from information that it is fed, including text and data.

You've written about ecosystem disruption in the medical system. What does that mean?

People talk about technology as being disruptive. Disruption generally refers to a substitution of technology to make a specific thing easier to do, like organizing the electronic health record.

But that doesn't necessarily change the patient experience for the better. That doesn't get the patient an earlier appointment. That doesn't help the person pay for the medication they need. With AI-enabled technologies, we begin to put those pieces together in a more seamless manner. How do we use these technologies like AI in the systems of the payers, providers, patients, pharma, and supply chain, to connect the dots, to spend money more efficiently and effectively? AI can help with diagnosing, testing, and treatment choices like never before.

What would that look like for patient care?

Can we access "multimodal" information from these communities to intervene with patients before the problems start, before they become less amenable to treatment? Before one has the manifestations of breast cancer, lung cancer, before they get prostate cancer. Diagnose in the earliest stages, where simpler treatments may be much more effective. We haven't had so much upstream predictive capability before.

When you talk about rural health care, you're talking about millions of Americans who don't have regular access to health care, especially specialists and subspecialists. With all the population and environmental-based data coming together in this AI brain of supersized capacity, we can say, "I need to target this county/community, this zip code in Mississippi that I know is at risk for many health ailments. Do they [a particular resident there] have food insecurity, environmental risk? Do they have housing problems? How many people live in the house? Are they smokers? Are they drinkers? What medications do they take? What is their family history? Do they wish to share genomic information?"

Clinical care accounts for only 20-30% of someone's health. Behaviors - smoking, exercise, alcohol use, sexual activity - that's another 30%. Social/economic issues, that's 40%. Physical environment, that's 10%. AI can look at all those pieces - 100% - and we can have much more targeted approaches, and more upstream.

The issue of patient privacy comes up a lot when discussing AI in medical care. What you've described, if I understand correctly, is breaking down silos that hold information about patients, so that information is shared. I get the value of that.

But what do you tell people who say, "Now you've gathered information about me from all sorts of places into one system, and I don't understand that system. And even though you're Microsoft, I don't believe anybody can guarantee that my data doesn't get violated somehow, like by hackers. Or be used for research."

Patients must own their own information. Systems that use AI must respect that. A lot of people talk about implied consent: Because I'm here participating in a medical process, you're implying that I'm consenting to share information. I don't agree with implied consent. I believe in informed choice versus traditional informed consent.

You've told me you're going to use my data in some large database to discover a cancer drug. I agree with that as informed choice. But if you're going to use it for something you haven't told me about, I don't agree with that.

Good, accurate data are essential to the future of artificial intelligence, if it is to be actionable. But not sharing who owns the data and what you're going to do with it is not fair.

But if I'm a patient who needs care urgently, I feel pretty vulnerable. I'm probably going to waive my rights to withhold my personal information in order to solve my problem.

You just said something I don't agree with. I don't think you should accept a risk that we haven't discussed, even if you're in a vulnerable situation.

We lost a daughter to cancer at age 12. She went through chemotherapy and radiation for 11 years. I wasn't happy with the discussions I had with the doctors about the risks. I didn't feel totally informed on some choices. But we were vulnerable and stuck in the unknown. We couldn't not treat our daughter.

So, I understand that people are vulnerable when they're in a situation where they're compromised by some illness or disease. That's even more reason for doctors and researchers to respect their rights as humans, to honor their privacy and to share information only with their permission.

This is a good place to discuss oversight of AI. The book The AI Revolution in Medicine quotes your observations about data and safety monitoring boards, which operate under the National Institutes of Health to oversee clinical trials to ensure patient safety and the reliability of data collection. You worked with a monitoring board when you led a 15-year trial on the effects of back surgery [the Spine Patient Outcomes Research Trial] at the Dartmouth Geisel School of Medicine.

Would oversight boards like that work for AI in medicine?

The governance of this is important. Transparency is key. I found that external group of subject matter experts on the oversight board to be extremely helpful. It's an external body of disinterested parties, with no financial stake but with subject matter expertise.

For AI, what kind of expertise do we bring to the table so that patients feel, and institutions feel, that they [institutions] are doing the best they can to avoid potential compromise of patient information, to do no harm?

I appreciate how thoughtful you are about this.

I want to tell you one thing. I don't know if you know Adi Ignatius, editor in chief of the Harvard Business Review. The new edition just came out, and he wrote about producing work with the help of AI: "Times have changed. What once felt like cheating is now a way to be more productive."

I think what Adi said is that our professional success in dealing with humans is going to be much better with these tools. If you don't use them, I fear that you're going to be leaving people behind and hurting people. As a colleague puts it, "Physicians won't be replaced by AI, but physicians who don't use AI will be replaced by those who do."


Patrick Boyle, Senior Staff Writer

Patrick Boyle is a senior staff writer for AAMCNews whose areas of focus include medical research, climate change, and artificial intelligence. He can be reached at [email protected].