
In one of my courses at Stanford Medical School, my classmates and I were tasked with using a secure AI model for a thought experiment.

We asked it to generate a clinical diagnosis from a fictional patient case: “Diabetic retinopathy,” the chatbot said. When we asked for supporting evidence, it produced a tidy list of academic citations. The problem? The authors didn’t actually exist. The journals were fabricated. The AI chatbot had hallucinated.

This delicate relationship between AI and medicine was a major reason I chose Stanford for medical school. Situated in the heart of Silicon Valley's ecosystem of innovation and AI startups, it felt like the right place to learn about the future of medicine.

I eventually realized this assignment was never about breeding cynicism toward AI. It was Stanford's way of teaching us to work with it: to recognize its limitations, reflect on our own, and remember that humanity must always come first.

There are real fears about AI's integration into medicine

At dinner one evening, a friend's parents asked me whether I worried about threats to the future of medicine. They mentioned headlines about deep learning models, layered neural networks loosely inspired by the human brain, making advances in healthcare.

In certain contexts, AI can detect cardiac arrest and abnormal heart rhythms with higher accuracy than board-certified cardiologists. Some of these breakthroughs were born right here at Stanford.

A few days later, during a phone call, my mother shared a story about a teenager who died after confiding in an AI chatbot and discussing plans to end his life. In another course session, we discussed a case of bromism, a toxic syndrome caused by excess bromide, a compound once used in sedatives, in a patient who had consulted ChatGPT.

There are other real risks: breaches of confidentiality, bias when models learn from data that doesn't reflect the diversity of patients, and authoritative-sounding advice that is misleading or unsafe.

But I’m putting my patients first

In our classrooms at Stanford, cadavers are called “silent teachers.” In the hospital and clinic, patients become our most profound educators. We take home the stories of our patients — the heartbreak, the pain, the breakthroughs, the hope. I will never forget sitting with a mother who had just lost her son to a fentanyl overdose.

If I approach AI in medicine from a place of fear, particularly fear of losing my job, I have lost. For me, healthcare is about advocating for the best outcomes for my patients. If AI enhances clinical diagnosis, distills complex information, or fills gaps in my knowledge, then it unquestionably deserves a place in my generation of medicine.

Sure, there are negative aspects to AI. But these are not reasons to retreat; they’re reasons to collaborate and find ways to use this tool to help our patients.

I’m in medical school to improve the lives of many

In a 2021 interview with The New York Times, my 17-year-old self, then still in high school, said I wanted to become a doctor and open a clinic for immigrants.

After the article was published, a 90-year-old Vietnam War veteran emailed me and asked me to keep that promise. We shook hands outside the Stuyvesant High School auditorium in New York. Years later, a week before I left for medical school, we met again in Manhattan. We had become good friends.

On the steps leading up to Bryant Park, I promised this person — who had unknowingly become one of my staunchest supporters — that I would do the best for my future patients.

Even in an AI-driven future, the heart of medicine remains stubbornly, unmistakably human. Thirty years into my practice, even if all I did was help a patient and their family leave my office a little more hopeful and informed than when they arrived, I would have won.


