AI is rapidly changing the future of healthcare. Its ability to analyze complex medical and research data is helping improve diagnostics, drug development, and our overall understanding of health. But along with its promise come serious ethical questions, particularly around equity, trust, and inclusion. These topics were at the center of the “Code, Context, and Care” symposium, hosted by the Cobb Institute on Sunday, July 20, during the National Medical Association conference. The event brought together experts in medicine, informatics, public health, and education to explore how AI can responsibly support healthcare delivery.
AI presents transformative opportunities in healthcare—from diagnostics to population health—but without thoughtful design, it can also amplify existing inequities. As panelists such as Dr. Alison Whelan and Dr. Hassan Tetteh noted, biased datasets, opaque algorithms, and poorly validated tools can undermine clinical trust, misguide interventions, and further marginalize vulnerable populations.
Industry Insight: Ethical AI in Practice
Dr. Gilles Gnacadja, PhD, a research strategist at Amgen, provided a critical industry perspective on the ethical integration of AI in clinical research and development. He emphasized that for AI to be truly impactful, it must be:
- Transparent in its decision-making processes,
- Accountable to both providers and patients, and
- Equitably designed to reflect diverse populations and reduce care gaps.
From a biopharmaceutical standpoint, Dr. Gnacadja underscored the responsibility of industry leaders to implement AI with clinical validity and ethical guardrails, especially when these tools influence real-world treatment decisions. His remarks were a strong reminder that advanced AI must serve all patients—not just those best represented in training datasets.
For healthcare professionals, the takeaway is clear: our engagement and oversight are essential to ensuring AI enhances care without compromising equity or trust.
Panelist Spotlight
This year’s symposium featured a dynamic roster of panelists and speakers representing diverse expertise and lived experience:
- Dr. Gilles Gnacadja, PhD – Shared Amgen’s innovations in AI-driven drug discovery and its role in accelerating therapeutic development.
- Dr. Melissa Simon, MD, MPH – Spoke on inclusive research practices and Amgen’s focus on equity in clinical trials.
- Dr. Virginia Caine, MD, MPH – Highlighted the importance of AI in community-based public health.
- Dr. Alison Whelan, MD – Emphasized AI’s integration into medical education and workforce development.
- Dr. Ronnie Sebro, MD, PhD – Discussed AI applications in medical imaging and clinical diagnostics.
- Dr. Mallory Williams, MD, MPH – Served as moderator, guiding conversations on innovation and policy.
- Dr. Marshall Chin, MD, MPH – Focused on using AI to advance health equity.
- Dr. Brenda Jamerson, PharmD – Addressed the role of AI in pharmacy education and underserved populations.
- Dr. Hassan Tetteh, MD, MBA – Provided insights from military medicine and large-scale AI implementation.
- Dr. MaCalus Hogan, MD, MBA – Shared advancements in AI-assisted orthopedic care.
Key Themes from the Symposium: What This Means for Patients
1. Better Data Means Better Care
When AI is built using incomplete or biased data, it can lead to serious consequences—especially for Black patients. For example, if an algorithm assumes healthcare costs reflect health needs, it may overlook those who face barriers to accessing care. To make AI work for everyone, we need data that truly represents our communities.
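For readers curious about how this plays out mechanically, the cost-as-proxy problem can be sketched in a few lines of Python. The numbers and patients below are hypothetical, not a real clinical algorithm; the point is that two patients with identical health needs can be ranked very differently when past spending stands in for need:

```python
# Illustrative sketch (hypothetical numbers): why using healthcare cost as a
# proxy for health need can under-serve patients who face barriers to care.

# Each record: (patient, true_health_need, past_healthcare_spending)
# Patient B has the same underlying need as Patient A but, because of access
# barriers, generated less recorded spending in the past.
patients = [
    ("A", 8, 12000),  # high need, full access -> high recorded cost
    ("B", 8, 4000),   # same high need, access barriers -> low recorded cost
    ("C", 3, 5000),   # lower need, full access
]

# A cost-based algorithm ranks patients for extra care by past spending.
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)

# A need-based ranking uses the true (usually unobserved) health need.
by_need = sorted(patients, key=lambda p: p[1], reverse=True)

print([p[0] for p in by_cost])  # cost proxy ranks B last: ['A', 'C', 'B']
print([p[0] for p in by_need])  # true need ranks B high:  ['A', 'B', 'C']
```

Patient B, who needs care just as much as Patient A, drops to the bottom of the cost-based list simply because barriers to access kept past spending low. That is why representative data, and the right target variable, matter.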
2. Making Clinical Research More Inclusive
AI can make it easier to match people to clinical trials, which are often the gateway to cutting-edge treatments. But Black patients are still underrepresented in research. That means we risk missing out on care designed with us in mind. Equity in trial access is essential to creating health solutions that actually serve our communities.
3. Training Doctors to Use AI Responsibly
Doctors are learning to use AI as part of their medical training. But it’s not just about learning the technology—it’s about recognizing when AI tools might be biased or harmful. We need to make sure all future doctors, especially those from underrepresented backgrounds, are prepared to use AI in ways that respect and protect every patient.
4. AI Should Support, Not Replace, Your Doctor
AI can help doctors make more informed decisions, but it should never replace human judgment. Patients deserve care that considers their full story—not just what a computer model predicts. That’s why clinical oversight and patient-centered thinking must always come first.
5. Ensuring AI Works for Every Body
AI tools used in things like X-rays, orthopedic care, or pregnancy monitoring need to work for people of all skin tones, body types, and backgrounds. If these tools aren’t tested on diverse groups, they may miss key health issues. Patients have the right to know that the tools guiding their care are accurate—and fair.
BlackDoctor.org will continue to share AI trends and takeaways to keep you up to date!