AI Bias in Healthcare Is a Hidden Inequality Machine. How Can We Help Influence and Fix It?

By Sebastian Stokes | Jun 09 2025

AI is transforming healthcare—but it’s not transforming it equally. From large language models (LLMs) to diagnostic tools, these technologies often overlook the needs of underrepresented groups and underprivileged communities. As political pushback threatens diversity and equity efforts, the risk of deepening health disparities is growing. 

The Ipsos AI Monitor 2025, released this week, reports that a slim global majority (54%) trusts artificial intelligence not to discriminate against or show bias toward any group of people. Yet in some clear instances, AI is making the healthcare gap wider.

AI Isn’t Neutral, It’s a Mirror of Our Biases 

AI is often described as “unbiased” because it is a machine. However, these tools are only as unbiased as the data on which they are trained. These biases are not about intention but impact. For example, AI-powered skin cancer detection tools trained primarily on lighter skin tones are significantly less accurate at identifying melanoma in darker skin. Or consider AI-driven symptom checkers, which often prioritize male-centric symptoms for conditions like heart disease, leading to a dangerous under-diagnosis of women. This isn’t just a system glitch; it’s a structural flaw reflecting decades of biased data. 

Why Communications Agencies Hold the Key 

As communicators, we craft narratives. AI doesn’t understand human experience, but we do. The strategy, creative and behavioral science experts at communications agencies can intervene by creating content designed to serve everyone, not just those who historically received the best care. In this way, we will influence the universe of information on which AI tools are trained.  

A perfect example of how our work as communicators can contribute in a positive way is our Bristol Myers Squibb (BMS) Cancer Equals campaign in the UK. The campaign aims to foster a national conversation and support the development of solutions that address health inequalities in cancer. Leveraging authoritative, evidence-based data points from various influential stakeholders, we’re helping to shape AI training datasets, making them more representative. 

How We Build a Fairer AI Future 

Fixing AI bias requires systemic change, and communications agencies can lead the charge by demanding better data, advocating for more representative training sets and challenging unconscious biases before they get baked into AI algorithms. Here's where we did some human-first brainstorming, augmented by our internal AI tech stack, to identify meaningful solutions. Augmenting our intelligence gets us further, faster.

  1. Generative Engine Optimization / Share of Model: Shaping AI’s Knowledge Base

Most AI models pull their knowledge from existing online content, meaning whoever controls the content controls the narrative. If communicators actively create inclusive, high-quality content, we can shape what AI learns. Think of it like "Share of Search" but for AI: "Share of Model" or Generative Engine Optimization. The more accurate, representative and diverse our content is, the more influence it has on LLM outputs. We are well placed to run multichannel campaigns in which owned content, combined with earned and paid media targeting authoritative information sources, can meaningfully shape LLM outcomes.

  • What this means for you: Develop SEO-driven, AI-optimized multichannel content representing diverse patient experiences, inclusive messaging and underrepresented voices. 
  2. Demand Diverse Data and Explore Partnerships

AI learns from what it’s given. If that data skews white, male and Western, it will perform better for white, male and Western patients. We must push AI developers to use representative datasets reflecting real-world diversity. Too many AI-driven healthcare tools are launched without proper bias testing. Agencies should work alongside AI developers to embed fairness audits into the development process before the tools start making real-world decisions.  

  • What this means for you: Look for opportunities to partner with AI developers and tech companies to audit and improve dataset inclusivity before new tools are deployed. We can also advocate for bias review panels to ensure that content, and its impact on LLMs, is positive. 
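To make "fairness audit" concrete for non-technical teams: at its simplest, an audit compares a model's performance across demographic subgroups and flags outsized gaps before deployment. The sketch below is a minimal, hypothetical illustration; the group names, data and 10-point threshold are assumptions for demonstration, not a real audit standard.

```python
# Minimal sketch of a subgroup fairness audit: compare a model's
# accuracy across demographic groups and flag large gaps.
# All names, data and thresholds are hypothetical, for illustration only.

from collections import defaultdict

def subgroup_accuracy(records):
    """records: list of dicts with 'group', 'label', 'prediction' keys."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        if r["label"] == r["prediction"]:
            correct[r["group"]] += 1
    return {g: correct[g] / total[g] for g in total}

def audit(records, max_gap=0.10):
    """Return per-group accuracy and whether the worst gap exceeds max_gap."""
    acc = subgroup_accuracy(records)
    gap = max(acc.values()) - min(acc.values())
    return acc, gap > max_gap

# Toy example: a screening tool that performs worse on one subgroup.
records = (
    [{"group": "lighter_skin", "label": 1, "prediction": 1}] * 9
    + [{"group": "lighter_skin", "label": 1, "prediction": 0}] * 1
    + [{"group": "darker_skin", "label": 1, "prediction": 1}] * 6
    + [{"group": "darker_skin", "label": 1, "prediction": 0}] * 4
)
acc, flagged = audit(records)
print(acc)      # {'lighter_skin': 0.9, 'darker_skin': 0.6}
print(flagged)  # True: a 30-point gap exceeds the 10-point threshold
```

Real-world audits use richer metrics (false-negative rates, calibration by group), but even a check this simple, run before launch, would surface the kind of skin-tone performance gap described above.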
  3. Inclusive Strategy and Inclusive Creative

AI isn’t just about data; it’s about how that data is communicated. Communicators must ensure that health campaigns, chatbots and AI-generated messaging are designed with cultural competence and linguistic inclusivity.  

  • What this means for you: Develop AI literacy training for comms teams, ensuring campaigns are checked for algorithmic bias before they go live. This means being clear on how AI algorithms work, what bias in AI means, why AI models hallucinate and why content matters. 
  4. AI Transparency & Explainability

AI bias thrives in opacity. If no one understands how an AI model makes decisions, no one can challenge them. Comms agencies are working diligently to unpack how these algorithms surface content, so that AI-generated healthcare advice is both accurate and accountable. 

  • What this means for you: Push for “explainability statements” in all AI-powered tools, clarifying where information is coming from and how decisions are made. 
Diverse Medicine Is Just Good Medicine 

There is simply no room for bias in medicine. Clinical trials are working to enroll participants who reflect the populations affected by the diseases being studied, so that results are trustworthy. AI-driven diagnostic technology must consider how symptoms manifest in all types of patients. Health campaigns must resonate with all the audiences they seek to educate and play a role in representing underrepresented patient populations. Bias isn't just "built-in"; it's something we can challenge, dismantle and fix. As communications professionals, we have the power to shape the narratives, demand better data and drive the industry toward fairness. AI will define the future of healthcare, and we must decide what kind of future we want. The real question isn't if AI will influence healthcare; it's who gets to shape that influence. 

Feeling inspired to do more in influencing LLMs? Spectrum Science is here to help guide you through this process. Contact us today and let's chat, human to human. 
