Is AI really the magical pill for content creation in healthcare?

By Siobhan Robertson, Head of SEO at performance-io / September 22, 2023

Looking back at AI over the last 10 months…

As with all major technology developments, it can take time to understand the true impact, and genuine consequences, of new software. Over the last 10 months, we at performance-io have been keeping a keen eye on advancements in AI technology and their impact on the medical search space and the pharma industry. While this isn’t another “ChatGPT” post, it is a review of what to expect from AI in healthcare SEO, following comprehensive testing and R&D from our SEO team.

At first, it felt like fear swept the industry, with many making outlandish statements like “SEO is dead”, while conversations about replacing content capabilities flourished. I think we can all agree it isn’t dead.

Some got overzealous about the potential efficiencies: the Sports Illustrated & Men’s Journal AI faux pas

Some thought AI was the magic pill to streamline digital content production… and they learned from their mistakes. We’re not saying it doesn’t play a part, but we wanted to share the misfortune of Men’s Journal.

Back in February, we saw Men’s Journal take a very relaxed approach to AI-generated content, confident it could replace much of their existing manual work without compromising quality or integrity. To those in the agency world right now, this may sound familiar!

  • They launched one of their first AI-assisted articles, “What All Men Should Know About Low Testosterone” (the article has since been removed from their site, but we’ve linked to the archived content).
  • The article contained numerous factual inaccuracies when reviewed by a medical professional, and it found itself in the midst of a negative PR scandal, with dozens of publications quick to point out its shortcomings. Giving credit where credit is due, a detailed initial account of the story can be found here.

To summarise in our own words: Men’s Journal answered a medical question with a lot of factually incorrect information. Yes, Men’s Journal was transparent that AI had contributed to the article (in line with Google’s guidance, and highlighted in the article shown below). However, when reviewed by a third-party medical specialist, it fell short of accuracy in advising men on low testosterone levels. We pause for a minute to ask: can pharma afford these kinds of mistakes? With a strict MLR (medical, legal, regulatory) process already adjusting to new technology and approaches, SEO among them in 2023, the risk of getting this wrong will, at best, dent confidence to innovate further with AI.

[Screenshot: the Men’s Journal article, with its AI-contribution disclosure]

So what is Google’s view on AI content?

Google understands the role AI can play in streamlining content production and driving efficiencies, and has had to adapt to its increasing popularity: AI content is acceptable as long as it’s factually accurate, people-first, and delivers a good user experience.

“However content is produced, those seeking success in Google Search should be looking to produce original, high-quality, people-first content demonstrating qualities E-E-A-T.”

Full article on Google’s guidance on AI-generated content here.

There are many conversations confirming that advancements in AI-generated content and Google’s ability to detect such content are not developing at an equal pace. Several tools, such as OpenAI’s AI Text Classifier, were created to detect AI-generated content but have failed due to inconsistent accuracy. Google’s position that this is not a new challenge, with “mass-generated content” being an existing problem, seems a sensible one to hold.

That said, those enjoying short-lived performance gains from mass AI-generated content may find they aren’t sustainable. They’ll see performance losses further down the line as Google continues (and it will) to work on understanding the impact AI is having. An interesting thread in the community section of Google Search Console addresses just this problem: a site owner saw short-term quick wins from AI-generated content, only for the majority of his small site to be de-indexed, with fellow SEOs diagnosing that the problem could be AI-generated product reviews.

Ok, so what about Google’s view on AI health-related content?

It’s similar to its overarching approach to content but with the expected added scrutiny. Google states: “On topics where information quality is critically important—like health, civic, or financial information—our systems place an even greater emphasis on signals of reliability.”

This means that reviewing the quality and accuracy of AI output is paramount, so that extensive time isn’t wasted on re-writing and fact-checking. In our follow-up article, we’ll share the results we’ve found from testing AI-generated medical copy, and some of the learnings that have improved the quality of outputs.

As a side note, we’ve been testing ChatGPT and Bard to understand their capability in deciphering medical queries, purely to gauge whether their role in medical content production is sustainable and how it continues to develop. Yes, Google in particular has been clear that Bard and Google Search are different, but they use the same LLM (large language model). A sketch of the kind of repeatable testing we mean is shown below.
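To make that testing repeatable, rather than pasting queries in by hand, the same set of medical questions can be scripted and re-run on a schedule. Below is a minimal sketch assuming OpenAI’s Python SDK; the model name, query list, and CSV logging are illustrative assumptions, not our production setup.

```python
# Minimal sketch: snapshot a fixed set of medical queries against an LLM
# so answers can be compared month to month. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment.
import csv
from datetime import date

from openai import OpenAI

client = OpenAI()

MEDICAL_QUERIES = [
    "What are the symptoms of depression?",
    "What are the current guidelines for aspirin in pregnancy?",
]

def run_snapshot(path: str = "medical_query_snapshots.csv") -> None:
    """Append each model answer with a date stamp, so responses to the
    same query can be diffed across months (as we did March vs July)."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for query in MEDICAL_QUERIES:
            response = client.chat.completions.create(
                model="gpt-4",  # illustrative; swap for the model under test
                messages=[{"role": "user", "content": query}],
            )
            writer.writerow(
                [date.today().isoformat(), query, response.choices[0].message.content]
            )

if __name__ == "__main__":
    run_snapshot()
```

Re-running the same scripted queries over time is what surfaces shifts like the Bard example discussed below, rather than relying on one-off manual checks.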

Where do we at performance-io feel AI content could cause challenges in the healthcare industry?

  • In the pharmaceutical industry we still see a huge overreliance on paid media, often focused on vanity metrics like clicks, with little measurement of real conversions. With this in mind, there is a worry that AI-generated content could make that worse.
  • A study covered by MIT Technology Review stated that “according to research conducted by the Association of National Advertisers, 21% of ad impressions in their sample went to made-for-advertising sites. They estimated that could equate to a global ad spend of $13 billion being wasted annually.” Ultimately, if there is already a tendency to over-rely on paid media in healthcare, digital ad automation combined with AI content creation could be a recipe for disaster.

Health-related queries being answered with sources from non-health sites…

We’ve tested several queries, ranging from disease awareness to detailed product brand information. To illustrate these advancements and gaps, we thought we’d share one example for the search query: “what are the symptoms of depression?”

When testing this query back in March (pictured), we saw that Bard didn’t want to provide an answer and stated: “I’m unable to help you with that, as I’m only a language model and don’t have the necessary information or abilities.”

Then, testing in July, we can see that a much more comprehensive answer is provided (pictured). On the whole the content seems useful and informative, and it advises a person to speak with a professional if they have concerns.

However, our main observation is this: despite the confidence and advancement Google showed, in the five months between tests, in understanding the question and delivering a response, the source provided is perhaps not the most authoritative. Given the wide range of credible sources Google has access to (NHS, Mind, Mayo Clinic), Bard has instead selected a blog post as its source: https://www.themoodrecipes.com/depression-the-silent-killer-of-mental-health/

In the blog post (pictured below) there is an author and date stamp above the fold, which takes a user through to the author page (also shown below). There, we can see no reference to the author being a mental health professional, nor to this being an accredited healthcare website. The source page on depression is therefore based on subjective observations, and the risk of the wrong information being served to people who are vulnerable is very concerning.

[Screenshots: The Mood Recipes blog post and author page]

Is this just part of E-E-A-T?

We know Google introduced the extra E in E-E-A-T to differentiate between someone who has personally experienced something firsthand and someone qualified to advise based on others’ experiences and their expertise within a field. The clearest example: an article on surviving testicular cancer written by a cancer survivor would demonstrate experience, while one written by an oncologist would demonstrate expertise. Therefore, we appreciate more articles coming out, particularly in the disease awareness space, that aren’t necessarily all authored by medical professionals.

We wanted to give Google the benefit of the doubt with the source provided here, but the blog post demonstrates neither experience nor expertise. The author is clearly not a medical or mental health specialist, and nowhere does she write from personal experience or discuss first-hand experience of depression.

From our broader findings, the sources and references Google chooses to display in Bard seem at best random, and they almost never correlate with the pages it serves on page 1 for the same queries; this has us questioning the validity of the outputs. We know that Google Search and Google Bard are separate, but surely it makes sense to share learnings where possible?

Healthcare advancements moving quicker than ChatGPT’s ability to update its database…

As we know, the healthcare industry and medical guidelines can change rapidly, and a query in ChatGPT about guidelines for aspirin in pregnancy confirms its knowledge base only runs to September 2021.

This is concerning: ChatGPT does state when its knowledge was last updated, but it continues to give medical guidance. For a patient searching for information, who is unlikely to be privy to when industry or medical guidelines change, the impact could be very damaging.
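One simple mitigation, if model output is surfaced at all, is to never show medical guidance without the model’s knowledge-cutoff date alongside it. The sketch below assumes a hard-coded cutoff (September 2021, per the models we tested); in practice the date should come from the vendor’s documentation.

```python
# Minimal guardrail sketch: attach a plain-language staleness notice to any
# model-generated medical answer. The cutoff date is an assumption for
# illustration, not something the API reports programmatically.
from datetime import date

MODEL_KNOWLEDGE_CUTOFF = date(2021, 9, 30)  # assumed cutoff for illustration

def with_staleness_notice(answer: str, today: date | None = None) -> str:
    """Append a note telling the reader how old the model's underlying
    knowledge is, so changed guidelines aren't silently missed."""
    today = today or date.today()
    months_stale = (today.year - MODEL_KNOWLEDGE_CUTOFF.year) * 12 + (
        today.month - MODEL_KNOWLEDGE_CUTOFF.month
    )
    return (
        f"{answer}\n\nNote: this answer draws on information up to "
        f"{MODEL_KNOWLEDGE_CUTOFF:%B %Y} (roughly {months_stale} months ago). "
        "Medical guidelines may have changed since; check a current source."
    )
```

It doesn’t fix the underlying problem, but it makes the gap visible to the patient rather than leaving it buried in a footnote.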

To summarise, it’s clear that AI content is playing a bigger role in digital content production, with Google now accommodating it in its guidelines as long as it adheres to the same quality parameters set for all other content.

  • However, queries recently shown in “People also ask” boxes, such as “how much AI content can I get away with” and “what AI score is bad”, reflect the broader sentiment of an online audience looking for the quickest way to use AI to reach a positive outcome.
  • From our testing, which we’ll dive into in more detail in our follow-up article, the sources and references for medical queries seem far from trustworthy; no comprehensive output can be relied on for content production without extensive review.

At performance-io, nearly all of our clients have come to us as their SEO partners to help them integrate AI into their SEO approaches, with some wanting to work at pace (not our recommendation) and others taking a more cautious, iterative approach.

Our job is to test, and as soon as there are efficiencies and consistency, we pass those savings on. Following further posts on findings from our testing, our CEO will also be publishing his views on what we are seeing in this space and its impact on pharma. Sign up here if you’d like to be notified of future articles.
