Take all the recent headlines about generative AI and the flurry of academics using platforms like ChatGPT, often to produce robo-written screeds about the dangers of ChatGPT: it’s almost enough text to train an entire large language model (LLM).
Whether those headlines about Bing AI, Google’s Bard and other generative platforms leave you bullish or bewildered, there is no question that the technological advances of recent months sit at the center of a discourse that AI itself might be cowriting.
[Editor’s note: A human wrote this article, but we did have the help of AI to illustrate the January issue of MGMA Connection magazine.]
A March 28, 2023, MGMA Stat poll points to limited embrace of these types of AI tools in healthcare: 10% of medical group leaders report using them in their organizations, while 85% do not and another 5% are unsure. The poll had 569 applicable responses.
Top use cases for AI in healthcare today
In many instances, medical group leaders signaled that they know many of their vendors use some form of AI to enable certain tasks, even though the practices do not use AI tools directly in house. Other emerging uses noted by poll respondents included evaluating AI to help refine a triage tool for applying decision support in the practice.
Some of the most frequently cited uses of AI tools by practice leaders in the poll included:
- Patient communications, ranging from contact center answering-service AI that helps triage calls and sort/distribute incoming fax messages, to AI-enabled outreach such as appointment reminders and marketing materials;
- Capturing clinical documentation, often with natural language processing (NLP) or speech recognition platforms that serve as virtual scribes (a minimal sketch of this pattern follows the list); and
- Improving billing operations and predictive analytics.
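To make the virtual-scribe pattern above concrete, here is a minimal sketch in Python using the open-source Hugging Face transformers library: a speech recognition model transcribes the visit audio, then a summarization model drafts a note for clinician review. The model names and audio file path are illustrative assumptions, not a depiction of any particular vendor’s product.

```python
# A minimal sketch of the "virtual scribe" pattern: transcribe visit audio,
# then have an NLP model draft a note for a clinician to review and sign.
# Model names and the audio file path are illustrative assumptions.
from transformers import pipeline

# 1. Speech recognition: turn the recorded visit into a raw transcript.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
transcript = asr("visit_recording.wav")["text"]  # hypothetical audio file

# 2. NLP: condense the transcript into a draft clinical note.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
draft = summarizer(transcript, max_length=200, min_length=50)[0]["summary_text"]

# The model only drafts; a clinician still reviews, edits, and signs the note.
print(draft)
```

Real products wrap this two-step workflow in safeguards for protected health information and integrate with the EHR; the sketch shows only the basic shape of the pipeline.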
Trial and error
However, a strong majority of medical group leaders whose organizations do not currently use AI tools noted that they are unlikely to add any until they see more evidence of their efficacy. As one respondent told us, “[We] tried; none of them work as advertised.”
To indulge ourselves a bit, we asked ChatGPT to give us a list of the best articles about generative AI use in healthcare. The results (see image) were interesting:
- None of the article titles are real.
- The hyperlinks point to totally unrelated publications, ranging from a study on black string theory in high-energy physics to an article on number theory from a professor at the University of Versailles Saint-Quentin in France.
The exact same prompt in a trial version of Google’s Bard didn’t attempt as much as ChatGPT did; Bard instead noted that, as an LLM, it doesn’t “have the capacity to understand and respond” to the command.
Meanwhile, the “New Bing” presented three real articles (two from Forbes and one from The Wall Street Journal) with working hyperlinks to them, as well as two more relevant articles on the topic. But in some respects, this might have been a job better left to the respective search engines than to the answer machine: old-fashioned Google and Bing searches provided a tsunami of relevant write-ups on this topic.
The challenges and concerns around AI in healthcare
When it comes to iterations of GPT (generative pretrained transformer) models, two legal scholars writing for JAMA outline three distinct use cases, each with its own concerns:
- AI within patient-physician relationships that augments clinician judgment without replacing it;
- Patient-facing AI in care delivery that substitutes for clinician judgment; and
- Direct-to-consumer health advice.
On information integrity, the authors suggest that “inaccurate advice generated by AI is no different” from bad information from other sources, and that “existing legal frameworks can assign liability” for erroneous information. They expressed particular concern about GPT-created information offered outside the patient-provider relationship, such as “using LLMs to provide basic mental health care to patients or to replace clinic staff who usually perform triage.”
The rapid advancement of these tools and the proliferation of their use through publicly available commercial platforms have been heralded as a major step forward for healthcare, but they also raise several concerns about when and how to use these tools.
In a Viewpoint article for JAMA earlier this week, three informatics leaders argued that healthcare leaders can “leverage the promise while minimizing the peril of AI” by developing a code of conduct for the use of AI in healthcare.
Additional reading
- Stephen Wunker of New Markets Advisors writes in Forbes about four key areas for generative AI to help in healthcare: interpreting unstructured data, explaining data in a coherent way, engaging people in conversation, and generating new ideas.
- Sai Balasubramanian, MD, JD, also writing in Forbes, suggests that Google’s Bard has exceptional potential “with regards to enabling medically literate conversation, perhaps even as a way to aid physicians and specialists in creating diagnostic plans or bridging care for their patients.”
- Belle Lin of The Wall Street Journal examines use of generative AI for assisted documentation and synthetic data.
JOIN MGMA STAT
Our ability at MGMA to provide great resources, education and advocacy depends on a strong feedback loop with healthcare leaders. Sign up by texting “STAT” to 33550 or by visiting mgma.com/stat, and make your voice heard in our weekly polls sent via text message.
Do you have any best practices or success stories to share on this topic? Please let us know by emailing us at connection@mgma.com.