
    Many have heralded the emergence of artificial intelligence (AI) tools in healthcare as a salve for the problems of staffing shortages and time-crunched practice leaders — but those same issues are also a key hindrance to wider adoption of AI in the industry.

    That’s according to a Sept. 5, 2023, MGMA Stat poll that found more than one in five medical groups (21%) have added or expanded their use of AI tools in 2023, while the majority (74%) remain on the sidelines and another 5% reported they were unsure. The poll had 480 applicable responses.

    That most medical groups have yet to adopt AI tools aligns with previous MGMA Stat polling from March 2023, which found only 10% of medical groups were using AI tools, such as ChatGPT, on a regular basis.

    This measured approach to AI comes as several large organizations take their AI experiments to the next level:

    • HCA Healthcare recently announced a pilot program working with Google Cloud to improve clinical documentation for emergency room physicians, as well as plans to use the Med-PaLM 2 large language model (LLM) to help provide answers to caregivers’ medical questions.
    • Boston Children’s Hospital recently hired an AI prompt engineer with a clinical and coding background to write effective prompts for AI programs.

    A recent Morgan Stanley forecast projects that the healthcare industry’s average budget allocation to AI technologies will grow from just 5.7% in 2022 to 10.5% in 2024, with major developments expected in biopharma, life sciences tools and diagnostics, medical technology, and healthcare services and technology.

    AI’s time has come, but do practices have time to adopt it?

    Among practice leaders who responded to the latest poll, many cited concerns about their physicians’ comfort level with certain AI technologies or about ensuring the security of data used for AI.

    But one issue stood out above the rest — despite the promises of automation and savings of time and costs, most practice leaders don’t have the time or resources to make the leap just yet: “I would like to, but everything we seem to want to do is tied up in a revenue cycle uplift,” one practice leader told MGMA.

    This sentiment was echoed by several respondents who, while aware of numerous use cases for AI tools in healthcare, noted that their biggest difficulty is carving out the time and resources to work with vendors, integrate tools with existing EHRs, PM systems, and other platforms, and build policies to govern the use of those tools.

    What’s working today in AI adoption in medical groups

    Among the respondents who have added or expanded AI tool utilization this year, the use cases generally fell into three main areas:

    • The most common area of AI tool adoption reported was clinical AI, such as:
      • AI-assisted clinical decision support (e.g., for asthma management)
      • Natural language processing (NLP) or speech recognition for visit notes/documentation
      • Predictive AI for clinical performance.
    • The second largest area of AI use was in revenue cycle management, with several respondents noting they have adopted tools to:
      • Assist in medical coding to improve quality and accuracy of medical codes
      • Provide predictive analysis of key performance areas.
    • The last area was patient communications, with respondents noting the use of generative AI tools to improve their marketing messages, as well as more advanced uses (see the sketch after this list), including:
      • Contact center answering service tools to help triage calls and sort/distribute incoming fax messages
      • Conversational AI used for chatbots and virtual assistants.
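
    As a rough illustration of that last category, the sketch below shows how a practice might draft a routine patient message with a generative model. This is a minimal sketch assuming the OpenAI Python client; the model name, prompt, and draft_reminder() helper are illustrative rather than any respondent’s actual setup, and a real deployment would need review for tone, accuracy, and PHI handling.

```python
# Minimal sketch of drafting a patient-facing message with a generative
# model, assuming the OpenAI Python client (pip install openai). The model
# name, prompt, and helper are illustrative, not a specific practice's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_reminder(first_name: str, visit_type: str, date: str) -> str:
    """Ask the model for a short, plain-language appointment reminder."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model could be swapped in
        messages=[
            {"role": "system",
             "content": "You write brief, friendly appointment reminders for a "
                        "medical practice. Do not include medical advice."},
            {"role": "user",
             "content": f"Write a two-sentence reminder for {first_name} about "
                        f"a {visit_type} visit on {date}."},
        ],
    )
    return response.choices[0].message.content

# Note: send no real patient data without a compliant agreement in place.
print(draft_reminder("Alex", "annual wellness", "October 12"))
```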

    Hesitance, hurdles and regulations

    One of the major apprehensions around AI tools in healthcare, particularly clinical AI, has been whether the technology would end up replacing physicians in certain settings.

    Kent Hudson, MD, CPE, director of artificial intelligence clinical operations for Radiology Partners, recently joined the DMSG Healthcare podcast to highlight the expansion of AI tools beyond imaging. These tools support radiologists with real-time report analysis, watching for key phrases that trigger best-practice recommendations on next steps (PET scans, biopsies, etc.). Using AI to remind physicians of those recommendations saw compliance rates jump from around 10% to 15% up to more than 80%, but it all hinged on educating the radiologists that the technology is designed to support physicians’ expertise and make them “better together,” Hudson said, not to replace their jobs.
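
    Radiology Partners has not published its implementation in detail, but the general idea of watching report text for key phrases can be shown with a minimal phrase-matching sketch; the phrases and reminder text below are invented for illustration.

```python
# Minimal sketch of phrase-matching over radiology report text. The phrase
# list and reminder text are invented for illustration; they are not the
# actual Radiology Partners system, which has not been published in detail.
FOLLOW_UP_PHRASES = {
    "incidental pulmonary nodule":
        "Best practice: consider follow-up chest CT per current guidelines.",
    "indeterminate adrenal lesion":
        "Best practice: consider dedicated adrenal-protocol imaging.",
    "suspicious soft tissue mass":
        "Best practice: consider PET scan or biopsy for further workup.",
}

def flag_recommendations(report_text: str) -> list[str]:
    """Return best-practice reminders for any key phrase found in the report."""
    lowered = report_text.lower()
    return [f"'{phrase}': {reminder}"
            for phrase, reminder in FOLLOW_UP_PHRASES.items()
            if phrase in lowered]

sample = ("FINDINGS: A 6 mm incidental pulmonary nodule is noted in the "
          "right upper lobe. No pleural effusion.")
for reminder in flag_recommendations(sample):
    print(reminder)
```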

    In addition to winning buy-in from physicians, healthcare leaders must reckon with another doctor who could have significant sway over the future of AI in healthcare: U.S. Sen. Bill Cassidy, R-Louisiana, a gastroenterologist who recently created a framework for “artificial intelligence regulation,” as reported by POLITICO earlier this week. The report noted that Cassidy, who is the ranking member of the Senate Health, Education, Labor, and Pensions (HELP) Committee, is taking feedback on the framework and its suggested “targeted updates” to medical device regulations through Sept. 22.

    The rapid innovation in AI was previously the topic of a headline-grabbing May 2023 hearing by the Senate Judiciary Subcommittee on Privacy, Technology and the Law, at which OpenAI chief executive officer Sam Altman welcomed the idea of regulating new AI systems, though few specifics emerged in the following months until Cassidy’s framework began circulating.

    In August, John Halamka, MD, MS, president of the Mayo Clinic Platform and one of the foremost leaders in medical informatics and AI in healthcare, spoke on the Ground Truths podcast on this subject. He noted that publicly available AI tools such as ChatGPT pose a new concern for medical professionals: patients no longer just Google their symptoms and arrive with a preconceived diagnosis; some LLM tools now present them with specific citations from the medical literature and suggested treatments.

    Halamka noted that not all those models are built equally, with some trained on the public internet while others are fed data from PubMed, “but none are trained on the rich clinical experience of millions and millions of patients” and thus “don't have the mastery of the care journey.”

    Finding a way forward to train an AI model to use only the best clinical data while avoiding unintended biases, with certain guardrails in place, “could be very expensive, could be very time-consuming,” Halamka cautioned, but it would be something worth exploring. Halamka also noted that he expects that the FDA, the Office of the National Coordinator (ONC) and the White House will “work through generative AI oversight” to build upon the existing, voluntary controls that some companies have instituted.

    One example is the recent ONC proposed rule that would require a “nutrition label” of sorts for AI technologies and algorithms used in EHR systems. MGMA Government Affairs submitted comments to ONC on the proposed rule in June 2023, focusing on promoting interoperability and reducing administrative burdens: “While AI and predictive DSI [decision support intervention] can be powerful tools, MGMA supports ONC’s efforts to facilitate better understanding of these rapidly developing technologies.”
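
    The proposed rule’s exact attribute list is beyond the scope of this article, but a hypothetical sketch of the kind of metadata such a “nutrition label” might surface could look like the following; all field names and values are invented for illustration, not drawn from the rule text.

```python
# Hypothetical illustration of the kind of metadata an ONC-style "nutrition
# label" for a predictive algorithm might surface in an EHR. All field names
# and values are invented; they are not the proposed rule's attribute list.
algorithm_label = {
    "name": "SepsisRiskScore-v2",  # hypothetical model
    "developer": "Example Health AI, Inc.",
    "intended_use": "Flag adult inpatients at elevated risk of sepsis",
    "intended_users": ["nurses", "hospitalists"],
    "training_data": "De-identified records from 12 U.S. hospitals, 2015-2021",
    "known_limitations": ["Not validated for pediatric patients"],
    "performance": {"auroc": 0.87, "validation": "external, 3 sites"},
    "last_updated": "2023-06-01",
}

# A practice might render this at the point of care so clinicians can judge
# whether the tool fits their patient population.
for field, value in algorithm_label.items():
    print(f"{field}: {value}")
```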
