Management guru Peter Drucker said, “If you can’t measure it, you can’t manage it.” While some may disagree with that, for the most part it’s a pretty good tenet to live by, particularly in managing our practices today.
We’ve got access to a myriad of data. Coding data. Productivity data. Cost data. Quality data. Encounter data and reimbursement data. Financial data. Staffing data and turnover data. Data. Data. Data.
The key is to take all this data and turn it into actionable information. Take the historical data and employ it in making future decisions in our organizations. Use it as a tool for improved and informed decision making.
We were recently engaged to conduct a revenue cycle analysis with an internal medicine practice of 15 physicians. There was concern about the accounts receivable, including the amount outstanding for the providers, the aging of the accounts receivable and how effectively the team was collecting the outstanding balances.
The group had never done any internal or external benchmarking. Our first step was to develop a baseline for performance. Next, we looked at trends over a 24-month period. For key performance indicators (KPIs) – metrics that we would consistently use to gauge performance – we chose Total AR per physician, Total AR per provider (which includes advanced practitioners), standard aging buckets for Days in AR, Days Gross FFS Charges in AR, and Adjusted FFS Collection Percentage.
With those measures selected, the practice established its baseline and trended each KPI over the 24-month period.
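To make the calculated KPIs concrete, here is a minimal sketch of how the two derived measures are commonly computed: Days Gross FFS Charges in AR as total AR divided by average daily gross FFS charges, and Adjusted FFS Collection Percentage as FFS payments divided by gross charges net of contractual adjustments. These are standard industry definitions rather than the exact formulas from this engagement, and every figure below is hypothetical, not the client’s data.

```python
# Hypothetical monthly figures for illustration only -- not the client's data.
total_ar = 1_150_000            # total accounts receivable outstanding
gross_ffs_charges = 1_350_000   # gross fee-for-service charges for the period
contractual_adjustments = 540_000
ffs_payments = 790_000
days_in_period = 30

# Days Gross FFS Charges in AR: how many days of average daily charges are tied up in AR
avg_daily_charges = gross_ffs_charges / days_in_period
days_in_ar = total_ar / avg_daily_charges

# Adjusted FFS Collection Percentage: payments collected against collectible (net) charges
adjusted_collection_pct = ffs_payments / (gross_ffs_charges - contractual_adjustments)

print(f"Days Gross FFS Charges in AR: {days_in_ar:.1f}")
print(f"Adjusted FFS Collection %: {adjusted_collection_pct:.2%}")
```

Running this kind of calculation for each month is what builds the 24-month trend described above.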
Once we knew how the practice had performed over the last two years, we wanted to understand how the group measured up to similar practices across the country. This is where we employed MGMA’s DataDive. The 2018 MGMA DataDive Cost and Revenue allowed us to compare our practice baselines to more than 3,000 groups nationally across various specialties, ownership types and geographic regions. A sample of the internal medicine benchmarks we used is shown in the table below.
Internal Medicine – All Practice Types

| Benchmark | Group Count | Mean | Median |
| --- | --- | --- | --- |
| Total AR per physician | 109 | -- | $72,535 |
| Total AR per provider | 74 | -- | $56,222 |
| 0-30 days in AR | 113 | 56.89% | -- |
| 31-60 days in AR | 113 | 11.89% | -- |
| 61-90 days in AR | 113 | 6.58% | -- |
| 91-120 days in AR | 113 | 4.83% | -- |
| 120+ days in AR | 113 | 19.82% | -- |
| Days gross FFS charges in AR | 108 | -- | 30.81 |
| Adjusted FFS collection percent | 156 | -- | 97.89% |
We were careful to determine which practices we wanted to be measured against (privately owned, number of physicians, total revenue, region) and which KPIs to examine. Beyond that, another key was ensuring we used the most appropriate measure when benchmarking against the MGMA data. For example, while we would typically be advised to focus on the median value for the reported benchmarks, that is not the case for the percentage of accounts receivable across the aging buckets; for those benchmarks, best practice is to focus on the mean rather than the median. Due to the breadth and scope of DataDive, it’s critical to ensure you’re comparing apples to apples. Failure to do so can result in erroneous comparisons and misleading assumptions, and contribute to faulty decision-making.
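As an illustration of that apples-to-apples comparison, the sketch below lines a practice baseline up against the benchmark values reported in the table, using the mean for the aging buckets and the median for the other measures. The practice figures are invented for illustration and are not the client’s actual numbers.

```python
# Benchmark values come from the MGMA DataDive table above; the first element of
# each tuple records whether the mean or the median is the appropriate comparison point.
benchmarks = {
    "Total AR per physician":       ("median", 72_535),
    "0-30 days in AR":              ("mean", 56.89),
    "120+ days in AR":              ("mean", 19.82),
    "Days gross FFS charges in AR": ("median", 30.81),
    "Adjusted FFS collection %":    ("median", 97.89),
}

# Hypothetical practice baseline values, in the same units as the benchmarks.
practice_baseline = {
    "Total AR per physician": 84_200,
    "0-30 days in AR": 48.1,
    "120+ days in AR": 27.4,
    "Days gross FFS charges in AR": 38.4,
    "Adjusted FFS collection %": 96.2,
}

for kpi, (measure, bench_value) in benchmarks.items():
    ours = practice_baseline[kpi]
    gap = ours - bench_value
    print(f"{kpi}: practice {ours:,} vs. national {measure} {bench_value:,} (gap {gap:+,.2f})")
```

Laying the numbers out this way makes it immediately obvious which KPIs are driving the accounts receivable concern.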
Once the external benchmarking work was done, we could not only measure the group’s progress over the last two years, but also develop an understanding of where its performance ranked among like groups, based on the normative data.
From that point forward, we established appropriate goals based on the internal and national benchmarks, developed a detailed plan to achieve those goals, and put a system in place to monitor progress.
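One simple way to keep that monitoring system honest is to compare each month’s KPI readings against the agreed goals and flag anything off track. The goals and readings below are illustrative only; they are not the targets set in this engagement or the tooling we used.

```python
# Illustrative goals -- not the engagement's actual targets.
goals = {
    "Days gross FFS charges in AR": 31.0,   # target at or below this value
    "120+ days in AR": 20.0,                # target at or below this percentage
    "Adjusted FFS collection %": 97.8,      # target at or above this percentage
}

def review_month(label, readings):
    """Compare one month's KPI readings to the goals and flag anything off track."""
    for kpi, goal in goals.items():
        actual = readings[kpi]
        higher_is_better = kpi == "Adjusted FFS collection %"
        on_track = actual >= goal if higher_is_better else actual <= goal
        status = "on track" if on_track else "NEEDS ATTENTION"
        print(f"{label} | {kpi}: {actual} vs. goal {goal} -> {status}")

# Example monthly review with made-up readings.
review_month("Month 3", {
    "Days gross FFS charges in AR": 33.2,
    "120+ days in AR": 23.1,
    "Adjusted FFS collection %": 97.4,
})
```

Whether the review lives in a spreadsheet or a short script like this, the point is the same: the goals are written down, measured the same way every month, and anything off track gets surfaced.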
We were only able to do this because we spoke data. Do you speak data?