Imagine an auditor looking at the distribution of evaluation and management (E/M) services for your practice. What would the auditor find on comparing your practice's usage patterns to those of other physicians in the same specialty in your state?
If a provider is an outlier on an E/M benchmark comparison—for instance, because he or she uses more consultation codes or more upper-level codes—that is not necessarily a bad thing. In many cases, the variation can be explained legitimately (for instance, when a spine surgeon who sees patients only on referral is compared to general orthopaedic surgeons). Nevertheless, being an outlier will prompt questions. Ideally, you will have answers to explain the deviation, supported by excellent documentation.
From any payer’s perspective, graphing code usage produces a distribution curve that serves as a basis for comparison. This is especially true for Medicare, which paid $25 billion for E/M services (19 percent of all Medicare Part B payments) in 2009. Additionally, Comprehensive Error Rate Testing (CERT) audits revealed a national Medicare fee-for-service error rate of 8 percent for the November 2009 reporting period (up from 6 percent in 2008), which equates to $24.1 billion in erroneous payments. Medicare’s recovery audit contractor (RAC), CERT, and zone program integrity contractor (ZPIC) audits are out to recoup money paid to those outliers, and they have been successful in collecting.
Knowing how your practice compares to others, both overall and on a physician-to-physician basis, is critical. Ignore those who tell you that your coding pattern should look like the proverbial “bell-shaped curve.” Your coding should instead reflect the level of care and documentation in your providers’ records. Your subspecialty or other unique aspects of your practice, your patient population, and your level of automation will all influence your coding, your E/M distribution, and your variations from the “norm.”
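The benchmarking described above can be sketched in a few lines of code: convert raw code counts into a percentage distribution and flag the codes that deviate most from a peer benchmark. All visit counts, benchmark percentages, and the five-point deviation threshold below are hypothetical placeholders for illustration; real benchmarks would come from payer or specialty-society utilization data.

```python
# Hedged sketch: compare a practice's E/M code distribution to a benchmark.
# All counts, benchmark percentages, and the threshold are hypothetical.

# Established-patient office visit codes (99211-99215) with hypothetical counts
practice_counts = {
    "99211": 40,
    "99212": 180,
    "99213": 430,
    "99214": 300,
    "99215": 50,
}

# Hypothetical specialty benchmark: percentage of visits at each level
benchmark_pct = {
    "99211": 5.0,
    "99212": 20.0,
    "99213": 50.0,
    "99214": 20.0,
    "99215": 5.0,
}

def distribution_pct(counts):
    """Convert raw code counts into percentages of total visits."""
    total = sum(counts.values())
    return {code: 100.0 * n / total for code, n in counts.items()}

def flag_outliers(practice, benchmark, threshold=5.0):
    """Flag codes whose usage deviates from the benchmark by more than
    `threshold` percentage points -- the codes an auditor would likely
    ask about first."""
    return {
        code: round(practice[code] - benchmark[code], 1)
        for code in benchmark
        if abs(practice[code] - benchmark[code]) > threshold
    }

practice_pct = distribution_pct(practice_counts)
outliers = flag_outliers(practice_pct, benchmark_pct)
# With these hypothetical numbers, 99213 is under-used and 99214 over-used
# relative to the benchmark, so both would be flagged for review.
```

Being flagged by such a comparison is not proof of error; as noted above, it simply identifies the codes whose deviation you should be prepared to explain with documentation.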