Everyone reading this article already knows about (or will soon become all too familiar with) work relative value units (wRVUs). Through the 1980s, Medicare paid physicians based on "usual and customary" charges. That policy was opaque, lacked standardization, and contributed to compensation disparities between procedural and non-procedural disciplines. So, over the course of several years and multiple iterations, a team of economists set out to create the wRVU system, which went live in 1992. Every service, designated by a Current Procedural Terminology (CPT) code, was assigned a number of wRVUs meant to correctly order and scale the time and intensity of the service provided. For example, an ophthalmology service with 10 wRVUs should require twice the "work" of a pathology service with 5 wRVUs. Recognizing that the system was imperfect and would require ongoing changes, the AMA established the Relative Value Scale Update Committee (RUC), which has been in charge of recommending updates to CMS ever since. Criticisms of the accuracy of the wRVU system are plentiful, as would be expected of a program that directs the annual flow of $600 billion, and a whole series of blog posts could be written about those issues alone. Instead, today I want to focus on the rest of what we do as academic surgeons: the so-called "academic RVUs" that we produce, what has been published on the topic, and the associated challenges and opportunities.
What is an academic RVU?
At the turn of the millennium, academic hospitals became interested in quantifying and standardizing non-clinical work. These institutions were moving toward "mission-based" budgets: essentially redistributing the two primary revenue streams (clinical and grants) across the various academic activities (e.g., research, education, service). Quantifying the relative importance of activities within these non-clinical domains could allow funds to be redistributed equitably.
In 2000, the Association of American Medical Colleges convened a series of panels to discuss the pros and cons of metrics that could be used to quantify effort in research and education1,2. Although these panels provided a 30,000-foot perspective on potential options, they concluded that measures should be individualized to each institution's priorities. Since then, several centers have reported their experience implementing some type of quantitative system to track, measure, and compare non-clinical activities3-5. Baylor's Department of Surgery published its experience with significant granularity. It internally generated a points system that purposefully excluded activities already compensated through alternative means (e.g., serving as program director). Examples include 150 points for a national presentation, 500 for a high-impact journal article, and 650 for submitting a high-value grant, along with 60 other activities spanning clinical trials, institutional and national leadership, peer review, teaching awards, teaching activities, and patents. Scores were tallied annually within each faculty rank, and bonuses were paid to the top half of performers. After implementing the system, Baylor saw increases in research and service productivity.
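To make the mechanics concrete, here is a minimal sketch of how such a tally might work. The three point values are taken from the Baylor examples above; the activity keys, the faculty data, and the flat bonus amount are illustrative assumptions (the published program paid variable bonuses up to a maximum, as discussed below).

```python
from collections import defaultdict

# Example point values from the Baylor report; the full system weighted
# roughly 60 additional activities.
POINTS = {
    "national_presentation": 150,
    "high_impact_article": 500,
    "high_value_grant_submission": 650,
}

def annual_scores(activity_log):
    """Sum points per faculty member from (name, rank, activity) records."""
    scores, ranks = defaultdict(int), {}
    for name, rank, activity in activity_log:
        scores[name] += POINTS.get(activity, 0)
        ranks[name] = rank
    return scores, ranks

def top_half_bonus(scores, ranks, bonus=10_000):
    """Pay a flat bonus to the top half of performers within each rank."""
    by_rank = defaultdict(list)
    for name, score in scores.items():
        by_rank[ranks[name]].append((score, name))
    payouts = {}
    for members in by_rank.values():
        members.sort(reverse=True)          # highest score first
        cutoff = max(1, len(members) // 2)  # top half, at least one person
        for _, name in members[:cutoff]:
            payouts[name] = bonus
    return payouts

# Hypothetical year: two assistant professors and one associate professor.
scores, ranks = annual_scores([
    ("Dr. A", "assistant", "high_impact_article"),
    ("Dr. A", "assistant", "national_presentation"),
    ("Dr. B", "assistant", "national_presentation"),
    ("Dr. C", "associate", "high_value_grant_submission"),
])
print(top_half_bonus(scores, ranks))  # {'Dr. A': 10000, 'Dr. C': 10000}
```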
Challenges and opportunities
The science behind academic RVUs is still very much in its infancy, especially compared with clinical RVUs. Below, I outline some of the key factors that differentiate academic RVUs from clinical RVUs, along with potential opportunities for future research:
- There is no CPT system. When generating the original wRVU system, the investigators had a distinct advantage: an existing system (CPT) already described most of the clinical activities physicians performed at the time. No such system exists for academic activities. Leadership is quickly recognizing the expanding breadth of activities performed by academic surgeons, such as mentorship, coaching, quality improvement, public health, and global health. Further confounding the issue, contribution is potentially more nuanced in academics than in the clinical world: basic science vs health services, first vs senior vs middle author, and course director vs co-director are all variations on a single "deliverable" that require different work. Lastly, because we bill for clinical work, an automatic mechanism exists for tracking the breadth and volume of activities performed; academic activities, by contrast, are frequently scattered across personal CVs, institutional databases, and online repositories (e.g., Google Scholar, Scopus, ORCID) with no gold-standard template. Efforts are needed in all of these domains to create a language that allows for apples-to-apples comparisons (a minimal sketch of one possible record format follows this list).
- Empiric data supporting activity weights is limited. Most studies stop after stating that an internal iterative process was used to calculate the scores assigned to activities; a few describe the details of an internal focus group or survey. I am aware of only one surgical study, of faculty in the Association of Program Directors in Surgery, that provides a few scores that may be applicable across institutions6. As a result, scores in the published literature differ wildly. For example, in one study a national presentation carried the same score as a first-authored journal article, while in another the journal article (regardless of authorship) was 2-4 times more valuable than the national presentation3,4. In theory, the work (or at least the relative work) of these activities should be comparable across institutions. In the original wRVU system, magnitude estimation was the methodology used to rank-order services within a specialty, and the same methodology could rank-order activities within research, education, service, and so on; statistical cross-linking was then used to compare disparate scales (e.g., radiation oncology vs surgery), and a similar mechanism could be used for academic RVUs (a toy example of both steps follows this list).
- There is no conversion factor, but also no balanced-budget requirement. Medicare converts RVUs to dollar amounts using an annual conversion factor, and because Medicare requires a balanced budget, this value goes up and (mostly) down as service volumes change and RVU values for services are adjusted. Most published reports of academic RVUs paired them with a modest financial incentive: in the Baylor study described above, the maximum bonus paid out was $10,000, or ~3.5% of that year's assistant professor salary. Although the study reported increases in academic metrics over the study period, it was a single-center, retrospective, uncontrolled study, and the authors acknowledged several potential confounders (e.g., a new recruitment focus) that may also have contributed to the increase. Robust studies are needed to understand the role of financial (and other) incentives, as well as their structure and size. Interestingly, most studies have stratified physicians into quantiles to determine incentive payouts, effectively forcing an internal balanced budget onto the process, even though no congressional mandate requires one for academic RVUs (a budget-neutral alternative is sketched after this list).
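On the first bullet's missing common language: before weights can even be debated, activities need a shared record format so that a paper logged in ORCID and one listed on a CV describe the same thing. Here is a minimal sketch of what one record might capture; every field name is a hypothetical choice, not an existing standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AcademicActivity:
    """One entry in a hypothetical CPT-like ledger for academic work."""
    activity: str      # e.g., "journal_article", "national_presentation"
    role: str          # e.g., "first_author", "senior_author", "co_director"
    domain: str        # "research", "education", "service", ...
    subtype: str = ""  # e.g., "basic_science" vs "health_services"
    when: date = field(default_factory=date.today)
    source: str = ""   # CV, institutional database, ORCID, Scopus, ...
```

Capturing role and subtype separately from the activity itself reflects the point above: the same deliverable can represent very different work.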
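On the second bullet, a toy version of the two steps borrowed from the original wRVU work might look like the following. The activity names, rater values, and the use of geometric means to pool ratio judgments are illustrative assumptions, not the published methodology.

```python
import math

def magnitude_scale(ratings, reference, ref_value=100.0):
    """Pool raters' magnitude estimates into one within-domain scale.

    ratings maps each activity to a list of rater estimates, where every
    rater judged activities relative to `reference` (pinned at ref_value).
    Geometric means are a conventional way to aggregate ratio judgments.
    """
    geo = {a: math.exp(sum(math.log(v) for v in vals) / len(vals))
           for a, vals in ratings.items()}
    factor = ref_value / geo[reference]
    return {a: v * factor for a, v in geo.items()}

def cross_link(scale_a, scale_b, shared_activity):
    """Re-express scale_b in scale_a's units via one activity rated on both
    scales, analogous to linking specialties in the original wRVU system."""
    k = scale_a[shared_activity] / scale_b[shared_activity]
    return {a: v * k for a, v in scale_b.items()}

# Hypothetical panels: research and education both rated "invited_lecture",
# which lets the education scale be expressed in research units.
research = magnitude_scale(
    {"first_author_paper": [100, 100, 100],
     "national_presentation": [40, 60, 50],
     "invited_lecture": [30, 35, 25]},
    reference="first_author_paper")
education = magnitude_scale(
    {"course_directorship": [100, 100, 100],
     "invited_lecture": [45, 55, 50]},
    reference="course_directorship")
education_in_research_units = cross_link(research, education, "invited_lecture")
```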
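And on the third bullet, a department that wanted a Medicare-style balanced budget without quantile cutoffs could, in principle, derive the conversion factor directly from a fixed incentive pool. This is a hypothetical alternative offered for contrast; the names and numbers are invented.

```python
def budget_neutral_payouts(arvus, budget):
    """Pay each member arvus * conversion_factor, with the conversion factor
    chosen so total payouts exactly equal the incentive budget (the academic
    analogue of Medicare's balanced-budget adjustment)."""
    total = sum(arvus.values())
    if total == 0:
        return {name: 0.0 for name in arvus}
    cf = budget / total  # dollars per academic RVU
    return {name: units * cf for name, units in arvus.items()}

# Hypothetical department with a fixed $50,000 pool: the conversion factor
# falls automatically as total aRVU production rises, and vice versa.
payouts = budget_neutral_payouts(
    {"Dr. A": 1200, "Dr. B": 800, "Dr. C": 500}, budget=50_000)
print(payouts)  # {'Dr. A': 24000.0, 'Dr. B': 16000.0, 'Dr. C': 10000.0}
```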
Final Thoughts
It is beyond the scope of this article to discuss the myriad physician compensation models in academic surgery, but recent work7 suggests that how surgeons are paid shapes culture and decision-making. The way an academic RVU system is structured may therefore affect far more than the number of grants and papers submitted. So let the buyer beware: as you venture into academic RVUs, you may get exactly what you pay for.
References
- Holmes EW, Burks TF, Dzau V, et al. Measuring contributions to the research mission of medical schools. Acad Med. 2000;75(3):303-313.
- Nutter DO, Bond JS, Coller BS, et al. Measuring faculty effort and contributions in medical education. Acad Med. 2000;75(2):199-207.
- Brigstock NM, Besner GE. Development of an academic RVU (aRVU) system to promote pediatric surgical academic productivity. J Pediatr Surg. 2022;57(1):93-99.
- LeMaire SA, Trautner BW, Ramamurthy U, et al. An academic relative value unit system for incentivizing the academic productivity of surgery faculty members. Ann Surg. 2018;268(3):526-533.
- Schroen AT, Thielen MJ, Turrentine FE, Kron IL, Slingluff CL, Jr. Research incentive program for clinical surgical faculty associated with increases in research productivity. J Thorac Cardiovasc Surg. 2012;144(5):1003-1009.
- Dunn AN, Walsh RM, Lipman JM, et al. Can an academic RVU model balance the clinical and research challenges in surgery? J Surg Educ. 2020;77(6):1473-1480.
- Finn CB, Syvyk S, Bergmark RW, et al. Perceived Implications of Compensation Structure for Academic Surgical Practice: A Qualitative Study. JAMA Surg. 2023.