Quality and Safety

25 Sep 2014

by: Christine Laronga, MD, FACS

As surgeons, we have been doing outcomes research (quality and safety) since the beginning of time, even if we have not always thought of it as such. Today, the most common example is the Morbidity & Mortality conference. (Breast) cancer will serve as my model, but any disease entity could be used equally well. In the 1st century A.D., a Greek physician, Leonides, performed the first operative management of breast cancer, the "escharotomy" method. The technique used a hot poker to make repeated incisions, burning the entire breast off the chest wall. The outcome was not good: most women died of infection after surgery.

So in the Renaissance era, we developed newer, sharper surgical instruments to remove the breast swiftly. Unfortunately, most women then died of exsanguination from this "guillotine" method. Fortunately, in the late 1800s anesthesia was developed and an appreciation of antisepsis took hold. Pioneers like Halsted could then safely and meticulously perform a radical mastectomy to cure breast cancer. The successful outcome of this operation opened the door to clinical trials: clinical trials that have shaped and molded the treatment of breast cancer for the last 50 years.

However, these medical advances were accompanied by a relentless growth in expenditures devoted to health care. All nations struggle with inefficiencies in their healthcare systems and a perceived lack of value. Value is a word often used interchangeably with quality, and the diagnosis of cancer is increasingly the focus of quality discussions. Cancer is an incident diagnosis, with most of the lifetime benefit of treatment accrued within the first year. As such, in 1999 the Institute of Medicine (IOM) recommended that cancer care be monitored using a core set of quality-of-care indicators. Quality-of-care indicators can encompass structural, process, and outcome measures. Process indicators have the advantage of being closely related to outcomes, easily modifiable, and able to provide clear guidance for quality improvement efforts.

Therefore, in order to improve the quality of cancer care, we need high-quality data, mechanisms to feed the information back to hospitals or practices, systems to act upon the data, and participation by the providers themselves. This can be done on a national, regional, or local level. For example, the American Society of Clinical Oncology (ASCO) established the National Initiative for Cancer Care Quality to develop and test a validated set of core process quality-of-care indicators that could be abstracted from the medical record. In 2006, ASCO established the Quality Oncology Practice Initiative (QOPI) to conduct ongoing assessments of these validated indicators within the individual oncology practices of ASCO members. The abstracted data are submitted via a web portal and analyzed in aggregate and by individual practice. QOPI provides a rapid and objective measurement of practice quality that allows comparison among practices and over time. Currently, more than 300 practice groups participate in QOPI, and participation over time has been highly correlated with improvement on performance measures.

ASCO is not the only national organization to examine the quality of cancer care. The American College of Surgeons has its Commission on Cancer (COC), which accredits more than 1,500 cancer programs that collectively treat more than 70% of all cancer patients in the United States. Accredited programs meet organizational and quality standards and maintain a registry of all patients who are diagnosed and/or receive initial treatment in that program. This registry, the National Cancer Data Base (NCDB), includes initial cancer stage, treatment data, follow-up data, and vital status. Currently, over 1 million new cancer cases are entered annually, adding to the 29 million cases already in follow-up. The primary focus of the database has been the retrospective evaluation of care, and to date more than 350 publications have been produced from it.

In 2005, the COC developed a set of quality measures for breast and colorectal cancer that could be assessed from cancer registry data. The National Quality Forum endorsed the COC measures in 2006 and re-endorsed them in the fall of 2012. Similar to QOPI, each practice can follow its performance on these measures over time and compare itself to other COC programs regionally or nationally. Each site can easily click on any one of these measures to identify which patients did not meet the standard. Understanding why the standard was not met allows development and implementation of quality improvement efforts; reassessment then becomes the next key step to determine the effect of those improvement plans.

The two previous examples were national efforts at quality outcomes research, but one could perhaps more easily conduct regional studies. For example, in 2004 my institution established the Florida Initiative for Quality Cancer Care (FIQCC), a consortium of 11 institutions (3 academic, 8 community) in Florida participating in a comprehensive, practice-based system of quality self-assessment across 3 cancer types: breast, colorectal, and non-small cell lung cancer. Our quality indicators were scripted based on the accepted QOPI, NCCN, and COC indicators, plus site-specific PI panel consensus indicators. An evaluation was done to assess adherence to the performance indicators among the sites, with an average of 33 quality measures examined per disease site. An abstractor trainer from Moffitt Cancer Center traveled to each of the 11 participating sites to train the site abstractor using sample charts. Quality control was maintained through audits performed by the abstractor trainer when each site was one-third and two-thirds complete. A random sample of medical record charts was abstracted at each site for patients first seen by a medical oncologist in 2006.

In 2007, the participating sites met for an annual conference where the results were disclosed. Each site knows only the letter by which it is represented but can see how it compares to the other participating Florida sites. Any quality indicator with adherence below 85% was discussed at length. Each site was then given homework: investigate why it fell below 85% adherence on a given indicator and enact its own quality improvement plan. To assess the success of those plans, a second random sample of medical record charts was abstracted at each site for patients first seen by a medical oncologist in 2009. When those results were disclosed at the annual conference, each site explained its quality improvement plan so that the other sites might benefit from the lessons learned.
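To make the adherence computation concrete, here is a minimal, hypothetical sketch of how abstracted chart data might be rolled up into per-site, per-indicator adherence rates and flagged against an 85% threshold. This is not the FIQCC's actual software; the records, indicator names, and flagging logic are my own assumptions based on the description above.

```python
from collections import defaultdict

# Each abstracted chart record: (site, indicator, met), where `met` is True if the
# chart satisfied that quality indicator. Records and field names are hypothetical.
records = [
    ("Site A", "ER/PR status documented before chemotherapy", True),
    ("Site A", "ER/PR status documented before chemotherapy", False),
    ("Site A", "Pain assessed at consultation", True),
    ("Site B", "ER/PR status documented before chemotherapy", True),
    ("Site B", "Pain assessed at consultation", True),
    ("Site B", "Pain assessed at consultation", False),
]

THRESHOLD = 0.85  # indicators below 85% adherence were discussed at length

# Tally charts meeting each indicator per site: (site, indicator) -> [met, total].
tallies = defaultdict(lambda: [0, 0])
for site, indicator, met in records:
    tallies[(site, indicator)][0] += int(met)
    tallies[(site, indicator)][1] += 1

# Report adherence and flag anything below the threshold for a quality improvement plan.
for (site, indicator), (met, total) in sorted(tallies.items()):
    rate = met / total
    flag = "  <-- below 85%, needs a quality improvement plan" if rate < THRESHOLD else ""
    print(f"{site} | {indicator}: {rate:.0%} ({met}/{total}){flag}")
```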
What we have learned so far is that performing outcomes research with regard to quality of care is no piece of cake. To be successful, you will need:

  • High quality data
  • Mechanisms to feed back the information to the participating practices or hospitals
  • Systems to act upon the data
  • Participation of providers
  • Ongoing re-assessment to monitor success of quality improvement plans and establish new plans of action

We also learned that there is no single "best" practice in terms of what quality improvement plan to implement. What worked well at one site may not work at another for various reasons. There is also no single "best" type of outcomes research to utilize. We must learn from each other. Two years ago, ASCO hosted its first Quality of Cancer Care Symposium, which was met with resounding success. Highlights of the meeting are included in the May issue of the Journal of Oncology Practice. Hopefully, attendees and readers will take away the importance of engaging in quality-of-care outcomes research regardless of the field of medicine. As surgeons, we can lead the charge. One limitation we have already identified is the lag time from data abstraction and analysis to feedback of results to the participating sites; this was evident in all 3 examples above. The delayed results may still improve quality and safety for future patients, but they do little for current patients.

Therefore, a key tenet of quality measurement must be timeliness. As such, the COC has developed and begun implementing the Rapid Quality Reporting System (RQRS). Data entry begins as soon after diagnosis as possible, which allows the clock to start for a given metric. For example, if chemotherapy should be administered within 4 months of definitive surgery, the RQRS will alert the facility of an approaching deadline if data documenting initiation of chemotherapy has not yet been received. This allows the program to intervene for the current patient, not just a future patient. Other advancements coming down the road include: 1) adding new standards for breast and colorectal cancer to the 6 already in place; 2) expanding to other disease sites, such as non-small cell lung, gastric, GE junction, and esophageal cancers; 3) increasing adoption of the RQRS by the 1,500+ participating hospitals (currently only about 25% have initiated the RQRS); 4) exploring ways to expand public reporting of quality data; and 5) partnering with the Livestrong Foundation to develop a tool for the RQRS that auto-populates an end-of-treatment summary report and survivorship plan.
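To illustrate the "clock" concept, here is a minimal, hypothetical sketch of how a deadline alert for adjuvant chemotherapy might work. This is not the actual RQRS implementation; the 120-day window, 30-day warning lead, and function name are assumptions drawn from the example above.

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical alert logic: chemotherapy should begin within roughly 4 months of
# definitive surgery. Window and lead time are assumed values for illustration.
CHEMO_WINDOW = timedelta(days=120)
WARNING_LEAD = timedelta(days=30)

def chemo_alert(surgery_date: date, chemo_start: Optional[date], today: date) -> str:
    deadline = surgery_date + CHEMO_WINDOW
    if chemo_start is not None:
        return "Chemotherapy initiation documented; metric satisfied."
    if today > deadline:
        return f"Deadline {deadline} has passed with no chemotherapy reported; review this case."
    if today >= deadline - WARNING_LEAD:
        return f"Alert: chemotherapy not yet reported; deadline approaching on {deadline}."
    return "No action needed yet."

# Example: surgery on 1 March 2014, no chemotherapy reported yet, checked on 10 June 2014.
print(chemo_alert(date(2014, 3, 1), None, date(2014, 6, 10)))
```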

Ultimately, the goal of all healthcare is to improve patient health outcomes. In this context, value is defined as the patient health outcomes achieved per dollar spent, a definition that integrates quality, safety, patient-centeredness, and cost containment. There is no one "best" method for outcomes research, just what works "best" in your institution's hands. The key is to engage in some kind of quality-of-care initiative in your respective discipline.
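As a purely illustrative sketch of that definition, value reduces to an outcomes-per-dollar ratio. The figures below, and the use of quality-adjusted life years as the outcome measure, are my own assumptions and are not taken from any study.

```python
# Hypothetical illustration of value = health outcomes achieved per dollar spent.
# Outcomes are expressed here as quality-adjusted life years (QALYs); the numbers
# are invented for illustration only.
outcomes_gained_qalys = 1.5   # health outcome achieved for one episode of care
total_cost_dollars = 60_000   # total dollars spent on that episode
value = outcomes_gained_qalys / total_cost_dollars
print(f"Value: {value:.6f} QALYs per dollar (or {1 / value:,.0f} dollars per QALY)")
```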

Christine Laronga, MD, FACS is a Surgical Oncologist at the Comprehensive Breast Program at the Center for Women’s Oncology at Moffitt Cancer Center and currently serves as the Treasurer for AWS.
