AP Discrepancy Rates - College of American Pathologists

Table 1. Definition of Discrepancy

Discrepancy: A discrepancy has occurred if there is any difference between the original interpretation and the interpretation after the second review. Discrepancies were further classified by cause into one of the following categories:

Change in margin status: The interpretation of the margin status was changed from benign to malignant or vice versa.

Change in categoric interpretation: An interpretation was changed from one categoric diagnosis, such as benign, to another categoric diagnosis, such as malignant. For purposes of this study, interpretations were classified within categories that were graded by their probability of a malignant clinical outcome (eg, a benign diagnosis was assigned a 1; atypical, 2; suspicious, 3; and malignant, 4). We considered a difference of 2 or more steps between the original and the review interpretation as a discrepancy (see the sketch following this table). For example, if the original diagnosis was benign and the review diagnosis was malignant, the difference in steps between these 2 diagnoses was 4 - 1 = 3, and the case was considered discrepant. If the step difference between the original and review interpretations was 1, we decided that a discrepancy had not occurred.

Change within the same category of interpretation: An interpretation was changed from one benign interpretation to another benign interpretation or from one malignant interpretation to another malignant interpretation. A change from one tumor type to another fell within this category. For example, if the original interpretation was adenocarcinoma and the review diagnosis was epithelioid sarcoma, the case was placed within this category of discrepancy.

Change in patient information: There was a change in the organ site, such as the left ovary to the right ovary.

Typographic error
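
For illustration, the 2-step rule in Table 1 can be expressed as a short check. The following is a minimal sketch in Python; the 4-point grading scale comes from the table itself, while the function and variable names are hypothetical.

```python
# Minimal sketch of the 2-step categoric interpretation rule (Table 1).
# The 4-point scale mirrors the study; the names are hypothetical.

GRADE = {"benign": 1, "atypical": 2, "suspicious": 3, "malignant": 4}

def is_categoric_discrepancy(original: str, review: str) -> bool:
    """True when the original and review interpretations differ by
    2 or more steps on the 4-point malignancy scale."""
    return abs(GRADE[original] - GRADE[review]) >= 2

# Worked example from the table: benign (1) vs malignant (4) is a
# 4 - 1 = 3 step difference, so the case is discrepant.
assert is_categoric_discrepancy("benign", "malignant")
# A 1-step difference (eg, benign vs atypical) is not counted.
assert not is_categoric_discrepancy("benign", "atypical")
```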

Table 2. Definitions of Effects of Discrepancies

Effect of discrepancy on patient management: Discrepancies were classified into the following categories based on patient outcome:

Harm (significant event): A discrepancy that resulted in patient harm (eg, inappropriate treatment, loss of life or limb, psychological event). The effect of the significant event on patient outcome was assessed using a 3-point Likert scale (1, severe effect; 2, moderate effect; 3, mild effect). The pathologists performing the review judged the significance of the event.

Near miss: A discrepancy that was detected before harm occurred, such as a discrepancy that was detected at a clinical pathologic conference before treatment was initiated.

No harm: A discrepancy that did not result in patient harm, such as a typographic error that had no bearing on patient management.
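
Read as a data model, Table 2 is a three-way outcome taxonomy with a severity sub-scale that applies only to harm events. A minimal sketch, with hypothetical class and member names:

```python
from enum import Enum

class DiscrepancyEffect(Enum):
    """Outcome taxonomy of Table 2; names are hypothetical."""
    HARM = "harm (significant event)"
    NEAR_MISS = "near miss"
    NO_HARM = "no harm"

# Severity applies only to HARM, on the study's 3-point Likert
# scale, as judged by the pathologists performing the review.
SEVERITY = {1: "severe effect", 2: "moderate effect", 3: "mild effect"}
```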

MATERIALS AND METHODS

Laboratories enrolled in the CAP's voluntary Q-Probes quality improvement program participated in this study in 2003. The Q-Probes program and the format for data collection and handling have been previously described in detail.21

Participants prospectively identified 100 consecutive surgical pathology or cytology specimens that were reviewed by a second pathologist after a first pathologist had signed out the case. To standardize the data-collection process across all participating laboratories, pertinent terms were defined (Tables 1 and 2). For each case, the participating laboratory recorded the specimen type (surgical pathology or cytology), organ or anatomic site (chosen from a specified list), primary reason for secondary review (chosen from a specified list), clarity of the report, presence or absence of a discrepancy (Table 1), effect of the discrepancy on patient outcome (Table 2), and modification of the report (if performed).
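
The per-case record just described amounts to a small fixed schema. A minimal sketch of those data elements, assuming hypothetical field names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CaseRecord:
    """One reviewed case; fields mirror the data elements recorded
    by each participating laboratory (field names are hypothetical)."""
    specimen_type: str             # "surgical pathology" or "cytology"
    organ_site: str                # chosen from a specified list
    review_reason: str             # primary reason for secondary review
    report_clear: bool             # clarity of the report
    discrepancy: bool              # presence or absence (Table 1)
    effect: Optional[str] = None   # effect on patient outcome (Table 2)
    report_modified: bool = False  # modification of report, if performed
```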

We subclassified discrepancies into several main groups of causes that have been previously described in the pathology literature.2 The detection of particular discrepancy subtypes depends partly on the method of secondary review, described in more detail later. For example, a change in margin status is more likely to be detected by a frozen-permanent section review, and a change in categoric interpretation is more likely to be detected by cytologic-histologic correlation review. We arbitrarily chose a disagreement of 2 steps as constituting a categoric interpretation discrepancy, although even a 1-step discrepancy is an error. Raab2 reported that 2-step discrepancies have a much greater probability of clinical significance compared with 1-step discrepancies. In addition, because interobserver variability studies have shown that 1-step discrepancies are much more common, we did not want to collect data on a large subset of cases that had little effect on patient care.2 For changes in categoric interpretation, the original and review diagnoses were recorded.

The taxonomy of the effect of a pathology discrepancy on patient outcome was based on the medical patient safety literature,23-26 which uses a taxonomy related to the effect of a failure of a planned action to be completed or the use of a wrong plan to achieve an aim.1 Diagnostic error does not fit neatly into the category of an "action" error, and the resulting patient outcome is often difficult to assess. In particular, distinguishing between a no-harm and a near-miss event may be problematic. We defined a near-miss event as an error that was detected before harm occurred; an example is a diagnostic error that was picked up at a conference (or by another means of secondary review) before a particular treatment protocol was initiated. In this case, if the error had not been detected, we assume that some degree of harm would have occurred. We recognize that harm (eg, psychologic) may still have occurred in this case, but we classified the event as a near miss, because the ability of the review pathologist to identify this type of harm was limited in this study. We defined a no-harm event as a diagnostic error that would not cause patient harm even if it had gone undetected. An example of a no-harm event was a typographic error, such as writing the word "brest" instead of "breast" in the diagnostic line.
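
The near-miss versus no-harm distinction described above reduces to two judgments made by the review pathologist: whether harm already occurred, and whether harm would have occurred had the error gone undetected. A minimal sketch of that decision logic, with hypothetical names:

```python
def classify_effect(harm_occurred: bool, would_harm_if_undetected: bool) -> str:
    """Apply the study's outcome taxonomy to a detected discrepancy.
    Both inputs are the review pathologist's subjective judgments."""
    if harm_occurred:
        return "harm (significant event)"
    if would_harm_if_undetected:
        # eg, an error caught at a conference before treatment began
        return "near miss"
    # eg, a typographic error with no bearing on patient management
    return "no harm"

# The "brest"/"breast" typo above: no harm even if it had been missed.
assert classify_effect(False, False) == "no harm"
```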

We defined a significant event as an error resulting in patient harm.1 We provided the review pathologist a 3-point Likert scale to grade the severity of this harm, recognizing that this grading was subjective. Grzybicki et al27 showed that even with more clearly defined criteria for patient harm, there is little agreement among pathologists as to the effect of an error on patient outcome; thus, although the subclassification of harm needs further study, we wanted to measure the general view of the participant institutions in assessing harm. We did not require the pathologist to perform a medical record review but only to assess the error severity based on the information available at the time of detection. In general, the pathologist had some degree of knowledge of the clinical outcome. This information varied depending on the method of detection; for example, more information may be available when a clinician requests a secondary review than when a cytologic-histologic correlation is performed.

Autopsy cases were the only anatomic pathology case type excluded from this study. Multiple specimens with a secondary review from the same case or specimens from different cases associated with the same patient were included in the study. Secondary review is the main method used to detect anatomic pathology errors. The recorded reasons why cases were reviewed were as follows: intradepartmental conference, request by clinician, interdepartmental conference, specified institutional quality

