
CAP Laboratory Improvement Programs

Patient Safety in Anatomic Pathology

Measuring Discrepancy Frequencies and Causes

Stephen S. Raab, MD; Raouf E. Nakhleh, MD; Stephen G. Ruby, MD

● Context.—Anatomic pathology discrepancy frequencies have not been rigorously studied.

Objective.—To determine the frequency of anatomic pathology discrepancies and the causes of these discrepancies.

Design.—Participants in the College of American Pathologists Q-Probes program self-reported the number of anatomic pathology discrepancies in their laboratories by prospectively performing secondary review (post–sign-out) of 100 surgical pathology or cytology specimens. Reasons for the secondary review included conferences, external review, internal quality assurance policy, and physician request.

Participants.—Seventy-four laboratories self-reported data.

Main Outcome Measures.—Frequency of anatomic pathology discrepancy; type of discrepancy (ie, change in margin status, change in diagnosis, change in patient information, or typographic error); effect of discrepancy on patient outcome (ie, no harm, near miss, or harm); and clarity of report.

Results.—The mean and median laboratory discrepancy frequencies were 6.7% and 5.1%, respectively. Forty-eight percent of all discrepancies were due to a change within the same category of interpretation (eg, 1 tumor type was changed to another tumor type). Twenty-one percent of all discrepancies were due to a change across categories of interpretation (eg, a benign diagnosis was changed to a malignant diagnosis). Although the majority of discrepancies had no effect on patient care, 5.3% had a moderate or marked effect on patient care.

Conclusions.—This study establishes a mean multi-institutional discrepancy frequency (related to secondary review) of 6.7%.

(Arch Pathol Lab Med. 2005;129:459–466)

Accepted for publication December 2, 2004.

From the Department of Pathology, University of Pittsburgh, Pittsburgh, Pa (Dr Raab); the Department of Pathology, St Luke's Hospital/Mayo Clinic, Jacksonville, Fla (Dr Nakhleh); and the Department of Pathology, Palos Community Hospital, Palos Heights, Ill (Dr Ruby).

The authors have no relevant financial interest in the products or companies described in this article.

Reprints: Stephen S. Raab, MD, Department of Pathology, University of Pittsburgh, UPMC Shadyside Hospital, 5150 Centre Ave, Pittsburgh, PA 15232 (e-mail: raabss@msx.upmc.edu).

The 1999 Institute of Medicine report increased the national awareness of medical errors and patient safety.1 Anatomic pathology errors are reported to occur in 1% to 43% of all anatomic pathology specimens,2-19 and this exceptionally wide range depends on the methods of detection and the definition of what counts as an error. On review of the literature, Raab2 estimated that the mean anatomic pathology error frequency ranged from 1% to 5%, although this frequency was largely based on studies using single-institution data. No large-scale, multi-institutional anatomic pathology error studies have been conducted, and information on the effect of anatomic pathology error on patient outcome is generally lacking.

Error detection in anatomic pathology most often depends on some form of secondary case review.2 Secondary case review has been built into some pathology quality assurance practices (eg, review of a set percentage of cases, intradepartmental "difficult case" conferences, cytologic-histologic correlation, or review of all malignancies). Secondary case review also occurs in hospital patient-centered conferences (eg, tumor board); external consultation practices; or at the behest of clinicians, who may initiate communication when the pathology report does not correlate with the clinical findings. An error detected by 1 of these processes may be referred to as a discrepancy, or a difference in interpretation or reporting between 2 pathologists.

Error detection frequencies based on the different methods of secondary review have been variably studied. For example, the College of American Pathologists (CAP) has intensively studied gynecologic cytologic-histologic correlation and reported that paired cervical-vaginal cytology-biopsy specimens had a sensitivity and specificity of 89.4% and 64.8%, respectively.20-22 Based on nongynecologic cytologic-histologic correlation data, Clary et al6 reported that 23% of interpretive discrepancies had a major impact on patient outcome. Other methods of secondary review have been studied in less detail. As a consequence, how these methods may be utilized to improve patient safety and be incorporated into laboratory continuous quality improvement programs is unknown.

The CAP's Q-Probes program has measured and defined a number of key quality indicators in anatomic and clinical pathology.21,22 This Q-Probes study is the first multi-institutional study to measure and to document anatomic pathology discrepancy frequencies and the effect of these discrepancies on patient outcome.



Table 1. Definition of Discrepancy

Discrepancy: A discrepancy has occurred if there is any difference between the original interpretation and the interpretation after the second review. Discrepancies were further classified by cause into one of the following categories:

Change in margin status: The interpretation of the margin status was changed from benign to malignant or vice versa.

Change in categoric interpretation: An interpretation was changed from one categoric diagnosis, such as benign, to another categoric diagnosis, such as malignant. For purposes of this study, interpretations were classified within categories that were graded by their probability of a malignant clinical outcome (eg, a benign diagnosis was assigned a 1; atypical, 2; suspicious, 3; and malignant, 4). We considered a difference of 2 or more steps between the original and the review interpretation as a discrepancy. For example, if the original diagnosis was benign, and the review diagnosis was malignant, the difference in steps between these 2 diagnoses was 4 − 1 = 3, and this case was considered discrepant. If the step difference between the original and review interpretations was 1, we decided that a discrepancy had not occurred.

Change within the same category of interpretation: An interpretation was changed from one benign interpretation to another benign interpretation or from one malignant interpretation to another malignant interpretation. A change from one tumor type to another fell within this category. For example, if the original interpretation was adenocarcinoma and the review diagnosis was epithelioid sarcoma, this case was placed within this category of discrepancy.

Change in patient information: There was a change in the organ site, such as the left ovary to the right ovary.

Typographic error
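The 2-step rule above amounts to a simple ordinal comparison. The following is a minimal sketch of that classification, assuming the 1-to-4 grading given in Table 1; the function and dictionary names are illustrative, not part of the study protocol.

```python
# Ordinal grades for categoric interpretations, per Table 1.
GRADES = {"benign": 1, "atypical": 2, "suspicious": 3, "malignant": 4}

def is_categoric_discrepancy(original: str, review: str) -> bool:
    """A difference of 2 or more steps between the original and the
    review interpretation counts as a categoric discrepancy."""
    return abs(GRADES[original] - GRADES[review]) >= 2

# benign (1) vs malignant (4): 4 - 1 = 3 steps -> discrepant
assert is_categoric_discrepancy("benign", "malignant")
# atypical (2) vs suspicious (3): 1 step -> not counted as discrepant
assert not is_categoric_discrepancy("atypical", "suspicious")
```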

Table 2. Definitions of Effects of Discrepancies

Effect of discrepancy on patient management: Discrepancies were classified into the following categories based on patient outcome:

Harm (significant event): A discrepancy that resulted in patient harm (eg, inappropriate treatment, loss of life or limb, psychological event). The effect of the significant event on patient outcome was assessed using a 3-point Likert scale (1 = severe effect, 2 = moderate effect, 3 = mild effect). The pathologists performing the review judged the significance of the event.

Near miss: A discrepancy that was detected before harm occurred, such as a discrepancy that was detected at a clinical pathologic conference before treatment was initiated.

No harm: A discrepancy that did not result in patient harm, such as a typographic error that had no bearing on patient management.

MATERIALS AND METHODS

Laboratories enrolled in the CAP's volunteer Q-Probes quality improvement program participated in this study in 2003. The Q-Probes program and the format for data collection and handling have been previously described in detail.21

Participants prospectively identified 100 consecutive surgical pathology or cytology specimens that were reviewed by a second pathologist after a first pathologist had signed out that case. In order to standardize the data-collection process across all participating laboratories, pertinent terms were defined (Tables 1 and 2). For each case, the participating laboratory recorded the specimen type (surgical pathology or cytology), organ or anatomic site (chosen from a specified list), primary reason for secondary review (chosen from a specified list), clarity of the report, presence or absence of a discrepancy (Table 1), effect of discrepancy on patient outcome (Table 2), and modification of report (if performed).
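To make the per-case data elements concrete, a record for one reviewed case might look like the hypothetical structure below; the field names are illustrative and are not taken from the Q-Probes input form.

```python
from dataclasses import dataclass

@dataclass
class CaseReview:
    # Hypothetical per-case record; fields mirror the data elements
    # listed above, but the names are illustrative only.
    specimen_type: str     # "surgical pathology" or "cytology"
    organ_site: str        # chosen from a specified list
    review_reason: str     # primary reason for secondary review
    report_clarity: str    # clear, mildly/moderately/markedly unclear
    discrepancy: bool      # present or absent (Table 1)
    outcome_effect: str    # no harm, near miss, or harm (Table 2)
    report_modified: bool  # whether the report was changed
```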

We subclassified discrepancies into several main groups of causes that have been previously described in the pathology literature.2 The detection of particular discrepancy subtypes depends partly on the method of secondary review, described in more detail later. For example, change in margin status is more likely to be detected by a frozen-permanent section review, and change in categoric interpretation is more likely to be detected by cytologic-histologic correlation review. We arbitrarily chose a disagreement of 2 steps as constituting a categoric interpretation discrepancy, although even a 1-step discrepancy is an error. Raab2 reported that 2-step discrepancies have a much greater probability of clinical significance compared with 1-step discrepancies. In addition, because interobserver variability studies have shown that 1-step discrepancies are much more common, we did not want to collect data on a large subset of cases that had little effect on patient care.2 For changes in categoric interpretation, the original and review diagnoses were recorded.

The taxonomy of the effect of a pathology discrepancy on patient outcome was based on the medical patient safety literature,23-26 which uses a taxonomy related to the effect of a failure of a planned action to be completed or the use of a wrong plan to achieve an aim.1 Diagnostic error does not fit neatly into the category of an "action" error, and the resulting outcome of the patient is often difficult to assess. In particular, distinguishing between a no-harm and a near-miss event may be problematic. We defined a near-miss event as an error that was detected before harm occurred; an example is when a diagnostic error was picked up at a conference (or another means of secondary review) before a particular treatment protocol was initiated. In this case, if the error had not been detected, we assume that some degree of harm would have occurred. We recognize that harm (eg, psychologic) may still have occurred in this case, but we classified this event as a near miss, because the ability of the review pathologist to identify this type of harm was limited in this study. We defined a no-harm event as occurring when a diagnostic error occurred that would not cause patient harm even if the error had been undetected. An example of a no-harm event was a typographic error, such as writing the word "brest" instead of "breast" in the diagnostic line.

We defined a significant event as an error resulting in patient harm.1 We provided the review pathologist a 3-point Likert scale to grade the severity of this harm, recognizing that this grading was subjective. Grzybicki et al27 showed that even with more well-defined criteria for patient harm, there is little agreement among pathologists as to the effect of an error on patient outcome; thus, although subclassifying harm needs further study, we wanted to measure the general view of the participant institutions in assessing harm. We did not require the pathologist to perform medical record review but only to assess the error severity based on the information available at the time of detection. In general, the pathologist had some degree of knowledge of the clinical outcome. This information varied depending on the method of detection; for example, more information may be available when a clinician requests secondary review than when a cytologic-histologic correlation is performed.

Autopsy cases were the only anatomic pathology case type excluded from this study. Multiple specimens with a secondary review from the same case or specimens from different cases associated with the same patient were included in the study. Secondary review is the main method used to detect anatomic pathology errors. The recorded reasons why cases were reviewed were as follows: intradepartmental conference, request by clinician, interdepartmental conference, specified institutional quality assurance policy, and extradepartmental review. These methods encompass the majority of forms of secondary review, although participants also could choose the category of "other" if none of these methods applied. We chose to examine multiple methods, because single institutions often do not review a sufficient number of cases using a single method for statistical significance testing in the time frame provided by this Q-Probes study. We also wanted to determine if discrepancies were detected at different frequencies and had different clinical import depending on the method of detection. Institutional quality assurance policies include cytologic-histologic correlation, random review, and frozen-permanent section correlation. We recognize that these methods of review detect only a fraction of discrepancies (because in most institutions, the majority of cases do not undergo secondary review) and exclude those errors detected prior to sign-out. However, some of these forms of secondary review are the most likely means to detect clinically significant error, the subcategory of error that some authors believe is the most important to track and eradicate.

The "clarity of the report" question referred to how understandable the original report was perceived to be from the standpoint of the review pathologist. The review pathologist determined the report clarity using a 4-point Likert scale (ie, clear, mildly unclear, moderately unclear, markedly unclear). A limitation in this assessment was that clinicians, who were the users of the diagnostic information, did not perform the analysis, and previous authors have indicated discrepancies between clinician and pathologist impression of report clarity.28 We also recognize that the clarity assessment was subjective, because definitive criteria of clarity were not provided. The lack of clarity may be an example of error, particularly for reports assessed to be moderately or markedly unclear, because this lack of clarity could result in improper patient management.

Laboratories could choose 1 of 3 options regarding computerized report modification after a discrepancy was detected: (1) the original report was deleted and replaced with a new report; (2) the original report was marked with a qualifier, and a new report was appended; or (3) the original report was not marked or modified, but a new report was appended.

Institutional demographic and practice variable information was obtained by completion of a questionnaire. These data included annual specimen volume, number of institutional practicing pathologists, type and frequency of institutional review conferences (eg, breast, chest, or endocrine conference), and institutional quality assurance methods of secondary review. The discrepancy frequency (expressed as a percentage) was calculated for the aggregate data and for each institution. The discrepancy frequency was calculated as the number of discrepancies divided by the total number of cases reviewed, multiplied by 100. Discrepancies and report clarity assessments were subclassified by organ type (and further broken down by effect on patient outcome).
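As a check of this formula, the aggregate counts reported below in the Results section reproduce the overall rate; the snippet only restates that arithmetic.

```python
# Discrepancy frequency = discrepancies / cases reviewed x 100,
# using the aggregate counts reported in Results.
discrepancies = 415
cases_reviewed = 6186
print(f"{discrepancies / cases_reviewed * 100:.1f}%")  # -> 6.7%
```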

The correlation of the main indicator variable (discrepancy frequency) was assessed for each of the predictor variables separately by using nonparametric Wilcoxon rank sum or Kruskal-Wallis tests. The χ² goodness-of-fit test was used to test for associations between the presence of a discrepancy and other case-level variables. Statistically significant associations are defined at the P < .05 level. The reason for case review was correlated with the effect of the discrepancy on patient outcome.
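As an illustration of how such comparisons might be run (the paper does not specify its software), here is a sketch using scipy; the group values are invented, and the 2 × 2 table uses the cytology and surgical pathology counts from the Results section.

```python
from scipy import stats

# Hypothetical per-institution discrepancy frequencies (%) split by a
# two-level predictor variable; the values below are invented.
group_a = [6.1, 4.9, 7.3, 5.5]
group_b = [2.0, 8.4, 5.1, 6.7, 3.3]
print(stats.mannwhitneyu(group_a, group_b))  # Wilcoxon rank sum test
print(stats.kruskal(group_a, group_b))       # Kruskal-Wallis test

# Association between specimen type and discrepancy presence, using
# counts reported in Results (discrepant vs nondiscrepant cases).
table = [[55, 789],     # cytology: 55 of 844 cases discrepant
         [356, 4899]]   # surgical pathology: 356 of 5255 discrepant
print(stats.chi2_contingency(table))
```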

Some participating institutions did not answer all of the questions on the demographics form or on the input forms. These institutions were excluded only from the tabulations and the analyses that required the missing data element.

RESULTS

A total of 74 institutions submitted data for this study. Most institutions (87.8%) were located in the United States, with the remainder located in Canada (n = 7), Australia (n = 1), and Saudi Arabia (n = 1). Of the participating institutions, 31% were teaching hospitals, and 23% had a pathology residency program. The Joint Commission on Accreditation of Healthcare Organizations inspected the majority of the institutions (68%) participating in this study, and the CAP inspected 82% of laboratories contributing data. The participants' institutional sizes were as follows: 39.7% had fewer than 150 beds; 39.7%, 151 to 300 beds; 9.5%, 301 to 450 beds; 3.2%, 451 to 600 beds; and 7.9%, more than 600 beds. Private nonprofit institutions comprised 54.9% of the institutions; 11.1% were private for-profit institutions; 9.5% were state, county, or city hospitals; 6.3% were university hospitals; 3.2% were federal governmental institutions; and 15.9% were other. Fifty-three percent of the institutions were within a city, 28.1% were suburban, and 15.6% were rural. The annual mean number (and SD) of surgical pathology, nongynecologic cytology, and gynecologic specimens per institution were 16 241 (19 567), 1898 (2376), and 21 147 (44 903), respectively. The mean and median numbers of pathologists at each institution were 6 and 4, respectively.

The 74 institutions performed secondary review of 6186 anatomic pathology specimens (5268 surgical pathology specimens and 847 cytology specimens), and each institution collected data on a range of 2 to 100 specimens (median, 99 specimens). In aggregate, 415 discrepancies were reported, and the overall mean anatomic pathology discrepancy frequency was 6.7%. The overall mean surgical pathology discrepancy frequency was 6.8% (356 discrepancies of 5255 reviewed cases), and the overall mean cytology discrepancy frequency was 6.5% (55 discrepancies of 844 reviewed cases); neither of the specimen types had a higher discrepancy frequency (P = .78). The distribution of the anatomic pathology discrepancy frequencies is listed in Table 3. Higher percentile ranks indicate lower discrepancy frequencies.

Table 3. Percentile Distribution of Anatomic Pathology Discrepancy Rate

                           All Institutional Percentiles
                      N     10th    25th    50th (Median)    75th    90th
Discrepancy rate, %   74    21.0    10.0         5.1          1.0     0.0

The number of reviewed specimens, overall discrepancy frequency, and the discrepancy classification by organ type are shown in Table 4. The most common organ types reviewed in this study were female genital tract and breast. None of the organ types had a higher frequency of discrepancy than other organ types (P = .15). For the organ types with higher volumes reviewed (>200 cases), a change in categoric interpretation (ie, an error type more likely to be associated with harm) occurred more frequently in the female genital tract, male genital tract, and lymph node. A change in margin status occurred most frequently in breast specimens.

In Table 5, the effect of the discrepancy on patient outcome and the report modification in response to a discrepancy are listed by organ type. Some form of harm was seen in the majority of organs, although specimen type (cytology or surgical pathology) or organ type did not correlate with effect on patient outcome (P = .73 and P = .83, respectively). Harm was observed in 20.8% (11 cases) of breast specimens and in 25.3% (21 cases) of female genital tract specimens in which a discrepancy was detected. Neither specimen type nor organ type correlated with the report modification in response to a discrepancy (P = .15 and P = .14, respectively).



Table 4. Number of Specimens and Discrepancy by Organ Type
(Columns: specimens, No. (% of total); overall discrepancy, %; then the distribution of discrepancies, %, by cause: change in margin, change in categoric interpretation, change within same category, change in patient information, typographic error)

Organ                         Specimens       Overall   Margin   Categoric   Same Cat   Pt Info   Typo
Genital, female               982 (15.9)      7.1       2.3      28.7        36.8       18.4      13.8
Breast                        796 (12.9)      8.3       10.9     12.5        42.2       10.9      23.4
Lung                          463 (7.5)       5.0       0        13.6        50.0       4.6       31.8
Genital, male                 355 (5.7)       7.1       0        41.7        54.1       0         4.2
Soft tissue                   345 (5.6)       7.5       0        12.5        66.7       8.3       12.5
Lymph node                    288 (4.7)       5.9       5.9      23.5        52.9       0         17.7
Hepatobiliary                 240 (3.9)       6.7       0        13.3        53.3       13.3      20.0
Urinary tract                 181 (2.9)       7.2       0        30.8        53.9       0         15.4
Pharynx                       141 (2.3)       5.0       0        28.6        42.9       0         28.6
Endocrine                     125 (2.0)       8.0       0        10.0        50.0       10.0      30.0
Bone marrow                   107 (1.7)       6.6       0        14.3        85.7       0         0
Bone                          99 (1.6)        2.0       0        0           50.0       0         50.0
Neuropathology                88 (1.4)        6.8       0        16.7        83.3       0         0
Kidney                        55 (0.9)        5.5       0        33.3        33.3       0         33.3
Pancreas                      31 (0.5)        6.5       0        50.0        0          0         50.0
Salivary gland                29 (0.5)        3.5       0        0           100.0      0         0
Spleen                        11 (0.2)        9.1       0        0           0          0         100.0
Gastrointestinal and other    1841 (29.8)     5.6       5.0      19.0        48.0       8.0       20.0
Total                         6162            6.7       3.7      21.0        47.7       9.1       18.5

Table 5. Effect of Discrepancy on Patient Management and Pathology Response by Organ Type
(Columns: effect on patient outcome, % of that organ's discrepancies: marked harm, moderate harm, mild harm, near miss, no harm; then report change: yes, %; no, %)

Organ                         Marked   Moderate   Mild   Near Miss   No Harm   Change Yes   Change No
Genital, female               2.4      6.0        16.9   4.8         69.9      58.3         41.7
Breast                        1.9      7.6        11.3   13.2        66.0      62.3         37.7
Lung                          5.3      0          0      21.1        73.7      33.3         66.7
Genital, male                 4.4      0          8.7    13.0        73.9      56.5         43.5
Soft tissue                   8.0      0          0      12.0        80.0      58.3         41.7
Lymph node                    0        6.2        0      18.8        75.0      56.3         43.7
Hepatobiliary                 0        0          7.1    0           92.9      42.9         57.1
Urinary tract                 7.7      0          23.1   7.7         61.5      38.5         61.5
Pharynx                       0        0          14.3   0           85.7      14.3         85.7
Endocrine                     0        0          0      11.1        88.9      33.3         66.7
Bone marrow                   0        0          28.6   0           71.4      42.9         57.1
Bone                          0        0          0      0           100.0     100.0        0
Neuropathology                0        0          16.7   0           83.3      50.0         50.0
Kidney                        0        0          0      33.3        66.7      0            100.0
Pancreas                      0        0          50.0   0           50.0      100.0        0
Salivary gland                0        0          0      0           100.0     0            100.0
Spleen                        0        0          0      0           100.0     100.0        0
Gastrointestinal and other    0        2.1        12.6   6.3         79.0      54.1         45.9
Total                         2.1      3.2        11.3   8.7         74.7      53.3         46.7

In Table 6, the clarity of the report is listed by organ type. Markedly and moderately unclear reports were seen infrequently and only in a few specimen types, such as female genital tract, breast, and lung. Because of the large number of cells with a value of 0, a χ² goodness-of-fit test was not performed.

In Table 7, the discrepancy type, original and review interpretations for categoric discrepancies, and the effect of the discrepancy on patient outcome are shown. A change in the same category of diagnosis was the most common discrepancy detected. For changes in categoric interpretation, the review diagnosis tended to be shifted downward to a benign or upward to a malignant diagnosis, and there were fewer nondefinitive (atypical or suspicious) diagnoses compared with the original diagnosis. When a discrepancy occurred, the most common classification, based on patient outcome, was a no-harm event.

Anatomic pathology discrepancy specimen-centered variables, including specimen type and origin, discrepancy type, primary reason for review, the effect of discrepancy on patient outcome, and the response to a discrepancy in the form of a report change, were evaluated to identify any associations. The statistically significant associations are shown in Table 8. A request for review directed by a clinician was much more likely to be associated with a discrepancy than all other reasons for review. If a discrepancy occurred, a change in categoric interpretation was more likely to be seen in cytology specimens compared with surgical pathology specimens and related to extradepartmental review compared with all other reasons for review. If a near-miss event occurred, the reason for case review was more likely to be extradepartmental review compared with all other reasons for review.


Table 6. Clarity of Report by Organ Type

Organ                         Clear, %   Mildly Unclear, %   Moderately Unclear, %   Markedly Unclear, %
Genital, female               97.6       1.9                 0.3                     0.2
Breast                        96.5       2.1                 1.3                     0.1
Lung                          97.0       2.8                 0.2                     0
Genital, male                 98.6       1.4                 0                       0
Soft tissue                   97.7       2.1                 0                       0.3
Lymph node                    97.6       2.4                 0                       0
Hepatobiliary                 96.6       3.4                 0                       0
Urinary tract                 98.3       1.7                 0                       0
Pharynx                       98.6       1.4                 0                       0
Endocrine                     99.2       0.8                 0                       0
Bone marrow                   94.4       3.8                 1.9                     0
Bone                          99.0       1.0                 0                       0
Neuropathology                95.4       4.6                 0                       0
Kidney                        96.2       3.8                 0                       0
Pancreas                      96.8       3.2                 0                       0
Salivary gland                100.0      0                   0                       0
Spleen                        90.9       9.1                 0                       0
Gastrointestinal and other    97.4       2.4                 0.2                     0
Total                         97.4       2.3                 0.3                     0.07

Table 7. Discrepancy Type, the Original and Review Diagnoses for Categoric Discrepancies, and the Effect of Discrepancy on Patient Outcome

                                                     Specimens, No. (%)
All discrepancies                                    415 (100)
Discrepancy type
  Change within same category                        194 (47.8)
  Change in categoric interpretation                 85 (20.9)
  Typographic error                                  75 (18.5)
  Change in patient information                      37 (9.1)
  Change in margin status                            15 (3.7)
Original interpretation for categoric discrepancies
  Benign                                             25 (29.8)
  Atypical                                           27 (32.1)
  Suspicious                                         16 (16.0)
  Malignant                                          16 (16.0)
Reviewed interpretation for categoric discrepancies
  Benign                                             28 (33.3)
  Atypical                                           21 (25.9)
  Suspicious                                         7 (8.3)
  Malignant                                          28 (33.3)
Effect on patient outcome
  Harm                                               63 (16.6)
    Marked                                           8 (2.1)
    Moderate                                         12 (3.2)
    Mild                                             43 (11.3)
  Near miss                                          33 (8.7)
  No harm                                            283 (74.7)

Table 8. Statistically Significant Associations

                                                        No. of Specimens   No. With Discrepancy
Primary reason for review (P < .001)
  Request by clinician                                  348                80
  All other reasons                                     5812               335
Discrepancy type: change in categoric interpretation
  Specimen type (P < .001)
    Cytology                                            54                 27
    Surgical pathology                                  349                56
  Primary reason for secondary review (P = .03)
    Extradepartmental review                            89                 26
    All other reasons for review                        317                59
Discrepancy type: change in same category of diagnosis
  Specimen type (P < .001)
    Cytology                                            54                 15
    Surgical pathology                                  349                179
Discrepancy type: change in patient information
  Primary reason for secondary review (P < .001)
    Request by clinician                                79                 24
    All other reasons for review                        327                13
Effect on patient outcome: near miss
  Primary reason for secondary review (P < .001)
    Extradepartmental review                            80                 15
    All other reasons for review                        299                18
Effect on patient outcome: no harm
  Primary reason for secondary review (P < .001)
    Intradepartmental review                            44                 40
    All other reasons for review                        335                243
Response to a discrepancy: report change
  Primary reason for secondary review (P < .001)
    Interdepartmental conference                        43                 33
    Request by clinician                                77                 53
    All other reasons for review                        250                103

Table 9 shows that the reason for case review correlated with patient outcome in discrepant cases (P = .02). Harm occurred more frequently in discrepant cases that were reviewed at the request of a clinician (23.5%) and interdepartmental conference (25.0%). Clinician-directed review was the most common method that detected a discrepancy (23.0% of all cases reviewed). The majority of discrepant cases detected at an intradepartmental conference were associated with no-harm events.

Table 9. Reason for Review Correlated With Effect on Patient Outcome

                                        Errors Detected, No.    Harm,       Near Miss,   No Harm,     Total
Reason for Review                       (% of Cases Reviewed)   No. (%)     No. (%)      No. (%)
Intradepartmental review                47 (7.1)                2 (5.3)     2 (5.3)      34 (89.5)    38
Request by clinician                    80 (23.0)               12 (23.5)   3 (5.9)      36 (70.6)    51
Interdepartmental review                48 (4.8)                8 (25.0)    2 (6.3)      22 (68.8)    32
Selected by quality assurance review    127 (4.3)               13 (14.6)   8 (9.0)      68 (76.4)    89
Extradepartmental review                92 (8.6)                8 (14.0)    12 (21.1)    37 (64.9)    57
Total                                                           43          27           197          267


Of the 74 institutions, the number that had a conference devoted to breast review was 33; chest, 21; endocrine, 8; gastrointestinal, 22; general surgical, 29; genitourinary tract, 18; gynecologic, 26; head and neck, 16; hematopathology, 20; liver, 15; renal, 18; and tumor board, 60. Institutional quality assurance practices were measured as well: 52 institutions had an intradepartmental conference for difficult cases; 31 reviewed a percentage of cases after sign-out; 22 reviewed all malignancies before sign-out; 19 reviewed a percentage of cases before sign-out; 6 reviewed all malignancies after sign-out; and 1 reviewed all cases before sign-out. Of the institutions that made changes to the reports after an error, 43 issued an amended report and did not retrieve the original report; 3 retrieved the original report, stamped it "in error," and filed the report in the chart; 3 destroyed the original reports; and 3 handled the change using other methods.

COMMENT

This is the first study to determine a baseline anatomic pathology discrepancy frequency across multiple pathology laboratories. Based on secondary pathologist review, the mean anatomic pathology discrepancy frequency (based on cases reviewed through several different methods) was 6.7%, and the variability across laboratories was striking, with the 25th and 75th percentiles being 10.0% and 1.0%, respectively.

Discrepancy represents one form of error, and based on literature review of a number of error-detection methods, Raab2 estimated that the mean laboratory error frequency ranged from 1% to 5%. The discrepancy frequency established in this Q-Probes study is based on review of selected cases (a targeted group of cases studied), a bias that may overestimate overall laboratory error. Error frequencies partly depend on the method of case detection, and the more thoroughly one looks for error, the more frequently one will find it.1,29 As expected, some of the secondary review methods used in this study detected more error than other methods; for example, clinician-directed review detected a discrepancy (23.0% of cases) more frequently than random review (4.3%). In general, methods of case review that involve clinical input are a better means to detect error but are harder to perform.




Pathologists have studied error frequency using intralaboratory quality assurance methods in some detail. The most commonly studied review method is correlation, such as frozen-permanent section correlation or cytologic-histologic correlation, a form of review mandated by the Clinical Laboratory Improvement Amendments of 1988.20 Using this review process and the total number of cases as the denominator, Clary et al6 reported that 2.26% and 0.44% of nongynecologic cytology and histology cases were discrepant. This Q-Probes study showed that the error frequency based on extradepartmental conference review was 7.1%, whereas Raab et al30 reported an error frequency of 8.9%, with severe significant events occurring in 7.0% of all errors. McBroom and Ramsay11 reported that 9.0% of cases reviewed at a clinicopathologic conference had a change in diagnosis. This similarity in error frequency across studies and across many institutions may indicate a true benchmark.

Benchmarking and reducing anatomic pathology error frequency clearly is just beginning, and prior to this study, the benchmarks were based on single-institution or anecdotal data.2 The correlation of error frequencies with particular secondary review practices and existing laboratory error reduction programs is largely unknown. Lack of subspecialty expertise may contribute to higher error frequencies, although error data detected using extradepartmental specialty academic institution review may be biased and may overreport error.4,7,31 Underreporting of error because of biased review methods, lack of understandable error taxonomy, and individual fears invariably exists, but this has not been thoroughly studied in pathology laboratories.

Some leading patient safety researchers, such as Resar et al,32 have argued that error prevention programs should target errors that have an effect on patient outcome, rather than all errors. These Q-Probes study data show that the majority of anatomic pathology discrepancies do not result in harm, similar to the data reported in nonpathology fields.1 Determining the effect of a pathology error on patient outcome is challenging because of contact and temporal barriers between pathology and clinical care. Thorough medical record review generally is not performed after a pathology error has occurred, and health care systems invariably lack personnel trained in the intricacies of the triad of pathology reporting, patient safety, and assessing patient outcomes. Grzybicki et al27 reported that even with data from medical record review, pathologists showed poor agreement in determining if harm actually occurred. We used pathologist self-assessment to determine error severity, and acknowledging the biases inherent in this process, we found that 16.6% of all errors resulted in some form of patient harm. This indicates that 1.1% of all anatomic pathology cases that underwent secondary review were associated with a harmful significant event. Because secondary review processes tend to be varied across institutions, the probability is high that a relatively large percentage of pathology errors resulting in harm go undetected.
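The 1.1% figure follows directly from the two percentages just quoted; this snippet only restates that arithmetic.

```python
# Mean discrepancy frequency times the fraction of discrepancies
# that caused some form of harm, both as reported in this study.
discrepancy_rate = 0.067  # 6.7% of reviewed cases were discrepant
harm_fraction = 0.166     # 16.6% of discrepancies involved harm
print(f"{discrepancy_rate * harm_fraction:.1%}")  # -> 1.1%
```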

Statistical significance testing did not show that any body site was most likely to be associated with an error. However, some organs (eg, female genital tract and breast) tended to have higher associations with a discrepancy resulting in clinical harm compared with other organs. Using medical chart review to determine clinical follow-up of errors detected through cytologic-histologic correlation, Clary et al6 reported that the most common cytology specimens associated with harm were pulmonary and breast specimens. In a study examining errors associated with malpractice claims, Troxel and Sabella15 also identified that diagnostic errors in breast cytology specimens were associated with harm; in addition, they identified other organ or specimen types, such as prostate needle biopsies, Papanicolaou tests, and melanocytic skin lesions, as being associated with errors resulting in harm. In this Q-Probes study, discrepancies detected in cytology specimens were more likely (compared with surgical pathology specimens) to have a 2-step change in diagnosis, and this change often meant that the original diagnosis was either a false-negative or false-positive diagnosis. These types of errors are more likely to have a clinical effect than other types of errors.2 Harm was more likely to be associated with cases reviewed at the behest of a clinician and cases sent for extradepartmental review.

Although researchers have studied anatomic pathology diagnostic variability, the relationship of variability and error is not sufficiently addressed in the pathology literature. Strictly speaking, variability is a form of error, and secondary case review is simply a means to unearth differences in diagnosis or errors. In practice, some pathologists would like to maintain that "true" diagnostic errors are errors that most reasonable pathologists would agree are errors. However, establishing the "true" diagnosis is a complex and controversial task (eg, do we use a panel of practicing pathologists or experts?) that has not been well studied in anatomic pathology. Our study did not address methods of establishing the accuracy of the original and review diagnoses. However, some methods of secondary review were performed with knowledge of clinical outcome, which, although biasing the review pathologist (compared with the original pathologist, who lacked this bias), provides a window on the actual effect of the diagnosis.

This Q-Probes study recorded errors other than interpretive errors. Typographic errors and changes in patient information infrequently result in harm, but when harm occurs, it may be severe, with far-reaching consequences (eg, switching of patient specimens). Pathology laboratories place checks in the system in order to limit these error types, although these safeguards are generally not formally shared across laboratories. The frequency of marked harm as a result of these errors is known only through small studies or anecdotally,9,33,34 resulting in a lack of widespread system learning. Our study was not detailed enough to drill down into these error types.

The lack of report clarity is an example of error that is difficult to quantify and falls within the realm of communication error. Poor communication is an important source of error in clinical medicine and may result in severe harm35; Dovey et al25 reported that 5.8% of family practice errors were a result of miscommunication. Report clarity is subjective, and Powsner et al28 reported that surgeons misunderstood 30% of pathology reports. As expected, in this Q-Probes study, pathologists believed that the majority of reports were clear, and less than 1% were perceived as markedly unclear. This confirms the conclusion by Powsner et al28 that a communication gap exists between pathologists and clinicians. The fact that clinicians request that a certain percentage of cases be reviewed underscores a potential communication problem but shows as well a functioning means of detecting error.

Conclusions

Using secondary pathologist review, the mean anatomic pathology diagnostic discrepancy frequency was 6.7%. More than 1% of all reviewed anatomic pathology cases may be associated with an error associated with patient harm.

The College of American Pathologists provided financial support. The statistical reviewer was Molly Walsh, PhD, College of American Pathologists.

References

1. Kohn LT, Corrigan JM, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 1999.
2. Raab SS. Improving patient safety by examining pathology errors. Clin Lab Med. 2004;24:849–863.
3. Adad SJ, Souza MA, Etchebehere RM, et al. Cyto-histological correlation of 219 patients submitted to surgical treatment due to diagnosis of cervical intraepithelial neoplasia. Sao Paulo Med J. 1999;117:81–84.
4. Arbiser ZK, Folpe AL, Weiss SW. Consultative (expert) second opinions in soft tissue pathology: analysis of problem-prone diagnostic situations. Am J Clin Pathol. 2001;116:473–476.
5. Chan YM, Cheung AN, Cheng DK, et al. Pathology slide review in gynecologic oncology: routine or selective? Gynecol Oncol. 1999;75:267–271.
6. Clary KM, Silverman JF, Liu Y, et al. Cytohistologic discrepancies: a means to improve pathology practice and patient outcomes. Am J Clin Pathol. 2002;117:567–573.
7. Hahm GK, Niemann TH, Lucas JG, et al. The value of second opinion in gastrointestinal and liver pathology. Arch Pathol Lab Med. 2001;125:736–739.
8. Furness PN, Lauder I. A questionnaire-based survey of errors in diagnostic histopathology throughout the United Kingdom. J Clin Pathol. 1997;50:457–460.
9. Hocking GR, Niteckis N, Cairns BJ, Hayman JA. Departmental audit in surgical anatomical pathology. Pathology. 1997;29:418–421.
10. Labbe S, Petitjean A. False negatives and quality assurance in cervicouterine cytology. Ann Pathol. 1999;19:457–462.
11. McBroom HM, Ramsay AD. The clinicopathological meeting: a means of auditing performance. Am J Surg Pathol. 1993;17:75–80.
12. Nakhleh RE, Zarbo RJ. Amended reports in surgical pathology and implications for diagnostic error detection and avoidance. Arch Pathol Lab Med. 1998;122:303–309.
13. Ramsay AD, Gallagher PJ. Local audit of surgical pathology: 18 months experience of peer-review–based quality assessment in an English teaching hospital. Am J Surg Pathol. 1992;16:476–482.
14. Safrin RE, Bark CJ. Surgical pathology signout: routine review of every case by a second pathologist. Am J Surg Pathol. 1993;17:1190–1192.
15. Troxel DB, Sabella JD. Problem areas in pathology practice uncovered by a review of malpractice claims. Am J Surg Pathol. 1994;18:821–831.
16. Whitehead ME, Fitzwater JE, Lindley SK, et al. Quality assurance of histopathologic diagnoses: a prospective audit of 3 thousand cases. Am J Clin Pathol. 1984;81:487–491.
17. Zuk JA, Kenyon WE, Myskow MW. Audit in histopathology: description of an internal quality assessment scheme with analysis of preliminary results. J Clin Pathol. 1991;44:10–15.
18. Lind AC, Bewtra C, Healy JC, et al. Prospective peer review in surgical pathology. Am J Clin Pathol. 1995;104:560–566.
19. Zardawi IM, Bennett G, Jain S, et al. Internal quality assurance activities of a surgical pathology department in an Australian teaching hospital. J Clin Pathol. 1998;51:695–699.
20. Jones BA, Novis DA. Cervical biopsy-cytology correlation: a College of American Pathologists Q-Probes study of 22439 correlations in 348 laboratories. Arch Pathol Lab Med. 1996;120:523–531.
21. Zarbo RJ. Monitoring anatomic pathology practice through quality assurance measures. Clin Lab Med. 1999;19:713–742.
22. Raab SS, Jones BA. Q-TRACKS: Gynecologic Cytologic-Histologic Correlation: 2003 Annual Summary. Northfield, Ill: College of American Pathologists; 2003.
23. Brixey J, Johnson TR, Zhang J. Evaluating a medical error taxonomy. Proc AMIA Symp. 2002:71–75.
24. McNutt RA, Abrams RI. A model of medical error based on a model of disease: interactions between adverse events, failures, and their errors. Qual Manag Health Care. 2002;10:23–28.
25. Dovey SM, Meyers DS, Phillips RL Jr, et al. A preliminary taxonomy of medical errors in family practice. Qual Saf Health Care. 2002;11:233–238.
26. Tamuz M, Thomas EJ, Franchois KE. Defining and classifying medical error: lessons for patient safety reporting systems. Qual Saf Health Care. 2004;13:13–20.
27. Grzybicki DM, Vrbin-Turcsanyi CM, Janosky J, Raab SS. Examining pathology errors to improve patient safety: pathologists don't agree on the identification of errors due to pathologist misinterpretation. Paper presented at: AcademyHealth Annual Research Meeting; June 8, 2004; San Diego, Calif.
28. Powsner SM, Costa J, Homer RJ. Clinicians are from Mars and pathologists are from Venus. Arch Pathol Lab Med. 2000;124:1040–1046.
29. Nieva VF, Sorra J. Safety culture assessment: a tool for improving patient safety in healthcare organizations. Qual Saf Health Care. 2003;12(suppl 2):ii17–ii23.
30. Raab SS, Clary KM, Grzybicki DM. Improving pathology practice by review of cases presented at chest conference [abstract]. Mod Pathol. 2003;67:312A–313A.
31. Tsung JS. Institutional pathology consultation. Am J Surg Pathol. 2004;28:399–402.
32. Resar RK, Rozich JD, Classen D. Methodology and rationale for the measurement of harm with trigger tools. Qual Saf Health Care. 2003;12(suppl 2):ii39–ii45.
33. Cree IA, Guthrie W, Anderson JM, et al. Departmental audit in histopathology. Pathol Res Pract. 1993;189:453–457.
34. Ramsay AD. Errors in histopathology reporting: detection and avoidance. Histopathology. 1999;34:481–490.
35. Gawande AA, Zinner MJ, Studdert DM, Brennan TA. Analysis of errors reported by surgeons at 3 teaching hospitals. Surgery. 2003;133:614–621.
