assurance policy, and extradepartmental review. These methods encompass the majority of forms of secondary review, although participants also could choose the category of "other" if none of these methods applied. We chose to examine multiple methods, because single institutions often do not review a sufficient number of cases using a single method for statistical significance testing in the time frame provided by this Q-Probes study. We also wanted to determine if discrepancies were detected at different frequencies and had different clinical import depending on the method of detection. Institutional quality assurance policies include cytologic-histologic correlation, random review, and frozen-permanent section correlation. We recognize that these methods of review detect only a fraction of discrepancies (because in most institutions, the majority of cases do not undergo secondary review) and exclude those errors detected prior to sign-out. However, some of these forms of secondary review are the most likely means to detect clinically significant error, the subcategory of error that some authors believe is the most important to track and eradicate.
The "clarity of the report" question referred to how understandable the original report was perceived to be from the standpoint of the review pathologist. The review pathologist determined the report clarity using a 4-point Likert scale (ie, clear, mildly unclear, moderately unclear, markedly unclear). A limitation in this assessment was that clinicians, who were the users of the diagnostic information, did not perform the analysis, and previous authors have indicated discrepancies between clinician and pathologist impressions of report clarity.28 We also recognize that the clarity assessment was subjective, because definitive criteria of clarity were not provided. The lack of clarity may be an example of error, particularly for reports assessed to be moderately or markedly unclear, because this lack of clarity could result in improper patient management.
Laboratories could choose among 3 options regarding computerized report modification after a discrepancy was detected: (1) the original report was deleted and replaced with a new report; (2) the original report was marked with a qualifier, and a new report was appended; or (3) the original report was not marked or modified, but a new report was appended.
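The 3 report-modification options above amount to a small fixed vocabulary; as an illustration only, they could be encoded as an enumeration (the identifier names below are hypothetical, not from the study):

```python
from enum import Enum

class ReportModification(Enum):
    """Hypothetical labels for the 3 computerized report-modification
    options: replace, qualify-and-append, or append-only."""
    DELETE_AND_REPLACE = 1  # original report deleted and replaced
    QUALIFY_AND_APPEND = 2  # original marked with a qualifier; new report appended
    APPEND_ONLY = 3         # original untouched; new report appended

# Example: record the policy applied to a detected discrepancy
policy = ReportModification.QUALIFY_AND_APPEND
print(policy.name, policy.value)
```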
Institutional demographic and practice variable information was obtained by completion of a questionnaire. These data included annual specimen volume, number of institutional practicing pathologists, type and frequency of institutional review conferences (eg, breast, chest, or endocrine conference), and institutional quality assurance methods of secondary review. The discrepancy frequency (expressed as a percentage) was calculated for the aggregate data and for each institution. The discrepancy frequency was calculated as the number of discrepancies divided by the total number of cases reviewed, multiplied by 100. Discrepancies and report clarity assessments were subclassified by organ type (and further broken down by effect on patient outcome).
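The discrepancy-frequency calculation described above is a simple ratio; a minimal sketch (the function name is illustrative):

```python
def discrepancy_frequency(n_discrepancies: int, n_reviewed: int) -> float:
    """Discrepancy frequency as a percentage:
    discrepancies / total cases reviewed x 100."""
    if n_reviewed == 0:
        raise ValueError("no cases reviewed")
    return n_discrepancies / n_reviewed * 100

# Illustrative counts: 5 discrepancies among 100 reviewed cases -> 5.0%
print(discrepancy_frequency(5, 100))
```

Applied to the study's aggregate data (415 discrepancies among 6186 reviewed specimens, reported in the Results), this yields the stated 6.7%.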
The correlation of the main indicator variable (discrepancy frequency) was assessed for each of the predictor variables separately by using nonparametric Wilcoxon rank sum or Kruskal-Wallis tests. The χ² goodness-of-fit test was used to test for associations between the presence of a discrepancy and other case-level variables. Statistically significant associations are defined at the P < .05 level. The reason for case review was correlated with the effect of the discrepancy on patient outcome.
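In practice these tests would come from a statistics package (eg, scipy.stats), but the χ² statistic for a 2 × 2 presence-of-discrepancy table can be sketched directly from its definition; the counts below are hypothetical:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], with expected counts E = row_total * col_total / N."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows = (a + b, c + d)
    cols = (a + c, b + d)
    stat = 0.0
    for i, obs in enumerate((a, b, c, d)):
        expected = rows[i // 2] * cols[i % 2] / n
        stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical counts: 30/200 discrepancies in one group vs 10/100 in another.
# With 1 degree of freedom, a statistic above 3.841 would be significant at P < .05;
# here the statistic is about 1.44, so the difference would not be significant.
print(chi_square_2x2([[30, 170], [10, 90]]))
```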
Some participating institutions did not answer all of the questions on the demographics form or on the input forms. These institutions were excluded only from the tabulations and the analyses that required the missing data element.
RESULTS
A total of 74 institutions submitted data for this study. Most institutions (87.8%) were located in the United States, with the remainder located in Canada (n = 7), Australia (n = 1), and Saudi Arabia (n = 1). Of the participating institutions, 31% were teaching hospitals, and 23% had a pathology residency program. The Joint Commission on Accreditation of Healthcare Organizations inspected the majority of the institutions (68%) participating in this study, and the CAP inspected 82% of laboratories contributing data. The participants' institutional sizes were as follows: 39.7% had fewer than 150 beds; 39.7%, 151 to 300 beds; 9.5%, 301 to 450 beds; 3.2%, 451 to 600 beds; and 7.9%, more than 600 beds. Private nonprofit institutions comprised 54.9% of the institutions; 11.1% were private for-profit institutions; 9.5% were state, county, or city hospitals; 6.3% were university hospitals; 3.2% were federal governmental institutions; and 15.9% were other. Fifty-three percent of the institutions were within a city, 28.1% were suburban, and 15.6% were rural. The annual mean numbers (and SDs) of surgical pathology, nongynecologic cytology, and gynecologic specimens per institution were 16 241 (19 567), 1898 (2376), and 21 147 (44 903), respectively. The mean and median numbers of pathologists at each institution were 6 and 4, respectively.

Table 3. Percentile Distribution of Anatomic Pathology Discrepancy Rate

                           All Institutional Percentiles
                      N    10th    25th    50th (Median)    75th    90th
Discrepancy rate, %   74   21.0    10.0    5.1              1.0     0.0
The 74 institutions performed secondary review of 6186 anatomic pathology specimens (5268 surgical pathology specimens and 847 cytology specimens), and each institution collected data on a range of 2 to 100 specimens (median, 99 specimens). In aggregate, 415 discrepancies were reported, and the overall mean anatomic pathology discrepancy frequency was 6.7%. The overall mean surgical pathology discrepancy frequency was 6.8% (356 discrepancies of 5255 reviewed cases), and the overall mean cytology discrepancy frequency was 6.5% (55 discrepancies of 844 reviewed cases); neither of the specimen types had a higher discrepancy frequency (P = .78). The distribution of the anatomic pathology discrepancy frequencies is listed in Table 3. Higher percentile ranks indicate lower discrepancy frequencies.
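The aggregate rates reported above can be reproduced directly from the stated counts (only the counts come from the study; the variable names are illustrative):

```python
# Reported counts: 415 discrepancies among 6186 anatomic pathology specimens,
# 356 among 5255 surgical pathology cases, and 55 among 844 cytology cases
overall = 415 / 6186 * 100
surgical = 356 / 5255 * 100
cytology = 55 / 844 * 100

# Rounded to one decimal, these match the reported 6.7%, 6.8%, and 6.5%
print(round(overall, 1), round(surgical, 1), round(cytology, 1))
```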
The number of reviewed specimens, overall discrepancy frequency, and the discrepancy classification by organ type are shown in Table 4. The most common organ types reviewed in this study were female genital tract and breast. None of the organ types had a higher frequency of discrepancy than other organ types (P = .15). For the organ types with higher volumes reviewed (>200 cases), a change in categoric interpretation (ie, an error type more likely to be associated with harm) occurred more frequently in the female genital tract, male genital tract, and lymph node. A change in margin status occurred most frequently in breast specimens.
In Table 5, the effect of the discrepancy on patient outcome and the report modification in response to a discrepancy are listed by organ type. Some form of harm was seen in the majority of organs, although specimen type (cytology or surgical pathology) or organ type did not correlate with effect on patient outcome (P = .73 and P = .83, respectively). Harm was observed in 20.8% (11 cases) of breast specimens and in 25.3% (21 cases) of female genital tract specimens in which a discrepancy was detected. Neither specimen type nor organ type correlated with the
Arch Pathol Lab Med—Vol 129, April 2005 Patient Safety in Anatomic Pathology—Raab et al 461