
Manchikanti et al • ASIPP Practice Guidelines


clinical practice guidelines is that they not only be systematically and scientifically developed but also be able to assist practitioner and patient in making real-life clinical decisions. The Institute of Medicine (IOM) implicitly incorporates rigorous, science-based procedures as part of the development of practice guidelines, and decision making includes both clinicians and patients, with a focus on specific clinical circumstances and without direction toward a particular technology or procedure (6). The American Medical Association (AMA) uses the term practice parameter and defines practice parameters as " . . . strategies for patient management, developed to assist physicians in clinical decision making. Practice parameters are highly variable in their content, format, degree of specificity, and method of development" (65). Thus, the methods used to develop practice guidelines vary among organizations and depend on the objectives of the guideline and its philosophic approach.

Methods of development are classified as informal consensus development, formal consensus development, evidence-based guideline development, and explicit guideline development (5-7, 10, 65). However, a combination of multiple approaches is commonly utilized. Evidence-based guideline development provides a link between the strength of recommendations and the quality of evidence. Even though this approach may seem to have enhanced the scientific rigor of guideline development, recommendations may not always rest on the highest level of scientific evidence (27).

Evidence-based practice originated in the 1950s with the advent of randomized, controlled trials. A randomized, controlled trial, also known as an RCT, is a trial in which participants are randomly assigned to one of two groups: one (the experimental group) receiving the intervention being tested, and the other (the comparison or control group) receiving an alternative treatment or a placebo. This design allows assessment of the relative effects of interventions; a schematic example is sketched after this paragraph. It is presumed that the strident debate between the proponents and opponents of evidence-based medicine has led to clarity (7). Evidence-based medicine is currently defined as the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients (10). The practice of evidence-based medicine requires the integration of individual clinical expertise with the best available external clinical evidence from systematic research. It should be understood that, apart from the results of randomized, controlled trials, many other factors may weigh heavily in both clinical and policy decisions, such as patient preferences and resources, and these must contribute to decisions about the care of patients (7). Thus, all evidence should be considered, and no single type of evidence should necessarily be the determining factor in a decision. There is an increasing number of well-conducted randomized, controlled trials and systematic reviews. However, such studies are difficult to conduct in chronic pain management with interventional procedures, as well as with surgical procedures. Clinical trials of the efficacy of commonly used interventions in low back pain were reviewed by Koes and coworkers (66) and Tulder and coworkers (67), which led to the conclusion that the methodological quality of these studies was disappointingly low. Similar conclusions were drawn in other evaluations (27-30). The quality of meta-analytic procedures in chronic pain treatment also has been questioned (68). In addition, the issues of ethics, feasibility, cost, and reliability pose challenges to the randomized trial, specifically in surgical settings and treatments involving interventional procedures (69-75). Most of the studies of interventional pain procedures have been performed by multiple specialty groups (rarely including pain specialists) and without radiographic control, especially in the case of epidural steroid injections.
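The following is a minimal, hypothetical sketch (not part of the original article) of the two-group randomized design described above: participants are randomly assigned to an experimental or a control group, and the relative effect of the intervention is estimated as the difference in mean outcomes between the groups. The group sizes, outcome scores, and the assumed 1.5-point treatment effect are simulated for illustration only.

```python
import random
import statistics

# Hypothetical sketch of a two-arm randomized, controlled trial (RCT):
# participants are randomly assigned to an experimental group (receives the
# intervention) or a control group (receives a placebo or alternative
# treatment), and the relative effect is the difference in mean outcomes.
# All numbers below are simulated assumptions, not data from the guideline.

random.seed(42)

participants = list(range(200))          # 200 hypothetical participants
random.shuffle(participants)             # random assignment
experimental = participants[:100]        # intervention arm
control = participants[100:]             # comparison/placebo arm

# Simulated outcome (e.g., a 0-10 pain-relief score); the 1.5-point shift in
# the experimental arm is an assumed, illustrative treatment effect.
outcomes = {pid: random.gauss(4.0, 1.5) for pid in control}
outcomes.update({pid: random.gauss(5.5, 1.5) for pid in experimental})

effect = (statistics.mean(outcomes[pid] for pid in experimental)
          - statistics.mean(outcomes[pid] for pid in control))
print(f"Estimated relative treatment effect: {effect:.2f} points")
```

In an actual trial, the randomization scheme, outcome measures, and analysis plan would of course be prespecified in the study protocol rather than simulated in this way.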

Concato et al (76) conducted a study of randomized, controlled trials, observational studies, and the hierarchy of research designs. They described that, in the hierarchy of research designs, the results of randomized, controlled trials have been considered evidence of the highest grade, whereas observational studies have been seen as having less validity because such studies reportedly overestimate treatment effects. Concato et al (76) showed that the average results of the observational studies were remarkably similar to those of the randomized, controlled trials, and concluded that the results of well-designed observational studies (with either a cohort or a case-control design) do not systematically overestimate the magnitude of treatment effects as compared with those of randomized, controlled trials on the same topic. However, this is not to say that randomized, controlled studies are not needed. Pocock and Elbourne (77) observed that, in a systematic review of evidence on a therapeutic topic, one needs to take into account the quality of the evidence, since in any randomized or observational study, bias may exist in either design or analysis. The difficulty of conducting a large randomized trial of interventional procedures is reinforced by the failure to complete a randomized, controlled trial evaluating epidural steroid injections, which was funded by the American Society of Regional Anesthesia as a Koller Award (78). In addition, Turk (79) suggests that it is important to acknowledge that

