
MEDICINE: REVIEW ARTICLE

Critical Appraisal of Scientific Articles
Part 1 of a Series on Evaluation of Scientific Publications

Jean-Baptist du Prel, Bernd Röhrig, Maria Blettner

SUMMARY
Introduction: In the era of evidence-based medicine, one of the most important skills a physician needs is the ability to analyze scientific literature critically. This is necessary to keep medical knowledge up to date and to ensure optimal patient care. The aim of this paper is to present an accessible introduction to the critical appraisal of scientific articles.
Methods: Using a selection of international literature, the reader is introduced to the principles of critical reading of scientific articles in medicine. For the sake of conciseness, a detailed description of statistical methods is omitted.
Results: Widely accepted principles for critically appraising scientific articles are outlined: basic knowledge of study design, the structure of an article, the role of its different sections and of statistical presentations, as well as sources of error and limitations. The reader does not require extensive methodological knowledge. As far as is necessary for critical appraisal, differences between research areas such as epidemiology, clinical research, and basic research are outlined.
Further useful references are presented.
Conclusion: Basic methodological knowledge is required to select and interpret scientific articles correctly.

Dtsch Arztebl Int 2009; 106(7): 100–5
DOI: 10.3238/arztebl.2009.0100
Key words: publication, critical appraisal, decision making, quality assurance, study

Institut für Medizinische Biometrie, Epidemiologie und Informatik (IMBEI), Johannes Gutenberg-Universität, Mainz: Dr. med. du Prel, MPH, Dr. rer. nat. Röhrig, Prof. Dr. rer. nat. Blettner

Despite the increasing number of scientific publications, many physicians find themselves with less and less time to read what others have written. Selection, reading, and critical appraisal of publications are, however, necessary to stay up to date in one's field. This is also demanded by the precepts of evidence-based medicine (1, 2).

Besides the medical content of a publication, its interpretation and evaluation also require an understanding of the statistical methodology. Sadly, not even in science are all terms always used correctly. The word "significance," for example, has been overused because significant (or positive) results are easier to get published (3, 4).

The aim of this article is to present the essential principles of the evaluation of scientific publications. With the exception of a few specific features, these principles apply equally to experimental, clinical, and epidemiological studies. References to more detailed literature are provided.

Decision making
Before starting a scientific article, the reader must be clear about his/her intentions.
For quick information on a given subject, he/she is advised to read a recent review of some sort, whether a (simple) review article, a systematic review, or a meta-analysis. The references in review articles point the reader towards more detailed information on the topic concerned. In the absence of any recent reviews on the desired theme, databases such as PubMed have to be consulted.

Regular perusal of specialist journals is an obvious way of keeping up to date. The article title and abstract help the reader to decide whether the article merits closer attention. The title gives the potential reader a concise, accurate first impression of the article's content. The abstract has the same basic structure as the article and renders the essential points of the publication in greatly shortened form. Reading the abstract is no substitute for critically reading the whole article, but it shows whether the authors have succeeded in summarizing the aims, methods, results, and conclusions.

The structure of scientific publications
The structure of scientific articles is essentially always the same. The title, summary, and key words are followed by the main text. This is divided into Introduction, Methods, Results, and Discussion (IMRAD), ending when appropriate with Conclusions and References.


The content and purpose of the individual sections are described in detail below.

Introduction
The Introduction sets out to familiarize the reader with the subject matter of the investigation. The current state of knowledge should be presented with reference to the recent literature, and the necessity of the study should be clearly laid out. The findings of the studies cited should be given in detail, quoting numerical results. Inexact phrases such as "inconsistent findings" or "somewhat better" are to be avoided. Overall, the text should give the impression that the author has read the articles cited. In case of doubt, the reader is recommended to consult these publications him-/herself. A good publication backs up its central statements with references to the literature. Ideally, this section should progress from the general to the specific. The introduction explains clearly what question the study is intended to answer and why the chosen design is appropriate for this end.

Methods
This important section bears a certain resemblance to a cookbook. The description of the procedures should give the reader "recipes" that can be followed to repeat the study. Here are found the essential data that permit appraisal of the study's validity (6). The methods section can be divided into subsections with their own headings; for example, laboratory techniques can be described separately from statistical methods.

The methods section should describe all stages of planning, the composition of the study sample (e.g., patients, animals, cell lines), the execution of the study, and the statistical methods: Was a study protocol written before the study commenced? Was the investigation preceded by a pilot study? Are the location and study period specified?
It should be stated in this section that the study was carried out with the approval of the appropriate ethics committee. The most important element of a scientific investigation is the study design. If for some reason the design is unacceptable, then so is the article, regardless of how the data were analyzed (7).

The choice of study design should be explained and depicted in clear terms. If important aspects of the methodology are left undescribed, the reader is advised to be wary. If, for example, the method of randomization is not specified, as is often the case (8), one ought not to assume that randomization took place at all (7). The statistical methods should be lucidly portrayed, and complex statistical parameters and procedures described clearly, with references to the specialist literature. Box 1 contains further questions that may be helpful in evaluation of the Methods section.

Study design and implementation are described by Altman (7), Trampisch and Windeler (9), and Klug et al. (10). In experimental studies, precise depiction of the design and execution is vital. The accuracy of a method, i.e. its reliability (precision) and validity (correctness), must be stated.
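How little space an adequate description of randomization needs can be seen from a sketch like the following (an illustrative permuted-block scheme in Python; the two-arm design, block size, and seed are hypothetical choices for the example, not taken from this article):

```python
import random

def block_randomize(n_patients: int, block_size: int = 4, seed: int = 42) -> list:
    """Assign patients to arms "A" and "B" in permuted blocks, so that
    group sizes stay balanced after every complete block."""
    rng = random.Random(seed)  # fixed seed makes the allocation list reproducible
    assignments = []
    while len(assignments) < n_patients:
        block = ["A", "B"] * (block_size // 2)
        rng.shuffle(block)     # random order within each block
        assignments.extend(block)
    return assignments[:n_patients]

arms = block_randomize(12)
print(arms)                              # a reproducible, balanced allocation list
print(arms.count("A"), arms.count("B"))  # equal group sizes
```

A methods section that names the scheme, the block size, and how the allocation list was concealed from the investigators supplies exactly the information whose absence the paragraph above warns about.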
BOX 1: Questions on methodology
- Is the study design suited to fulfill the aims of the study?
- Is it stated whether the study is confirmatory, exploratory, or descriptive in nature?
- What type of study was chosen, and does it permit the aims of the study to be addressed?
- Is the study's endpoint precisely defined?
- What statistical measure is employed to characterize the endpoint? Do epidemiological studies, for instance, give the incidence (rate of new cases), prevalence (current number of cases), mortality (proportion of the population that dies of the disease concerned), lethality (proportion of those with the disease who die of it), or the hospital admission rate (proportion of the population admitted to hospital because of the disease)?
- Are the geographical area, the population, the study period (including duration of follow-up), and the intervals between investigations described in detail?

The explanatory power of the results of a clinical study is improved by the inclusion of a control group (active, historical, or placebo controls) and by the randomized assignment of patients to the different arms of the study. The quality can also be raised by blinding of the investigators, which guarantees identical treatment and observation of all study participants. A clinical study should as a rule include an estimation of the required number of patients (case number planning) before the beginning of the study. More detail on clinical studies can be found, for instance, in the book by Schumacher and Schulgen (11).
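Case number planning can be made concrete. For comparing two proportions, the usual normal-approximation formula gives the required group size (a minimal sketch; the response rates, significance level, and power below are hypothetical example values, not figures from this article):

```python
import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate number of patients needed per arm to detect a difference
    between two proportions with a two-sided test (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # quantile for the significance level
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    p_bar = (p1 + p2) / 2               # pooled proportion
    n = (z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar) / (p1 - p2) ** 2
    return math.ceil(n)

# e.g. detecting an improvement in response rate from 30% to 40%
print(n_per_group(0.30, 0.40))
```

The point of the exercise is not the exact number but that some such calculation must be performed, and reported, before the study begins.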
International recommendations specially formulated for the reporting of randomized controlled clinical trials are presented in the most recent version of the CONSORT Statement (Consolidated Standards of Reporting Trials) (12).

Epidemiological investigations can be divided into intervention studies, cohort studies, case-control studies, cross-sectional studies, and ecological studies. Table 1 outlines what type of study is best suited to what situation (13).

TABLE 1: The best type of study for epidemiological investigations (from 13)
- Investigation of rare diseases (e.g. tumors): case-control studies
- Investigation of exposure to rare environmental factors (e.g. industrial chemicals): cohort study in an exposed population
- Investigation of exposure to multiple agents (e.g. the joint effect of oral contraceptive intake and smoking): case-control studies
- Investigation of multiple endpoints (e.g. the risk of death from various causes): cohort studies
- Estimation of incidence in exposed populations: exclusively cohort studies
- Investigation of cofactors that vary over time: preferably cohort studies
- Investigation of cause and effect: intervention studies

One characteristic of a good publication is a precise account of the inclusion and exclusion criteria. How high was the response rate (≥80% is good; ≤30% means no or only slight power), and how high was the rate of loss to follow-up, e.g. when participants move away or withdraw their cooperation? To determine whether participants differ from nonparticipants, data on the latter should be included. The selection criteria and the rates of loss to follow-up permit conclusions as to whether the study sample is representative of the target population. A good study description includes information on missing values. Particularly in case-control studies, but also in nonrandomized clinical studies and cohort studies, the choice of the controls must be described precisely. Only then can one be sure that the control group is comparable with the study group and shows no systematic discrepancies that can lead to misinterpretation (confounding) or other problems (13).

Is it explained how measurements were conducted? Are the instruments and techniques, e.g. measuring devices, scale of measured values, laboratory data, and time point, described in sufficient detail? Were the measurements made under standardized, and thus comparable, conditions in all patients? Details of measurement procedures are important for assessment of accuracy (reliability, validity). The reader must see on what kind of scale the variables are being measured (e.g. eye color, nominal; tumor stage, ordinal; body weight, metric), because the type of scale determines what kind of analysis is possible. Descriptive analysis employs descriptive measures and graphic and/or tabular presentations, whereas in statistical analysis the choice of test has to be taken into consideration. The interpretation and power of the results are also influenced by the scale type. For example, data on an ordinal scale should not be expressed in terms of mean values.

Was there a careful power calculation before the study started? If the number of cases is too low, a real difference, e.g.
between the effects of two medications or in the risk of disease in the presence vs. absence of a given environmental factor, may not be detected. One then speaks of insufficient power.

Results
In this section the findings should be presented clearly and objectively, i.e. without interpretation. The interpretation of the results belongs in the ensuing Discussion. The Results section should address directly the aims of the study and be presented in a well-structured, readily understandable, and consistent manner. The findings should first be formulated descriptively, stating statistical parameters such as case numbers, mean values, measures of variation, and confidence intervals. This section should include a comprehensive description of the study population. A second, analytic subsection describes the relationship between characteristics, or estimates the effect of a risk factor, say smoking behavior, on a dependent variable, say lung cancer, and may include calculation of appropriate statistical models.

Besides information on statistical significance in the form of p values, a comprehensive description of the data and details on confidence intervals and effect sizes are strongly recommended (14, 15, 16). Tables and figures may improve the clarity, and the data therein should be self-explanatory.

Discussion
In this section the author should discuss his/her results frankly and openly. Regardless of the study type, there are essentially two goals:

Comparison of the findings with the status quo: The Discussion should answer the following questions. How has the study added to the body of knowledge on the given topic? What conclusions can be drawn from the results?
Will the findings of the study lead the author to reconsider or change his/her own professional behavior, e.g. to modify a treatment or take previously unconsidered factors into account? Do the findings suggest further investigations? Does the study raise new, hitherto unanswered questions? What are the implications of the results for science, clinical routine, patient care, and medical practice? Are the findings in accord with those of the majority of earlier studies? If not, why might that be? Do the results appear plausible from the biological or medical viewpoint?

Critical analysis of the study's limitations: Might sources of bias, whether random or systematic in nature, have affected the results? Even with painstaking planning and execution of the study, errors cannot be wholly excluded. There may, for instance, be an unexpectedly high rate of loss to follow-up (e.g. through patients moving away or refusing to participate further in the study). When comparing groups, one should establish whether there is any intergroup difference in the composition of participants lost to follow-up. Such a discrepancy could potentially conceal a true difference between the groups, e.g. in a case-control study with regard to a risk factor. A difference may also result from positive selection of the study population. The Discussion must draw attention to any such differences and describe the patients who do not complete the study. Possible distortion of the study results by missing values should also be discussed.

Systematic errors are particularly common in epidemiological studies, because these are mostly observational rather than experimental in nature.
In case-control studies, a typical source of error is the retrospective determination of the study participants' exposure: their memories may not be accurate (recall bias). A frequent source of error in cohort studies is confounding. This occurs when two closely connected risk factors are both associated with the dependent variable. Errors of this type can be revealed and corrected by adjustment for the confounding factor. For instance, the fact that smokers drink more coffee than average could lead to the erroneous assumption that drinking coffee causes lung cancer. If potential confounders are not mentioned in the publication, the critical reader should wonder whether the results might not be invalidated by this type of error. If possible confounding factors were not included in the analysis, the potential sources of error should at least be critically debated. Detailed discussion of sources of error and means of correction can be found in the books by Beaglehole and Webb (17, 18).

Results that do not attain statistical significance must also be published. Unfortunately, greater importance is still often attached to significant results, so that they are more likely to be published than nonsignificant findings. This publication bias leads to systematic distortions in the body of scientific knowledge. According to a recent review, this is particularly true for clinical studies (3). Only when all valid results of a well-planned and correctly conducted study are published can useful conclusions be drawn regarding the effect of a risk factor on the occurrence of a disease, the value of a diagnostic procedure, the properties of a substance, or the success of an intervention, e.g. a treatment. The investigator and the journal publishing the article are thus obliged to ensure that decisions on important issues can be taken in full knowledge of all valid, scientifically substantiated findings.

It should not be forgotten that statistical significance, i.e. the minimization of the likelihood of a chance result, is not the same as clinical relevance. With a large enough sample, even minuscule differences can become statistically significant, but the findings are not automatically relevant (13, 19). This is true both for epidemiological studies, from the public health perspective, and for clinical studies, from the clinical perspective. In both cases, careful economic evaluation is required to decide whether to modify or retain existing practices.
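The point that a large enough sample makes even a minuscule difference statistically significant can be illustrated numerically (a hypothetical two-proportion z-test; the 30% vs. 31% response rates are invented for the illustration):

```python
from math import sqrt
from statistics import NormalDist

def two_prop_p(p1: float, p2: float, n: int) -> float:
    """Two-sided p value of a z-test comparing two observed proportions,
    each measured in a group of n subjects (normal approximation)."""
    p_pool = (p1 + p2) / 2
    se = sqrt(p_pool * (1 - p_pool) * 2 / n)  # standard error of the difference
    z = abs(p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(z))

# the same clinically trivial difference of one percentage point:
for n in (500, 5_000, 50_000):
    print(n, round(two_prop_p(0.30, 0.31, n), 4))
# the p value falls below 0.05 as n grows, although the effect stays tiny
```

Whether one extra responder per hundred patients justifies a change in practice is precisely the clinical-relevance question, and no p value can answer it.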
At the population level, one must ask how often the investigated risk factor really occurs and whether a slight increase in risk justifies wide-ranging public health interventions. From the clinical viewpoint, it must be carefully considered whether, for example, the slightly greater efficacy of a new preparation justifies increased costs and possibly a higher incidence of side effects. The reader has to appreciate the difference between statistical significance and clinical relevance in order to evaluate the results properly.

Conclusions
The authors should concentrate on the most important findings. A crucial question is whether the interpretations follow logically from the results. One should avoid conclusions that are supported neither by one's own data nor by the findings of others. It is wrong to refer to an exploratory data analysis as a proof. Even in confirmatory studies, one's own results should, for the sake of consistency, always be considered in light of other investigators' findings. When assessing the results and formulating the conclusions, the weaknesses of the study must be given due consideration. The study can attain objectivity only if the possibility of erroneous or chance results is admitted. The inclusion of nonsignificant results contributes to the credibility of the study.
"Not significant" should not be confused with "no association." Significant results should be considered from the viewpoint of biological and medical plausibility.

BOX 2: Critical questions
- Does the study pose scientifically interesting questions?
- Are statements and numerical data supported by literature citations?
- Is the topic of the study medically relevant?
- Is the study innovative?
- Does the study investigate the predefined study goals?
- Is the study design apt to address the aims and/or hypotheses?
- Did practical difficulties (e.g. in recruitment or loss to follow-up) lead to major compromises in study implementation compared with the study protocol?
- Was the number of missing values too large to permit meaningful analysis?
- Was the number of cases too small, and thus the statistical power of the study too low?
- Was the course of the study poorly or inadequately monitored (missing values, confounding, time infringements)?
- Do the data support the authors' conclusions?
- Do the authors and/or the sponsor of the study have irreconcilable financial or ideological conflicts of interest?

So-called levels-of-evidence scales, as used in some American journals, can help the reader decide to what extent his/her practice should be affected by the content of a given publication (20). Until all journals offer recommendations of this kind, the individual physician's ability to read scientific texts critically will continue to play a decisive role in determining whether diagnostic and therapeutic practice are based on up-to-date medical knowledge.

References
The references are to be presented in the journal's standard style. The reference list must include all sources cited in the text, tables, and figures of the article.
It is important to ensure that the references are up to date, in order to make it clear whether the publication incorporates new knowledge. The references cited should help the reader to explore the topic further.

Acknowledgements and conflict of interest statement
This important section must provide information on any sponsors of the study. Any potential conflicts of interest, financial or otherwise, must be revealed in full (21).

Table 2 and Box 2 summarize the essential questions which, when answered, will reveal the quality of an article. Not all of these questions apply to every publication or every type of study. Further information on the writing of scientific publications is supplied by Gardner et al. (19), Altman (7), and Altman et al. (22). Gardner et al. (23), Altman (7), and the CONSORT Statement (12) provide checklists to assist the evaluation of the statistical content of medical studies.

Conflict of interest statement
The authors declare no conflicts of interest as defined by the guidelines of the International Committee of Medical Journal Editors.


TABLE 2: Checklist to evaluate the quality of scientific publications (each item is answered Yes, Unclear, or No)

Design
- Is the aim of the study clearly described?
- Are the study population(s) and the inclusion and exclusion criteria described in detail?
- Were the patients allocated randomly to the different arms of the study? If yes: Is the method of randomization described?
- a) Is the number of cases discussed? b) Were sufficient cases enrolled (e.g. power ≥50%)?
- Are the methods of measurement (e.g. laboratory examination, questionnaire, diagnostic test) suitable for determination of the target variable (with regard to scale, time of investigation, standardization)?
- Is there information regarding data loss (response rates, loss to follow-up, missing values)?

Study inception and implementation
- Are treatment and control groups matched with regard to major relevant characteristics (age, sex, smoking habits, etc.)?
- Are the drop-outs analyzed for differences between the treatment and control groups?
- How many cases were observed over the whole study period?
- Are side effects and adverse events during the study period described?

Analysis and evaluation
- Have the correct statistical parameters and methods been selected, and are they clearly described?
- Are the statistical analyses clearly described?
- Are the important parameters (prognostic factors) included in the analysis or at least discussed?
- Is the presentation of the statistical parameters appropriate, comprehensive, and clear?
- Are the effect sizes and confidence intervals stated for the principal findings?
- Is it apparent why the given study design/statistical methods were chosen?
- Are all conclusions supported by the study's findings?
By using a checklist such as this, the statistical and methodological soundness of a study can be assessed and improvements considered. Not all of the points in this checklist can be used to evaluate all study types; for example, randomization is particularly applicable to clinical studies.

Manuscript submitted on 9 January 2007; revised version accepted on 7 June 2007.
Translated from the original German by David Roseveare.

REFERENCES
1. Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS: Evidence based medicine: what it is and what it isn't. Editorial. BMJ 1996; 312: 71–2.
2. Albert DA: Deciding whether the conclusions of studies are justified: a review. Med Decision Making 1981; 1: 265–75.
3. Gluud LL: Bias in clinical intervention research. Am J Epidemiol 2006; 163: 493–501.
4. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR: Publication bias in clinical research. Lancet 1991; 337: 867–72.
5. Greenhalgh T: How to read a paper: getting your bearings (deciding what the paper is about). BMJ 1997; 315: 243–6.
6. Kallet H: How to write the methods section of a research paper. Respir Care 2004; 49: 1229–32.
7. Altman DG: Practical statistics for medical research. London: Chapman and Hall 1991.
8. DerSimonian R, Charette LJ, McPeek B, Mosteller F: Reporting on methods in clinical trials. N Engl J Med 1982; 306: 1332–7.
9. Trampisch HJ, Windeler J, Ehle B: Medizinische Statistik. Berlin, Heidelberg, New York: Springer 2000, 2nd revised edition.
10. Klug SJ, Bender R, Blettner M, Lange S: Wichtige epidemiologische Studientypen. Dtsch Med Wochenschr 2004; 129: T7–T10.
11. Schumacher M, Schulgen G: Methodik klinischer Studien. Methodische Grundlagen der Planung, Durchführung und Auswertung. Berlin: Springer 2008, 3rd revised edition.
12. Moher D, Schulz KF, Altman DG for the CONSORT Group: Das CONSORT Statement: Überarbeitete Empfehlungen zur Qualitätsverbesserung von Reports randomisierter Studien im Parallel-Design. Dtsch Med Wochenschr 2004; 129: T16–T20.
13. Blettner M, Heuer C, Razum O: Critical reading of epidemiological papers. A guide. J Public Health 2001; 11: 97–101.
14. Borenstein M: The case for confidence intervals in controlled clinical trials. Control Clin Trials 1994; 15: 411–28.
15. Gardner MJ, Altman DG: Confidence intervals rather than P values: estimation rather than hypothesis testing. BMJ 1986; 292: 746–50.
16. Bortz J, Lienert GA: Kurzgefaßte Statistik für die Klinische Forschung. Leitfaden für die verteilungsfreie Analyse kleiner Stichproben. Berlin: Springer 2003; 2nd edition, 39–45.
17. Beaglehole R, Bonita R, Kjellström T: Einführung in die Epidemiologie. Bern, Göttingen, Toronto, Seattle: Huber 1997.
18. Webb P, Bain C, Pirozzo S: Essential epidemiology. An introduction for students and health professionals. New York: Cambridge University Press 2005.
19. Gardner MJ, Machin D, Bryant TN, Altman DG: Statistics with confidence. Confidence intervals and statistical guidelines. London: BMJ Books 2002.


20. Ebell MH, Siwek J, Weiss BD, et al.: Simplifying the language of evidence to improve patient care. J Fam Pract 2004; 53: 111–20.
21. Bero LA, Rennie D: Influences on the quality of published drug studies. Int J Technol Assess Health Care 1996; 12: 209–37.
22. Altman DG, Gore SM, Gardner MJ, Pocock SJ: Statistical guidelines for contributors to medical journals. BMJ 1983; 286: 1489–93.
23. Gardner MJ, Machin D, Campbell MJ: Use of check lists in assessing the statistical content of medical studies. BMJ (Clin Res Ed) 1986; 292: 810–2.

Corresponding author:
Dr. med. Jean-Baptist du Prel, MPH
Institut für Medizinische Biometrie, Epidemiologie und Informatik (IMBEI)
Universitätsklinikum Mainz
Obere Zahlbacher Str. 69
55131 Mainz, Germany
duprel@imbei.uni-mainz.de
