
2012 PROCEEDINGS - Public Relations Society of America


to gauge the effectiveness of originally generated questionnaires by surveying the survey. This tool touches upon the many things that a researcher must be concerned about regarding survey construction. Researchers working with survey instruments look not only at what to measure, but also at designing and testing questions that will be good measures.

Reliability. Reliability was defined by Reinard (2001) as "the internal consistency of a measure" (p. 202). It is the "amount of error coders make when placing content into categories" (Stacks, 2011, p. 128). By producing stability, reliability provides consistent measures in comparable situations (Fowler, 2009). For example, a question may be seen as reliable when two respondents in the same situation answer it in the same way. It is the "extent to which results would be consistent, or replicable, if the research were conducted a number of times" (Stacks, 2011, p. 345). Many suggestions are given for ensuring reliability in survey instruments, most stemming from careful word choice, word meaning, consistent term usage, and eliminating neutral or non-response options (Fowler, 2009). When considering ways to ensure reliability during instrument construction, three common types of reliability must be addressed: inter-rater reliability, test/retest reliability, and internal reliability are often found in research sources (Fink, 2009).

Internal. Internal reliability, or internal consistency, is the extent to which the instrument is internally consistent as it measures knowledge or retention (Stacks, 2011). Testing internal reliability usually involves dissecting an instrument to see whether questions regarding the same construct score in the same manner for each respondent. This type of reliability is instrument focused.
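Internal consistency is often quantified with Cronbach's alpha, which compares the sum of the individual item variances with the variance of respondents' total scores. The sketch below is a hypothetical illustration, not drawn from the studies cited here; the item scores are invented for the example.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of items measuring one construct.

    items: list of lists, one inner list per question, each holding
    the scores that every respondent gave that question.
    """
    k = len(items)     # number of items
    n = len(items[0])  # number of respondents

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    # Sum of per-item variances vs. variance of each respondent's total
    item_variance_sum = sum(variance(item) for item in items)
    totals = [sum(item[r] for item in items) for r in range(n)]
    return (k / (k - 1)) * (1 - item_variance_sum / variance(totals))

# Invented data: five respondents answering three Likert-type items
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
alpha = cronbach_alpha(items)  # roughly 0.89 for this invented data
```

An alpha near 1 indicates that respondents who score high on one item tend to score high on the others — the "same construct, same manner" pattern described above.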

External. External reliability is reviewed to see at what levels a given measure varies from use to use, and is respondent focused (Fink, 2009). Two types of external reliability are commonly sought: one is inter-rater or inter-coder reliability and the other is test/retest. Inter-coder reliability is "the reliability of content analysis coding when the coding is done by two or more coders" (Stacks, 2011, p. 336). Furthermore, it is the degree to which different raters or observers give consistent estimates of the same phenomenon; sometimes it is used to ascertain which survey questions measure which constructs (McDavid & Hawthorn, 2006). Test/retest reliability "involves giving the measure twice and reporting consistency between scores" (Reinard, 2001, p. 203). It is seen as a measure of reliability "over time" (Stacks, 2011, p. 349).

Validity. Validity "is the term that psychologists use to describe the relationship between an answer and some measure of the true score" (Fowler, 2009, p. 15). Valid questions provide answers that correspond with what they were meant to answer, or "the degree to which a measure actually measures what is claimed" (Reinard, 2001, p. 208). Stated differently, validity tests whether the coding system "is measuring accurately what you want to be measured" (Stacks, 2011, p. 127). For example, the answer to any given question should correspond with what the researcher is trying to measure (Fowler, 2009). This process is seen by many as subjective (Stacks, 2011). When discussing validity, Fowler (2009) suggested that "reducing measurement error through better question design is one of the least costly ways to improve survey estimates" (p. 112). Furthermore, "for any survey, it is important to attend to careful question design and pretesting and to make use of the existing research literature about how to measure what is to be measured" (p. 112). Researchers must be concerned with both internal and external validity when relevant.

Internal. Internal validity ensures that an instrument's questions are sound. This includes face validity, content validity, construct validity, and criterion-related validity. Face validity

