
Questionnaire Validation Made Easy

Intra-rater Reliability

This is a type of reliability assessment in which the same assessment is completed by the same rater on two or more occasions. These different ratings are then compared, generally by means of correlation. Since the same individual is completing both assessments, the rater's subsequent ratings are contaminated by knowledge of earlier ratings.
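As an illustration, a minimal sketch of this comparison, using hypothetical rating data and a Pearson correlation, might look as follows:

# Minimal sketch: comparing the same rater's scores from two occasions
# by correlation. The rating values below are hypothetical.
from scipy.stats import pearsonr

session_1 = [12, 15, 9, 20, 17, 11, 14]   # rater's scores, first occasion
session_2 = [13, 14, 10, 19, 18, 11, 15]  # same rater, same subjects, later occasion

r, p_value = pearsonr(session_1, session_2)
print(f"Correlation between occasions: r = {r:.2f} (p = {p_value:.3f})")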

Sensitivity

Sensitivity refers to the probability that a diagnostic technique will detect a particular disease or condition when it does indeed exist.

Specificity

Specificity refers to the probability that a diagnostic technique will indicate a negative test result when the condition is absent (true negative).
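For illustration, both quantities can be computed from a simple two-by-two cross-tabulation of test results against true condition status; the counts in the sketch below are hypothetical.

# Hypothetical 2x2 counts: test result vs. true condition status
true_positives  = 45   # condition present, test positive
false_negatives = 5    # condition present, test negative
true_negatives  = 90   # condition absent, test negative
false_positives = 10   # condition absent, test positive

sensitivity = true_positives / (true_positives + false_negatives)   # P(test positive | condition present)
specificity = true_negatives / (true_negatives + false_positives)   # P(test negative | condition absent)

print(f"Sensitivity = {sensitivity:.2f}")   # 0.90
print(f"Specificity = {specificity:.2f}")   # 0.90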

Standardized Response Mean

The standardized response mean (SRM) is calculated by dividing the mean change by the standard deviation of the change scores.
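A minimal worked sketch of this calculation, using hypothetical pre- and post-treatment scores, is shown below.

# Hypothetical pre- and post-treatment scores for the same respondents
import numpy as np

pre  = np.array([20, 25, 18, 30, 22, 27])
post = np.array([24, 28, 19, 35, 25, 29])

change = post - pre
srm = change.mean() / change.std(ddof=1)   # mean change / SD of change scores
print(f"SRM = {srm:.2f}")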

Floor Effect

The floor effect occurs when data cannot take on a value lower than some particular number. Thus, it represents a subsample for which clinical decline may not register as a change in score, even if there is worsening of function or behaviour, because there are no items or scaling within the test that measure decline below the lowest possible score.

Intraclass Correlation Coefficient (ICC)

Intraclass correlation (ICC) is used to measure inter-rater reliability for two or more raters. It may also be used to assess test-retest reliability. ICC may be conceptualized as the ratio of between-groups variance to total variance.
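The sketch below illustrates this variance-ratio conceptualization with a one-way ICC computed from a small, hypothetical ratings matrix (rows as subjects, columns as raters or occasions).

# Minimal sketch of a one-way ICC, following the variance-ratio definition above.
# The ratings matrix is hypothetical: rows = subjects, columns = raters.
import numpy as np

ratings = np.array([
    [9, 10,  8],
    [6,  5,  6],
    [8,  8,  9],
    [7,  6,  7],
    [10, 9, 10],
], dtype=float)

n, k = ratings.shape
grand_mean = ratings.mean()
subject_means = ratings.mean(axis=1)

# One-way ANOVA mean squares
ms_between = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)
ms_within  = ((ratings - subject_means[:, None]) ** 2).sum() / (n * (k - 1))

# Variance components: between-subject variance over total variance
var_between = (ms_between - ms_within) / k
icc = var_between / (var_between + ms_within)
print(f"ICC(1,1) = {icc:.2f}")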

Cronbach's Alpha

Cronbach's alpha is a coefficient (a number between 0 and 1) used to rate the internal consistency (homogeneity), or the correlation of the items, in a test. A good test is one that assesses different aspects of the trait being studied. If a test has strong internal consistency, most measurement experts agree that it should show only moderate correlation among items (0.70 to 0.90).

If correlations between items are too low, it is likely that they are measuring different traits (e.g., depression and quality-of-life items mixed together) and therefore should not all be included in a test that is supposed to measure one trait. If item correlations are too high, it is likely that some items are redundant and should be removed from the test.
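The sketch below computes Cronbach's alpha for a small, hypothetical item matrix using the standard formula, alpha = k/(k-1) x (1 - sum of item variances / variance of the total score).

# Minimal sketch of Cronbach's alpha for a hypothetical item matrix
# (rows = respondents, columns = items of one scale).
import numpy as np

items = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [4, 4, 4, 3],
    [1, 2, 2, 1],
], dtype=float)

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)       # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)   # variance of the total score

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")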

Known Groups Method

The known groups method is a typical way to support construct validity; such support is provided when a test can discriminate between a group of individuals known to have a particular trait and a group who do not have the trait. Similarly, known groups may be studied using groups of individuals with differing levels or severities of a trait. Again, the known groups method evaluates the test's ability to discriminate between the groups, based on the groups demonstrating different mean scores on the test. For example, a group of individuals known to be not depressed should have lower scores on a depression scale than the group known to be depressed (10).
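As an illustration, one common way to check whether the known groups differ in mean score is an independent-samples t-test; the sketch below uses hypothetical depression-scale scores.

# Minimal sketch of a known-groups comparison: scores of a group known to
# have the trait vs. a group known not to have it (hypothetical data).
from scipy.stats import ttest_ind

depressed_group     = [28, 31, 25, 34, 29, 30, 27]   # known to be depressed
not_depressed_group = [10, 14, 9, 12, 15, 11, 13]    # known not to be depressed

t_stat, p_value = ttest_ind(depressed_group, not_depressed_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A clearly higher mean in the depressed group supports construct validity.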
