Steps in validating a questionnaire

How can we ensure that survey questions are valid, and that they are really measuring what they are intended to measure?

Validity refers to the extent to which a construct (an element of information sought by the researcher) is adequately captured; failure to capture the construct adequately is sometimes referred to as specification error.

In order to validate a survey, there are two things to consider: sampling error (not addressed here) and measurement/observation error (are the questions really measuring what we intend them to measure?). To maximise validity and minimise measurement error:

Step 1: Send the survey for expert review to evaluate its content, cognitive, and usability standards.
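
The text does not say how the expert reviews are aggregated. One common way to quantify content validity at this stage (not described in the original post, and shown here purely as an illustration) is an item-level content validity index (I-CVI): the proportion of experts who rate an item as relevant. In the sketch below the ratings, item labels, and the 0.78 cut-off often suggested for panels of six or more experts are all assumptions.

```python
# Illustrative only: summarising expert relevance ratings with an
# item-level content validity index (I-CVI). All data are hypothetical.

# Each expert rates each item on relevance: 1 = not relevant ... 4 = highly relevant.
expert_ratings = {
    "Q1": [4, 4, 3, 4, 2, 4],
    "Q2": [3, 4, 4, 4, 4, 3],
    "Q3": [2, 1, 3, 2, 2, 3],
}

def i_cvi(ratings):
    """Proportion of experts rating the item 3 or 4 (i.e. relevant)."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

CUTOFF = 0.78  # commonly suggested threshold for 6+ experts (assumption)

for item, ratings in expert_ratings.items():
    score = i_cvi(ratings)
    verdict = "keep" if score >= CUTOFF else "revise or drop"
    print(f"{item}: I-CVI = {score:.2f} -> {verdict}")
```

Items falling below the cut-off would then be revisited with the expert panel before moving on to the next step.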

The next step is to conduct cognitive interviews with respondents. Cognitive interviewing has evolved and grown since its first use in 1984, and there are now two ways in which cognitive interviews are conducted: thinking aloud and verbal probing (Priede and Farrall 2011).

In the former, the respondent is asked to vocalise his or her thought process while filling in the questionnaire. The researcher then reads or listens to the transcript to understand the respondent's thought process and understanding, and uses this to remove or change any difficult questions or phrasing.

In the latter, the researcher takes a more active role, and respondents are probed with questions to gain a better and deeper understanding of their thought processes.

Some participants may find it difficult to think aloud, and you may also want to focus on specific areas, for example to check that participants' understanding of specific terms is correct and consistent; in these cases verbal probing is particularly useful.
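
For verbal probing it can help to script the probes in advance so that every respondent is asked about the same terms. The snippet below is a minimal sketch of such a probe schedule; the item numbers and probe wordings are hypothetical examples, not taken from the text.

```python
# Hypothetical probe schedule for verbal probing: each questionnaire item
# is mapped to the scripted probes the interviewer asks after the
# respondent answers it.
probe_schedule = {
    "Q3": [
        "What does the term 'household income' mean to you?",
        "How did you arrive at your answer?",
    ],
    "Q7": [
        "Was anything in this question confusing or hard to answer?",
    ],
}

def interview_guide(schedule):
    """Print a simple interviewer guide, one block per questionnaire item."""
    for item, probes in sorted(schedule.items()):
        print(f"After {item}, ask:")
        for probe in probes:
            print(f"  - {probe}")

interview_guide(probe_schedule)
```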

Furthermore, cognitive interviewing can reduce response error, because it can help identify difficulties or problems that cannot be identified through statistical methods (Miller 2011; Osborne et al.).

Analysis of the cognitive interviews: each recording was listened to and a summary was prepared in an Excel sheet, containing the responses to the probes and any other problems that were identified. For each item in the questionnaire, it was then judged whether the interview provided evidence of a 'definite problem', a 'possible problem', or 'no evidence of a problem', followed by a written explanation of the reasons for that judgement (as done by Levine et al.). These summaries were then combined under each question and, on that basis, it was decided whether the question needed to be modified, removed, or left as it was.
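
If the interview summaries are kept in a spreadsheet like the one described above, the per-item judgements can be combined programmatically. The sketch below assumes a hypothetical file `cognitive_interviews.xlsx` with columns `question`, `interview`, and `evidence`; the file name, column names, and the rule for flagging an item are assumptions for illustration, not the procedure used in the original study.

```python
# Illustrative sketch: combine per-interview judgements for each question.
# Assumes a spreadsheet with columns: question, interview, evidence, where
# evidence is one of 'definite problem', 'possible problem',
# or 'no evidence of a problem'. File and column names are hypothetical.
import pandas as pd

summaries = pd.read_excel("cognitive_interviews.xlsx")

# Count how many interviews produced each kind of evidence per question.
counts = (
    summaries.groupby(["question", "evidence"])
    .size()
    .unstack(fill_value=0)
)

def recommend(row):
    """Simple illustrative rule: any definite problem, or possible problems
    in more than one interview, flags the question for modification."""
    if row.get("definite problem", 0) > 0:
        return "modify or remove"
    if row.get("possible problem", 0) > 1:
        return "review wording"
    return "leave as is"

counts["recommendation"] = counts.apply(recommend, axis=1)
print(counts)
```

The final decision would still rest on the written explanations described above; the tally simply makes it easier to see which questions attracted problems across several interviews.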
