Pretesting Discrete-Choice Experiments: A Guide for Researchers


Associated Data

Data are available upon reasonable request from Norah Crossnohere.

Abstract

Discrete-choice experiments (DCEs) are a frequently used method to explore the preferences of patients and other decision-makers in health. Pretesting is an essential stage in the design of a high-quality choice experiment and involves engaging with representatives of the target population to improve the readability, presentation, and structure of the preference instrument. The goal of pretesting in DCEs is to improve the validity, reliability, and relevance of the survey, while decreasing sources of bias, burden, and error associated with preference elicitation, data collection, and interpretation of the data. Despite its value in informing DCE design, pretesting lacks documented good practices and clearly reported applied examples. The purpose of this paper is threefold: (1) to define pretesting and describe the pretesting process specifically in the context of a DCE, (2) to present a practical guide and pretesting interview discussion template for researchers looking to conduct a rigorous pretest of a DCE, and (3) to provide an illustrative example of how these resources were operationalized to inform the design of a complex DCE aimed at eliciting trade-offs between personal privacy and societal benefit in the context of a police method known as investigative genetic genealogy (IGG).

Key Points

Pretesting is one of several essential stages in the design of a high-quality discrete-choice experiment (DCE) and involves engaging with representatives of the target population to improve the readability, presentation, and structure of the preference instrument.
There is limited available guidance for pretesting DCEs and few transparent examples of how pretesting is conducted.
Here, we present and apply a guide that prompts researchers to consider aspects of content, presentation, comprehension, and elicitation when pretesting a DCE.
We also present a pretesting interview discussion template to support researchers in applying this guide to their own DCE pretest interviews.

Introduction

Discrete-choice experiments (DCEs) are a frequently used method to explore the preferences of patients and other stakeholders in health [1–5]. The growth in the application of DCEs can be explained by an abundance of foundational theory and methods guidance [6–9], the establishment of good research practices [1, 8, 10, 11], and interest in the approach among decision-makers [12, 13]. In recent years, greater emphasis has been placed on confirming the quality and the internal and external validity of DCEs to ensure their usefulness, policy relevance, and impact [5, 14–16].

The value that decision-makers place on DCE findings depends in large part on the quality of the instrument design process itself. Numerous quality indicators of DCEs have been discussed in the literature, including validity and reliability [5], match to the research question [17], patient-centricity [18], heterogeneity assessment [19], comprehensibility [20], and burden [21]. Developing a DCE that reflects these qualities requires a rigorous design process, often achieved through activities such as evidence synthesis, expert consultation, stakeholder engagement, pretesting, and pilot testing [15, 17]. Of these, there is ample guidance on evidence synthesis [22, 23], including qualitative methods [24, 25], stakeholder engagement [26, 27], and pilot testing [11, 27].

By contrast, there remains a paucity of literature on the procedures, methodologies, and theory for pretesting DCEs; even studies that report having completed pretesting typically provide minimal explanation of their approach. Existing literature on pretesting DCEs has typically described pretesting procedures within an individual study rather than offering generalized or comprehensive guidance for the field. Practical guidance on how to conduct pretesting for all components of a DCE is needed to help establish a shared understanding of, and transparency around, pretesting. Ultimately, this information can improve the overall DCE design process and strengthen confidence in findings from DCE research.

This paper has three objectives. The first objective is to define pretesting and describe the pretesting process specifically in the context of a DCE. The second objective is to present a guide and corresponding interview discussion template which can be applied by researchers when conducting a pretest of their own studies. The third objective is to provide an illustrative example of how these resources were applied to the pretest of a complex DCE instrument aimed at eliciting trade-offs between personal privacy and societal benefit in the context of a police method known as investigative genetic genealogy (IGG).

What is Pretesting?

Pretesting describes the process of identifying problem areas of a survey and making modifications to rectify these problems. Pretesting can be used to evaluate and improve the content, format, and structure of a survey instrument, generally by engaging members of the target population to review and provide feedback on the instrument. Additionally, pretesting can be used to reduce survey burden, improve clarity, identify potential ethical issues, and mitigate sources of bias [28]. In the general survey design field, pretesting is considered critical to improving survey validity [29]. Empirical evidence demonstrates that pretesting can help identify problems, improve survey quality and reliability, and increase participant satisfaction in completing surveys [30].

Pretesting typically begins after a complete survey draft has been designed. It occurs between a participant from the target population and one or more survey researchers. It is typical to explain to the participant that the activity is a pretest and that their responses will be used to inform the design of the survey. Researchers often take field notes during the pretest. After each pretest, or at most after a small set of pretests, the research team debriefs to review findings and make survey modifications. The survey is revised iteratively throughout this process.

Several approaches can be used to collect data during a pretest, both for surveys generally and for surveys that include DCEs [31]. One approach is cognitive interviewing, which tests the readability and potential bias of an instrument through prospective or retrospective prompts [32]. Cognitive interviewing may ask participants to "think aloud" over the course of the survey, allowing researchers to understand how participants react to questions and how they arrive at their answers, as well as to follow up with specific probes. Another approach is debriefing, wherein participants independently complete the survey or a section of it. Researchers then ask participants to reflect on what they have read, describe what they believe they were asked, and comment on any specific aspects of interest to researchers, such as question phrasing or the order of survey content [33].

In behavioral coding approaches, researchers observe participants as they silently complete the activity, noting areas of perceived hesitation or confusion [33]. This is sometimes done through eye-tracking approaches, wherein eye movements are studied to explore how information is being processed [34]. Pretesting can also occur through codesign approaches, which are more participatory in nature. In a codesign approach, researchers may ask participants not just to reflect on the instrument as it is presented but to actively provide input that can be used to refine the instrument [35]. Across all methods, strengths and weaknesses of the instrument can be identified inductively or deductively.

Pretesting in the explicit context of choice experiments has not been formally defined. Rather, the term has been used to describe a range of exploratory and flexible approaches for assessing how participants perceive and interact with a choice experiment [1, 36]. Recently, there has been greater emphasis on the interpretation, clarity, and ease of use of choice experiments [37], given their increasing complexity and online administration [38–40]. We propose a definition of pretesting for choice experiments here (Box 1).

Pretesting of DCEs is as much an art as it is a science. In practice, pretesting is often a codevelopmental form of engagement with potential survey respondents. This engagement can empower pretesting participants to suggest changes and to highlight issues, and the research team (and potentially other stakeholders) works with these participants to solve issues jointly. Because it is a type of engagement (as opposed to a qualitative study), we argue that pretesting is process-heavy, with the desired outcome often being the development of a better instrument. Pretesting may also be incomplete and involve judgment calls about what may or may not work, or about what impact certain additions or subtractions may have.

Box 1. Defining pretesting in choice experiments
One of the key stages of developing a choice experiment, pretesting is a flexible process where representatives of the target population are engaged to improve the readability, presentation, and structure of the survey instrument (including educational material, choice experiment tasks, and all other survey questions). The goal of pretesting a DCE is to improve the validity, reliability, and relevance of the survey, while decreasing sources of bias, burden, and error associated with preference elicitation, data collection, and interpretation of the data.

Additional considerations beyond those made during general survey design are required when pretesting surveys that include DCEs. Specific efforts should be made to improve the educational material used to motivate and prepare people to participate in the survey. A great deal of effort should be placed on the choice experiment tasks and the process by which information is presented, preferences are elicited, and trade-offs are made [11, 25, 38, 41]. More than one type of task format and/or preference elicitation mechanism may be assessed during pretesting, as sketched below. It is also important to assess all other survey questions to ensure that data are collected appropriately and to assess the burden and impact of the survey. Pretesting can be aimed at reducing any error associated with preference elicitation and data collection. Information garnered during pretesting may help generate hypotheses or give the research team greater insight into how people make decisions. Hence, pretesting, like piloting a study, may provide insights that help with the interpretation of the data from the survey, both a priori and a posteriori.
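To make the comparison of task formats concrete, the following is a minimal sketch (ours, not drawn from the paper) of how two candidate presentations of the same choice task might be generated for pretest participants to react to. The attribute names and levels are invented for illustration and are not the actual IGG study attributes.

# Illustrative sketch: render one hypothetical choice task in two candidate
# formats so that pretest participants can compare them. Attributes and
# levels below are invented and do not reflect the actual IGG instrument.
import random

ATTRIBUTES = {  # hypothetical attributes and levels
    "Who can access your DNA data": ["Police only", "Police and researchers"],
    "Chance the method solves the case": ["10%", "50%", "90%"],
    "How long your data are retained": ["1 year", "Indefinitely"],
}

def random_profile(rng):
    """Draw one alternative (profile) by sampling a level for each attribute."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def render_paired_task(a, b):
    """Candidate format 1: side-by-side paired comparison."""
    lines = ["Which policy do you prefer?",
             f"{'':36}{'Policy A':26}Policy B"]
    for attr in ATTRIBUTES:
        lines.append(f"{attr:36}{a[attr]:26}{b[attr]}")
    return "\n".join(lines)

def render_sequential_task(a, b):
    """Candidate format 2: one profile at a time, with an accept/reject question."""
    blocks = []
    for name, profile in (("Policy A", a), ("Policy B", b)):
        rows = "\n".join(f"  {attr}: {level}" for attr, level in profile.items())
        blocks.append(f"{name}\n{rows}\nWould you accept this policy? (yes/no)")
    return "\n\n".join(blocks)

rng = random.Random(7)
alt_a, alt_b = random_profile(rng), random_profile(rng)
print(render_paired_task(alt_a, alt_b))
print()
print(render_sequential_task(alt_a, alt_b))

In a pretest interview, both renderings could be shown to the same participant, with cognitive probes about which format is easier to read and whether the trade-offs are equally apparent in each.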

Pretesting is one of several activities used to inform DCE development (Table 1). Activities that precede pretesting include evidence synthesis through literature reviews, stakeholder engagement with members of the target population, and expert input from professionals in the field. These activities can be used to identify and refine the scope of the research questions as well as to develop draft versions of the survey instrument. Pilot testing typically follows pretesting. Although the terms have sometimes been used interchangeably [42], pretesting and pilot testing are distinct aspects of survey design with unique objectives and approaches. While both methods generally seek to improve surveys, pretesting centers on identifying areas in need of improvement, whereas quantitative pilot testing typically examines preliminary results and whether the survey questions and choice experiment are performing as intended; a sketch of one such quantitative check follows.
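As one hedged illustration of this distinction (ours, not the authors' method), a quantitative pilot check might fit a simple choice model to early responses to verify that attribute coefficients have sensible signs before full fielding. The sketch below assumes a two-alternative DCE, in which a conditional logit can be estimated as a binary logit on attribute differences; the variable names and "true" weights are hypothetical, and the data are simulated.

# Minimal sketch of a quantitative pilot-data check for a two-alternative
# DCE. With paired tasks, a conditional logit reduces to a binary logit on
# the attribute differences between the two alternatives. All weights and
# data here are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_tasks = 400                       # pilot observations: respondents x tasks
true_beta = np.array([1.2, -0.8])   # hypothetical "true" utility weights

# Attribute differences (alternative A minus alternative B) for two attributes.
x_diff = rng.choice([-1.0, 0.0, 1.0], size=(n_tasks, 2))

# Simulate choices: probability of choosing A is a logit in the utility difference.
utility_diff = x_diff @ true_beta
chose_a = (rng.random(n_tasks) < 1.0 / (1.0 + np.exp(-utility_diff))).astype(float)

# No intercept: with difference coding, a constant would capture only a
# left-right ordering effect, which this sketch ignores.
fit = sm.Logit(chose_a, x_diff).fit(disp=0)
print(fit.params)    # coefficient signs should match expectations before fielding
print(fit.pvalues)   # unstable estimates flag attributes or levels to revisit

Such a check is a pilot-testing activity rather than a pretesting one: it asks whether the instrument is performing as intended statistically, not whether participants understand it.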

Table 1. Stages of choice experiment design