An empirical study of the effects of priming on crowdsourced evaluations of design concepts

Pang, Michelle Audrey


As product development teams begin utilizing crowdsourcing as a means of ideation, the evaluation of large numbers of design concepts becomes a time-consuming and resource-intensive process that drives development activities and impacts the design of the final product. Crowdsourcing the evaluation of design concepts has been examined in previous work as a means to reduce the demands on expert raters while achieving similar evaluation results. In prior crowdsourcing studies, successful use of novice evaluators required detailed, in-person training that can be time- and cost-prohibitive. This thesis research explores the fidelity of a pairwise comparison method for evaluation that requires minimal training of novice raters. In a pilot study, the pairwise method for crowdsourcing evaluations is compared with crowdsourced evaluations using non-pairwise rating scales and with the evaluations of expert raters. The analysis of pilot study responses indicates that the pairwise comparison method is a promising alternative to the other methods. Another focus of this thesis is to examine the impact of priming novice raters prior to their evaluations of alternative design concepts. A follow-on study incorporates written and empathic priming strategies to determine their impact on novice raters' evaluation of concepts. Raters are asked to consider several criteria, including novelty, feasibility, clarity (of the concept), usefulness, ease of use, and overall worthiness of further development. Results offer insight into the criteria that are most relevant to novice raters and the effect of priming on those evaluations. Specifically, empathic priming focused on ergonomics and ease of use is shown to positively influence the raters' emphasis on those criteria when evaluating concepts.
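The abstract does not specify how the pairwise judgments are aggregated into a ranking of concepts; one common and minimal approach is to rank each concept by the fraction of its comparisons it wins. The sketch below illustrates that idea under assumed inputs (the `(winner, loser)` tuple format and the concept IDs are hypothetical, not taken from the thesis):

```python
from collections import defaultdict

def rank_by_win_fraction(comparisons):
    """Aggregate pairwise judgments into a ranking.

    comparisons: iterable of (winner, loser) concept IDs, one per
    rater judgment (hypothetical input format, for illustration).
    Returns concept IDs sorted by fraction of comparisons won.
    """
    wins = defaultdict(int)    # comparisons won per concept
    total = defaultdict(int)   # comparisons participated in per concept
    for winner, loser in comparisons:
        wins[winner] += 1
        total[winner] += 1
        total[loser] += 1
    return sorted(total, key=lambda c: wins[c] / total[c], reverse=True)

# Hypothetical example: three concepts, five rater judgments
judgments = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]
print(rank_by_win_fraction(judgments))  # A wins 3/3, C wins 1/3, B wins 1/4
```

More elaborate aggregation models (e.g., Bradley-Terry) fit latent quality scores to the same win/loss data, but a simple win fraction already turns many low-effort binary judgments into an ordering of concepts.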

