This study undertook a head-to-head comparison of best-worst, best-best, and ranking discrete choice experiments (DCEs) to help decide which method to use when moving beyond traditional single-best DCEs. Respondents were randomized to one of three preference elicitation methods. Rank-ordered (exploded) mixed logit models and respondent-reported data were used to compare the methods and to compare first and second choices. First choices differed from second choices, and preferences differed between elicitation methods, even beyond scale and scale dynamics. First choices in best-worst showed good choice consistency, scale dynamics, and statistical efficiency, but this method’s second choices performed worst. Ranking performed best on respondent-reported difficulty and preference; best-best’s second choices performed best on statistical efficiency. All three preference elicitation methods improve the efficiency of data collection relative to using first choices only. However, the differences in preferences between first and second choices challenge the move beyond single-best DCEs. If one nevertheless does so, best-best and ranking DCEs are preferred over best-worst DCEs.