It has been shown that not all phonological features are equally relevant for word recognition (Martin & Peperkamp, 2015).

Discuss other papers blah blah.

What could drive such asymmetries? Given that word recognition takes place primarily in the auditory modality, a natural source could be low-level acoustic differences between the features. The voicing feature is likely to be less salient from an acoustic point of view than, say, the manner feature, given that the difference between a stop and a fricative is the difference between a period of silence and sustained high-frequency noise. Another possible source of these differences is listeners' lexical knowledge. Indeed, knowing that one's language exploits a certain feature more than another to distinguish words might bias listeners to attend more closely to that featural information during speech perception.

The present study takes a two-pronged approach to investigating the sources of asymmetrical featural importance in word recognition. First, we present and build upon a measure of lexical organization known as functional load. We then report on an experiment that tested prelexical perceptual biases. We focus our discussion on the possibility that listeners combine multiple sources of information when recognizing words in the auditory modality, including bottom-up acoustic biases and top-down lexical knowledge.
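For orientation, one common entropy-based formulation of functional load (a sketch of the standard definition, not necessarily the exact measure developed below; the symbols $L$, $H$, and $L_{x=y}$ are introduced here purely for illustration) quantifies the load of a contrast between two phonemes $x$ and $y$ as the relative drop in lexical entropy when that contrast is neutralized:
\[
FL(x, y) \;=\; \frac{H(L) - H(L_{x=y})}{H(L)},
\]
where $H(L)$ is the entropy of the lexicon $L$ (with words weighted by their frequency of occurrence) and $L_{x=y}$ is the lexicon obtained by merging $x$ and $y$ throughout. Under this view, a feature whose merger collapses many minimal pairs carries a high functional load.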