Figure 3. Unimodal data showing significant motion priming of the
current trial’s perceived velocity by the previous trial’s velocity. (A)
For each modality, trials were binned into two groups based on the speed
of the preceding trial (‘previous slower’ = 20, 40, 50 °/s; ‘previous
faster’ = 70, 80, 100 °/s) and psychometric functions were fitted to
each group. For both modalities, the PSE for the ‘previous faster’ group
was significantly lower than for the ‘previous slower’ group. Error bars
show ±1 standard deviation based on 10,000 bootstraps. (B) Motion
priming for all levels of previous velocity. PSEs decline (more
“perceived faster” responses) as preceding velocity increases.
Bootstrap sign-tests confirmed the slopes of the best-fitting lines for
audition and vision were both significantly negative but did not differ
from each other (p = .2078). Error bars show 95% confidence intervals
based on 10,000 bootstraps.
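To illustrate how a PSE can be extracted from binned response data of this kind, here is a minimal sketch that fits a cumulative Gaussian psychometric function and reads off its 50% point. The speeds and response proportions are hypothetical, and the paper's actual fitting procedure may differ (e.g., in the function family or lapse-rate handling).

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# Cumulative Gaussian psychometric function: P("test faster") vs test speed.
def cum_gauss(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * np.sqrt(2.0))))

# Hypothetical data: test speeds (deg/s) and proportion of "faster" responses.
speeds = np.array([20, 40, 50, 60, 70, 80, 100], dtype=float)
p_faster = np.array([0.02, 0.15, 0.30, 0.50, 0.72, 0.88, 0.99])

(mu, sigma), _ = curve_fit(cum_gauss, speeds, p_faster, p0=[60.0, 15.0])
pse = mu  # the PSE is the speed at which "faster" is chosen 50% of the time
print(f"PSE = {pse:.1f} deg/s")
```

A leftward shift of this fitted function (a lower `mu`) corresponds to more "perceived faster" responses, which is the PSE shift reported in the figure.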
With the priming effect established at the coarse level of fast versus
slow, we next analysed the PSE shift at a finer grain. To do this, we
split the ‘previous slower’ group into three smaller groups
corresponding to its three levels of previous velocity (i.e., 20, 40, 50
°/s) and fitted psychometric functions to each of the groups. We did the
same for the ‘previous faster’ data (splitting it into three previous
velocities: 70, 80, 100 °/s). Figure 3B shows the PSEs as a function of
these previous velocities, for both audition and vision. Dividing the
data in this way reduces the observations in each group by a factor of
three yet there is still a clear and significant negative slope for each
modality (audition: slope = -.0918, p = .0008; vision: slope = -.0594, p
= .0181), thus showing the same priming relationship as in Figure 3A
(previous faster speeds consistently shifting PSEs to the left,
indicating increasing levels of “perceived faster” responses). The
priming functions for auditory and visual motion are very similar. To
test whether they differed, we resampled the data at each level of
previous speed and re-calculated the best-fitting line across the
previous speed levels, repeating this procedure 10,000 times. If the
auditory and visual data had different slopes, the differences between
the bootstrapped slopes for audition and vision should be consistently
different from zero. A bootstrap sign-test confirmed this was not the
case (p = .2078).
Thus, motion priming effects as a function of velocity are very similar for
both auditory and visual motion.
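The bootstrap sign-test described above can be sketched as follows. The PSE values here are simulated stand-ins for the bootstrap distributions one would obtain by resampling trials at each previous-speed level; the slopes, noise level, and seed are all hypothetical, chosen only to mimic the qualitative pattern in Figure 3B.

```python
import numpy as np

rng = np.random.default_rng(0)
prev_speeds = np.array([20, 40, 50, 70, 80, 100], dtype=float)

# Hypothetical bootstrap distributions of PSEs at each previous-speed level:
# mean PSE declines with previous speed (negative slope), plus resampling noise.
n_boot = 10_000
aud_pses = 62 - 0.09 * prev_speeds + rng.normal(0, 1.5, (n_boot, 6))
vis_pses = 61 - 0.06 * prev_speeds + rng.normal(0, 1.5, (n_boot, 6))

# Best-fitting line across previous-speed levels for each bootstrap sample
# (np.polyfit fits every column of a 2-D y at once; row 0 holds the slopes).
aud_slopes = np.polyfit(prev_speeds, aud_pses.T, 1)[0]
vis_slopes = np.polyfit(prev_speeds, vis_pses.T, 1)[0]

# Sign-test on the slope differences: two-tailed p from the proportion of
# differences falling on the less frequent side of zero.
diffs = aud_slopes - vis_slopes
p = 2 * min((diffs > 0).mean(), (diffs < 0).mean())
print(f"sign-test p = {p:.4f}")
```

If the two modalities shared a common slope, the bootstrapped differences would straddle zero roughly symmetrically, yielding a non-significant p, as the paper reports.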