Michael A. Lawrence: Add description of RT trimming (almost 9 years ago)

Commit id: 2e32e18c1a64c5d0924f2a1c8f2c47407f581794


       

\subsection{Results}

\subsubsection{Data pre-processing}

All analyses were performed using R \citep{R2015}. All trials during which a response was made when there was no target on screen (1.7\% of trials overall) were removed. Participants missed targets on less than 1\% of trials overall, so an explicit analysis of miss rates was not possible. Trials were further filtered on the basis of response time using a mild yet robust trimming procedure whereby, for each participant and cell of the experimental design, RTs were first log-transformed and any log-RT deviating from the median by more than 5 times the median absolute deviation from the median (``MedAbMed'') was flagged for rejection (a schematic R implementation of this rule is sketched below). Applying the trimming on the logarithmic scale ensures that slow and fast responses carry equal weight despite the positive skew typical of RT data, while the use of the median and the MedAbMed makes the trimming of a given observed RT less sensitive to the presence of even more extreme RTs. Application of this procedure rejected $2\%$ of trials.

\subsubsection{Modelling}

Detection and memory response data were analyzed separately, but using the same general framework for Bayesian inference. For both data types, a hierarchical model was specified such that each individual participant's value for a given model parameter was drawn from a Gaussian distribution. For readers familiar with the terminology of mixed-effects modelling, this scheme implements a random effect of subject on all parameters but enforces zero correlation amongst the parameters. The assumption of zero correlation was imposed for computational practicality, in light of the current work's interest in the overall effects of the manipulated variables rather than in individual differences or the correlations among them. Detection RT was modelled on the log-RT scale, with trial-by-trial log-RT taken to be Gaussian distributed with a noise term common to all trials and participants; the location parameter of this Gaussian was modelled with an intercept and contrasts corresponding to the manipulated variables. Trial-by-trial memory responses were modelled as a finite mixture of a uniform ``guess'' response on the circular domain and a Von Mises distributed ``memory'' response centered on zero degrees of error. The proportion of each memory response type is captured by the parameter $\rho$, while the concentration (i.e., fidelity) of the memory response is captured by the parameter $\kappa$. As $\rho$ is bounded between $0$ and $1$, and $\kappa$ is a scale parameter that must be $\geq 0$, the Gaussian sampling of per-participant parameter values noted above took place on the logit and log scales, respectively.
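For concreteness, one way to write the detection-RT model just described is the following; the notation is purely illustrative (the symbols $t$, $s[t]$, $\alpha$, $\beta$, $x$, $\mu$, $\tau$, and $\sigma$ are not drawn from the analysis code):
\[
\log \mathrm{RT}_t \sim \mathrm{Normal}\!\left(\alpha_{s[t]} + \sum_k x_{t,k}\,\beta_{s[t],k},\; \sigma\right),
\]
\[
\alpha_s \sim \mathrm{Normal}(\mu_\alpha, \tau_\alpha), \qquad \beta_{s,k} \sim \mathrm{Normal}(\mu_k, \tau_k),
\]
where $t$ indexes trials, $s[t]$ is the participant contributing trial $t$, and $x_{t,k}$ are the contrast codes for the manipulated variables. Drawing each $\alpha_s$ and $\beta_{s,k}$ from its own independent Gaussian is what implements the zero-correlation random effect of subject described above.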
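Under the same illustrative notation, the memory-error model corresponds to the mixture density
\[
p(\theta_t) \;=\; \rho_{s[t]}\, f_{\mathrm{VM}}\!\left(\theta_t \mid 0,\, \kappa_{s[t]}\right) \;+\; \left(1 - \rho_{s[t]}\right)\frac{1}{2\pi},
\]
\[
\mathrm{logit}(\rho_s) \sim \mathrm{Normal}(\mu_\rho, \tau_\rho), \qquad \log(\kappa_s) \sim \mathrm{Normal}(\mu_\kappa, \tau_\kappa),
\]
where $\theta_t$ is the angular error on trial $t$ (expressed in radians, so the uniform guessing component has density $1/2\pi$) and $f_{\mathrm{VM}}$ is the Von Mises density. Placing the Gaussian participant-level sampling on $\mathrm{logit}(\rho_s)$ and $\log(\kappa_s)$ respects the bounds on $\rho$ and $\kappa$ noted above.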
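Finally, the trimming rule described under \emph{Data pre-processing} can be sketched in a few lines of base R. The sketch is illustrative rather than the script actually used: the data frame \texttt{dat} and its columns \texttt{id}, \texttt{cell}, and \texttt{rt} are assumed names.

\begin{verbatim}
# Illustrative sketch of the MedAbMed trimming rule. Assumes a data frame
# 'dat' with one row per trial and columns 'id' (participant), 'cell'
# (cell of the experimental design), and 'rt' (response time).
dat$log_rt <- log(dat$rt)

# Per participant-by-cell median and median absolute deviation from the
# median; constant = 1 returns the raw MedAbMed (no normal-consistency
# rescaling).
cell_med <- ave(dat$log_rt, dat$id, dat$cell, FUN = median)
cell_mam <- ave(dat$log_rt, dat$id, dat$cell,
                FUN = function(x) mad(x, constant = 1))

# Flag and drop log-RTs deviating from their cell median by more than
# 5 times the MedAbMed.
dat$reject <- abs(dat$log_rt - cell_med) > 5 * cell_mam
dat_trimmed <- dat[!dat$reject, ]
\end{verbatim}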