Michael A. Lawrence edited e1_results.tex  almost 9 years ago

Commit id: db9c48c18520745725699f8bf23aa02b0f4c9a20


\subsection{Results}

\subsubsection{Data pre-processing}

All analyses were performed using R \citep{R2015}. All trials on which a response was made when no target was on screen (1.7\% of trials overall) were removed. Trials were further filtered on the basis of response time using a mild yet robust trimming procedure: for each participant and cell of the experimental design, RTs were first log-transformed, then any log-RT deviating from the median by more than 5 times the median absolute deviation from the median (``MedAbMed'') was flagged for rejection. Applying the trimming on the logarithmic scale ensures that slow and fast responses receive equal weight despite the positive skew typical of RT data. Use of the median and MedAbMed makes the trimming of a given observed RT robust, in that it is less sensitive to the presence of even more extreme RTs. Application of this procedure rejected $2\%$ of trials.

\subsubsection{Modeling}

Detection and memory response data were analyzed separately, but using the same general framework for Bayesian inference. For both data types, a hierarchical model was specified such that each individual participant's value for a given model parameter was drawn from a Gaussian distribution. For readers familiar with the terminology of mixed-effects modeling, this scheme implements a random effect of participant on all parameters, but enforces zero correlation among the parameters. The assumption of zero correlation was imposed for computational practicality, given that the current work concerns the overall effects of the manipulated variables rather than individual differences or correlations between them. Detection RT was modeled on the log-RT scale, with trial-by-trial log-RT taken to be Gaussian distributed with a noise term common to all trials and participants.
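The trimming rule described in the pre-processing section above can be stated compactly. Writing $x_i = \log \mathrm{RT}_i$ for the log-RTs within a given participant and design cell, and $\tilde{x}$ for their median, trial $i$ is flagged for rejection when

```latex
\[
\left| x_i - \tilde{x} \right|
\; > \;
5 \times \operatorname*{median}_{j} \left| x_j - \tilde{x} \right| ,
\qquad
\tilde{x} = \operatorname*{median}_{j} x_j ,
\]
```

where the right-hand median over $j$ is the median absolute deviation from the median (``MedAbMed'') referred to in the text.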
The location parameter of the Gaussian distribution was modeled with an intercept and contrasts corresponding to the manipulated variables. Trial-by-trial memory responses were modeled as a finite mixture of a uniform ``guess'' response on the circular domain and a Von Mises distributed ``memory'' response centered on zero degrees of error. The proportion of each memory response type is captured by the parameter $\rho$, while the concentration (i.e.\ fidelity) of the memory response is captured by the parameter $\kappa$. As $\rho$ is bounded to the domain $0$ to $1$, and $\kappa$ is a scale parameter that must be $\geq 0$, the Gaussian sampling of per-participant parameter values noted above took place on the logit and log scales, respectively. All models were specified and evaluated within the computational framework provided by Stan, a probabilistic programming language for Bayesian statistical inference. For all models, a first pass at inference was conducted with flat (``uninformed'') priors to obtain a general range of reasonable values for the intercept and scale parameters (e.g., mean log-RT, between-participants variance of mean log-RT). From this first pass, weakly informed priors were derived (for specifics, see Appendix A) to speed computation. Note, however, that while weakly informed in scale, the priors for the effects of the manipulated variables (e.g., the effect of cue validity on mean log-RT) were centered on zero, putting the onus on the data to produce an update of beliefs sufficiently large to drive these distributions away from zero. Each model was updated by the observed data via computation of 8 independent MCMC chains of 10,000 iterations each, yielding confident convergence for all models. See Appendix B for evaluations of convergence and posterior predictive checks.
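The mixture described above has the standard uniform--Von Mises form. Taking $\rho$ here as the weight on the memory component (the complementary convention is equivalent), the density of the trial-level response error $\theta$ (in radians, centered on zero) is

```latex
\[
p(\theta \mid \rho, \kappa)
\;=\;
(1-\rho)\,\frac{1}{2\pi}
\;+\;
\rho\,\frac{\exp\{\kappa \cos\theta\}}{2\pi \, I_{0}(\kappa)} ,
\qquad
-\pi \le \theta < \pi ,
\]
```

where $I_{0}$ is the modified Bessel function of the first kind of order zero, and $\mathrm{logit}(\rho)$ and $\log(\kappa)$ are the quantities sampled per participant from Gaussian distributions as described above.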
\subsubsection{Detection response time}

Means and $95\%$ credible intervals ($CrI_{95\%}$) for detection response times in all conditions are shown in Figure 2. Computing a cuing score (valid $-$ invalid) in each SOA condition yields cuing estimates of 7~ms ($CrI_{95\%}$: 1--12~ms) in the 100~ms SOA condition and 34~ms ($CrI_{95\%}$: 29--39~ms) in the 800~ms SOA condition.

\subsubsection{Probability of memory}

Probability of memory results here.