
\subsection{Results}

\subsubsection{Data pre-processing}

All analyses were performed using R \citep{R2015}. All trials on which a response was made when there was no target on screen (1.7\% of trials overall) were removed. Participants missed targets on less than 1\% of trials overall, so an explicit analysis of miss rates was not possible. Trials with response times (RTs) faster than 200 ms ($<$1\%) or slower than 1000 ms ($<$1\%) were removed from the analysis.

\subsubsection{Modelling}

Detection and memory response data were analyzed separately, but within the same general framework for Bayesian inference. For both data types, a hierarchical model was specified such that each participant's value for a given model parameter was drawn from a Gaussian distribution. For readers familiar with the terminology of mixed-effects modelling, this scheme implements a random effect of participant on all parameters, but enforces zero correlation among the parameters. The assumption of zero correlation was imposed for computational practicality, given that the current work is concerned with the overall effects of the manipulated variables rather than with individual differences or correlations among them. Detection RT was modelled on the log-RT scale, with trial-by-trial log-RT taken to be Gaussian distributed with a noise term common to all trials and participants. The location parameter of this Gaussian was modelled with an intercept plus contrasts corresponding to the manipulated variables. Trial-by-trial memory responses were modelled as a finite mixture of a uniform ``guess'' response on the circular domain and a von Mises distributed ``memory'' response centered on zero degrees of error. The proportion of each memory response type is captured by the parameter $\rho$, while the concentration (i.e., fidelity) of the memory response is captured by the parameter $\kappa$.
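To make the mixture concrete, the likelihood of a set of circular response errors under this model can be sketched as follows. This is a minimal illustrative Python sketch (the analyses reported here used R and Stan); the function name and all parameter values are ours, not the authors' code.

```python
import numpy as np
from scipy.stats import vonmises

def memory_mixture_loglik(errors, rho, kappa):
    """Log-likelihood of circular response errors (radians, in [-pi, pi])
    under a mixture of a von Mises 'memory' component centered on zero
    (mixture weight rho) and a uniform 'guess' component (weight 1 - rho)."""
    guess_density = 1.0 / (2.0 * np.pi)           # uniform on the circle
    memory_density = vonmises.pdf(errors, kappa)  # von Mises centered on zero error
    return float(np.sum(np.log(rho * memory_density + (1.0 - rho) * guess_density)))

# Illustrative data: 100 simulated errors from a concentrated memory distribution
rng = np.random.default_rng(0)
errors = vonmises.rvs(5.0, size=100, random_state=rng)
print(memory_mixture_loglik(errors, rho=0.9, kappa=5.0))
```

As $\rho \rightarrow 1$, responses are attributed entirely to memory; as $\rho \rightarrow 0$, the likelihood reduces to that of uniform guessing.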
As $\rho$ is bounded to the domain $0$ to $1$, and $\kappa$ is a scale parameter that must be $\geq 0$, the Gaussian sampling of per-participant parameter values noted above took place on the logit and log scales, respectively.

All models were specified and evaluated within the computational framework provided by Stan, a probabilistic programming language for Bayesian statistical inference. For all models, a first pass at inference was conducted with flat (``uninformed'') priors to establish a general range of reasonable values for the intercept and scale parameters (e.g., mean log-RT, between-participant variance of mean log-RT). From this first pass, weakly informed priors were derived (for specifics, see Appendix A) to speed computation. Note, however, that while weakly informed in scale, the priors for the effects of manipulated variables (e.g., the effect of cue validity on mean log-RT) were centered on zero, putting the onus on the data to cause an update of beliefs sufficiently large to drive these distributions away from zero. Each model was updated by the observed data via 8 independent MCMC chains of 10,000 iterations each, yielding confident convergence for all models. See Appendix B for evaluations of convergence and posterior predictive checks.

\subsubsection{Detection response time}

RT results here

\subsubsection{Probability of memory}

Probability of memory results here.
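The logit/log sampling scheme described in the Modelling subsection can be sketched as follows. This is again an illustrative Python sketch rather than the authors' code, and all population-level values are assumed for demonstration.

```python
import numpy as np

def inv_logit(x):
    """Map an unbounded value to the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)

# Assumed population-level values on the transformed scales (illustrative only)
mu_logit_rho, sd_logit_rho = 2.0, 0.5
mu_log_kappa, sd_log_kappa = 1.5, 0.3
n_participants = 20

# Per-participant values: Gaussian draws on the logit and log scales...
logit_rho = rng.normal(mu_logit_rho, sd_logit_rho, size=n_participants)
log_kappa = rng.normal(mu_log_kappa, sd_log_kappa, size=n_participants)

# ...then back-transformed, so every participant's rho lies in (0, 1)
# and every kappa is strictly positive.
rho = inv_logit(logit_rho)
kappa = np.exp(log_kappa)
```

Sampling on the transformed scales lets the Gaussian hierarchy remain unconstrained while guaranteeing that the back-transformed parameters respect their bounds.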

\subsubsection{Fidelity of memory}

Fidelity of memory results here.