\subsubsection{Modeling}

Detection and memory response data were analyzed separately, but within the same general framework for Bayesian inference. For both data types, a hierarchical model was specified such that each individual participant's value for a given model parameter was drawn from a Gaussian distribution. For readers familiar with the terminology of mixed effects modeling, this scheme implements a random effect of participant on all parameters, but enforces zero correlation among the parameters. The assumption of zero correlation was imposed for computational practicality, given that the current work is concerned with the overall effects of the manipulated variables rather than with individual differences or the correlations among them.

Detection RT was modeled on the log-RT scale, where trial-by-trial log-RT was taken to be Gaussian distributed with a noise term common to all trials and participants. The location parameter of this Gaussian distribution was modeled with an intercept and contrasts corresponding to the manipulated variables.

Trial-by-trial memory responses were modeled as a finite mixture of a uniform ``guess'' response on the circular domain and a von Mises distributed ``memory'' response centered on zero degrees of error. The proportion of each memory response type is captured by the parameter $\rho$, while the concentration (i.e., fidelity) of the memory response is captured by the parameter $\kappa$. As $\rho$ is bounded between $0$ and $1$, and $\kappa$ is a scale parameter that must be $\geq 0$, the Gaussian sampling of per-participant parameter values noted above took place on the logit and log scales, respectively (see the schematic specification at the end of this section).

All models were specified and evaluated within the computational framework provided by Stan, a probabilistic programming language for Bayesian statistical inference. For all models, a first pass at inference was conducted with flat (``uninformed'') priors to establish a general range of reasonable values for the intercept and scale parameters (e.g., mean log-RT, between-participants variance of mean log-RT). From this first pass, weakly informed priors were derived (for specifics, see Appendix A) and imposed to speed computation. Note, however, that while weakly informed in scale, the priors for the effects of the manipulated variables (e.g., the effect of cue validity on mean log-RT) were centered on zero, placing the onus on the data to produce an update of beliefs large enough to drive the posterior for each effect away from zero. Each model was updated by the observed data via 8 independent MCMC chains of 10,000 iterations each, yielding confident convergence for all models. See Appendix B for evaluations of convergence and posterior predictive checks.

\subsubsection{Detection response time}

Means and $95\%$ credible intervals ($CrI_{95\%}$) for detection response times in all conditions are shown in Figure 2. Computing a cuing score (valid $-$ invalid) in each SOA condition yields cuing estimates of 7~ms ($CrI_{95\%}$: 1--12~ms) in the 100~ms SOA condition and 34~ms ($CrI_{95\%}$: 29--39~ms) in the 800~ms SOA condition.

\subsubsection{Probability of memory}

Probability of memory results here.

\subsubsection{Fidelity of memory}

Fidelity of memory results here.
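
For concreteness, the memory-response model described in the Modeling subsection can be summarized as follows; the notation here is ours for exposition, with $\theta_{si}$ denoting the response error of participant $s$ on trial $i$ expressed in radians, so that the uniform ``guess'' density is $1/(2\pi)$:
\begin{equation*}
p(\theta_{si} \mid \rho_s, \kappa_s) \;=\; (1 - \rho_s)\,\frac{1}{2\pi} \;+\; \rho_s\,\mathrm{vonMises}(\theta_{si} \mid 0, \kappa_s),
\end{equation*}
with the per-participant parameters drawn from population-level Gaussian distributions on the transformed scales:
\begin{equation*}
\mathrm{logit}(\rho_s) \sim \mathrm{Normal}(\mu_\rho, \sigma_\rho), \qquad \log(\kappa_s) \sim \mathrm{Normal}(\mu_\kappa, \sigma_\kappa).
\end{equation*}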
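
A minimal Stan sketch of this hierarchical mixture is shown below for exposition only: variable names and prior scales are illustrative, and the contrasts for the manipulated variables on $\rho$ and $\kappa$ are omitted (see Appendix A for the priors actually used).

\begin{verbatim}
data {
  int<lower=1> N;                 // total number of trials
  int<lower=1> S;                 // number of participants
  int<lower=1,upper=S> subj[N];   // participant index for each trial
  vector[N] err;                  // response error in radians, in [-pi, pi]
}
parameters {
  real mu_rho;                    // population mean of logit(rho)
  real<lower=0> sigma_rho;        // between-participant SD of logit(rho)
  vector[S] z_rho;                // standardized participant deviates
  real mu_kappa;                  // population mean of log(kappa)
  real<lower=0> sigma_kappa;      // between-participant SD of log(kappa)
  vector[S] z_kappa;
}
transformed parameters {
  // map participant deviates back to the natural scales
  vector<lower=0,upper=1>[S] rho = inv_logit(mu_rho + sigma_rho * z_rho);
  vector<lower=0>[S] kappa = exp(mu_kappa + sigma_kappa * z_kappa);
}
model {
  // weakly informed priors (illustrative values, not those of Appendix A)
  mu_rho ~ normal(0, 2);
  sigma_rho ~ normal(0, 1);
  mu_kappa ~ normal(0, 2);
  sigma_kappa ~ normal(0, 1);
  z_rho ~ normal(0, 1);
  z_kappa ~ normal(0, 1);
  // mixture of a uniform "guess" and a von Mises "memory" response
  for (n in 1:N)
    target += log_mix(rho[subj[n]],
                      von_mises_lpdf(err[n] | 0, kappa[subj[n]]),
                      uniform_lpdf(err[n] | -pi(), pi()));
}
\end{verbatim}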