Michael A. Lawrence edited e1_results.tex  almost 9 years ago


All models were specified and evaluated within the computational framework provided by \textit{Stan}, a probabilistic programming language for Bayesian statistical inference. For each model, a first pass at inference was conducted with flat (``uninformative'') priors to establish a general range of reasonable values for the intercept and scale parameters (e.g., mean log-RT, between-participants variance of mean log-RT, etc.). From this first pass, weakly informative priors were derived (for specifics, see Appendix A) to speed computation. Note, however, that while weakly informative in scale, the priors for the effects of the manipulated variables (e.g., the effect of cue validity on mean log-RT) were centered on zero, placing the onus on the data to produce an update of beliefs sufficiently large to drive these distributions away from zero. Each model was updated by the observed data via computation of 8 independent MCMC chains of 10,000 iterations each, yielding confident convergence for all models. See Appendix B for evaluations of convergence and posterior predictive checks.

\subsubsection{Detection response time}

Means and $95\%$ credible intervals (CrI$_{95\%}$) for detection response times in all conditions are shown in Figure 2. Computing a cuing score (valid $-$ invalid) in each SOA condition yields cuing estimates of 7\,ms (CrI$_{95\%}$: 1--12\,ms) in the 100\,ms SOA condition and 34\,ms (CrI$_{95\%}$: 29--39\,ms) in the 800\,ms SOA condition.

\subsubsection{Probability of memory}

Probability of memory results here.

\subsubsection{Fidelity of memory}
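As an aside on the credible intervals reported above: given posterior draws from the MCMC chains, a posterior mean and CrI$_{95\%}$ follow from simple summaries of the draws. The sketch below uses simulated stand-in draws (not actual model output) for a hypothetical cuing-effect parameter, purely to illustrate the computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws (in ms) for a cuing effect (valid - invalid).
# These are simulated stand-ins, NOT output from the actual Stan models;
# 80,000 draws mimics 8 chains x 10,000 iterations.
draws = rng.normal(loc=7.0, scale=3.0, size=80_000)

# Posterior mean and central 95% credible interval from the draws.
post_mean = draws.mean()
cri_lo, cri_hi = np.percentile(draws, [2.5, 97.5])

print(f"cuing effect: {post_mean:.0f} ms "
      f"(CrI_95%: {cri_lo:.0f}-{cri_hi:.0f} ms)")
```

The same percentile summary applied to the real posterior draws for each SOA condition would yield interval estimates of the form reported in the text.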