9:50 - 10:10. Practical Advantages and Applications of Bayesian Hypothesis Tests
in Experimental Psychology. Eric-Jan Wagenmakers,
University of Amsterdam.
Experimental psychologists can profit greatly from
the adoption of Bayesian hypothesis tests. Through a series of practical
examples, I illustrate how Bayesian hypothesis tests allow researchers
to quantify evidence in favor of the null hypothesis; how they allow researchers
to monitor the evidence as the data accumulate and stop whenever
they feel so inclined; how they put a premium on parsimony, such
that a high-N study will not automatically lead to a “significant” result;
and, finally, how they relate to concepts that are intuitive and relevant.
Throughout this presentation, I emphasize the recent software developments
that make Bayesian hypothesis testing feasible, easy, and fun.
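The sequential monitoring of evidence described in this abstract is easy to illustrate for the simplest possible case. The sketch below (an illustration under stated assumptions, not code from the talk) computes a Bayes factor for a binomial rate after every observation, comparing H0: theta = 1/2 against H1: theta uniform on (0, 1); for this pair of hypotheses the marginal likelihoods have closed forms:

```python
from math import comb

def bf01(k, n):
    """Bayes factor for H0: theta = 0.5 vs. H1: theta ~ Uniform(0, 1),
    given k successes in n binomial trials.
    Under H0: P(k | H0) = C(n, k) * 0.5**n.
    Under H1: P(k | H1) = integral over theta of
              C(n, k) * theta**k * (1 - theta)**(n - k) = 1 / (n + 1).
    BF01 > 1 favors the null; BF01 < 1 favors the alternative."""
    return comb(n, k) * 0.5 ** n * (n + 1)

# Monitor the evidence as data accumulate, stopping whenever BF01
# (or 1 / BF01) crosses a chosen evidence threshold.
data = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # hypothetical Bernoulli outcomes
k = 0
for n, x in enumerate(data, start=1):
    k += x
    print(f"after {n:2d} trials: BF01 = {bf01(k, n):.3f}")
```

Because the Bayes factor is a valid measure of evidence at every sample size, this monitoring does not require the stopping rule to be fixed in advance, which is the point the abstract emphasizes.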
10:15 - 10:35. Examples of Using Flexible Psychological Models in the Bayesian
Analysis of Data. Michael D. Lee,
University of California, Irvine.
Bayesian methods allow for a more
flexible and mature approach to analyzing data than do traditional methods.
The flexibility comes because it is straightforward in a Bayesian
setting to make realistic assumptions about the complexities of experimental
data, including pervasive issues like individual differences. The
maturity comes because generic descriptive statistical models can easily
be replaced by domain-specific models of psychological processes,
with meaningful psychological parameters replacing default statistical
ones. We give two case studies that make these general points, coming
from the memory retention and category-learning literatures. The
memory retention example focuses on the form of the forgetting curve.
The category-learning example focuses on the role of selective attention
in learning. In both cases, a Bayesian analysis reveals more information
in the data than the traditional analysis can manage and allows stronger
and more general conclusions to be drawn.
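The two functional forms most commonly contrasted in the forgetting-curve literature are the exponential and the power function. A minimal sketch of the two candidates (illustrative parameterizations, not the authors' specification):

```python
from math import exp

def exponential_retention(t, a, b):
    """Exponential forgetting curve: recall probability a * exp(-b * t),
    where a is initial retention and b the decay rate (assumed names)."""
    return a * exp(-b * t)

def power_retention(t, a, b):
    """Power-law forgetting curve: recall probability a * (1 + t)**(-b).
    Decays quickly at first, then more slowly than the exponential."""
    return a * (1.0 + t) ** -b
```

A Bayesian analysis can place these forms side by side in one hierarchical model and let the data arbitrate between them, while allowing parameters such as a and b to vary across individuals.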
10:40 - 11:00. A Hierarchical Bayesian Dual-Process Model Reveals That Recognition
Memory May Be Mediated by a Single Process. Jeffrey N. Rouder
& Michael S. Pratte,
University of Missouri, Columbia.
The dual-process signal detection (DPSD) model of Yonelinas has
proved pivotal in assessing processes underlying recognition memory. In
conventional analysis, data are averaged over people or items to form hit
and false alarm rates. We show how this averaging may distort parameter
estimates and threaten prior conclusions of separate recollection and
familiarity processes. We develop a Bayesian hierarchical DPSD model
that posits variation in multiple processes across conditions, individuals,
and items. This model yields simultaneous estimates of recollection
and familiarity effects across conditions, people, and items. Analysis
across a number of confidence-rating recognition memory tasks reveals
no strong evidence for separate processes. Recollection and familiarity
estimates covary strongly across conditions, people, and items, indicating
that much of the variation in confidence-rating recognition memory
can be accounted for by a single mnemonic process.
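For readers unfamiliar with the DPSD model named in this abstract, the following sketch (my own illustration, not the authors' hierarchical model) shows the standard dual-process prediction for hit and false-alarm rates: an old item is recognized either through all-or-none recollection (probability R) or, failing that, through familiarity modeled as equal-variance signal detection:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def dpsd_rates(R, d, c):
    """Predicted (hit, false-alarm) rates under a dual-process
    signal detection model: recollection succeeds with probability R;
    otherwise an 'old' response occurs when familiarity exceeds
    criterion c, with old-item familiarity shifted upward by d.
    R, d, and c are illustrative parameter names, not the authors'
    notation."""
    hit = R + (1.0 - R) * phi(d - c)
    false_alarm = phi(-c)
    return hit, false_alarm

# With R = 0 the model reduces to ordinary equal-variance signal
# detection, which is one way to see the single-process claim above.
```

The abstract's finding that recollection and familiarity estimates covary strongly is, in these terms, the observation that R and d do not vary independently across conditions, people, and items.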
11:05 - 11:25. A Hierarchical Bayesian Framework for Series of Response Times. Peter F. Craigmile
& Trisha Van Zandt,
Ohio State University. (Presented by Trisha Van Zandt.)
Response time (RT) data arise from reactions to
a succession of stimuli under varying experimental conditions over time.
Because of the sequential nature of the experiments, there are trends (due
to learning, fatigue, fluctuations in attentional state, etc.) and serial dependencies
in the data. The data also exhibit extreme observations that can be
attributed to lapses, intrusions from outside the experiment, and errors occurring
during the experiment. Any adequate analysis should account for
these features and quantify them accurately. We demonstrate how simple
Bayesian hierarchical models can be built for several RT sequences, differentiating
between subject-specific and condition-specific effects.
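The features the abstract lists, trends, serial dependence, and extreme observations, can be made concrete with a toy generative sketch (illustrative parameter values and names, not the authors' model):

```python
import random

def simulate_rt_series(n, base=0.5, learning=-0.002, rho=0.4,
                       sd=0.05, lapse_prob=0.02, seed=2):
    """Simulate a toy response-time sequence (in seconds) with a
    linear learning trend, AR(1) serial dependence, and occasional
    extreme 'lapse' trials. All parameter values are assumptions
    chosen for illustration."""
    rng = random.Random(seed)
    rts, noise = [], 0.0
    for t in range(n):
        noise = rho * noise + rng.gauss(0.0, sd)  # serial dependence
        rt = base + learning * t + noise          # trend plus noise
        if rng.random() < lapse_prob:             # attentional lapse
            rt += rng.uniform(0.5, 1.5)
        rts.append(max(rt, 0.15))                 # physiological floor
    return rts
```

A hierarchical Bayesian model of such a series would estimate the trend, dependence, and lapse components jointly, rather than discarding extreme observations or assuming independent trials.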
11:30 - 11:50. Multiple Comparisons and Power Make Sense in Bayesian Analysis. John K. Kruschke,
Indiana University, Bloomington.
In experiments with multiple conditions, Bayesian methods encourage
thorough data analysis and discovery, including numerous multiple
comparisons, because Bayesian analysis provides rational estimates of
individual and group parameters without being affected by which comparisons
the analyst might intend to conduct. Bayesian analysis produces
a complete distribution of credible combinations of parameter values.
From this distribution, simulated data reveal the probability of achieving
any research goal. Bayesian analysis thereby provides straightforward
estimates of statistical power and replication probability, even for the
complex experimental designs and goals of real research. These points
are illustrated with actual analyses of choice and response time data from
experiments in human learning.
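The posterior-simulation route to power described in this abstract can be illustrated in a deliberately simple setting. The sketch below (a toy under stated assumptions, not Kruschke's analysis) draws credible parameter values from a Beta posterior on a single rate, simulates replicate experiments, and counts how often the replicate achieves a stated research goal:

```python
import random

def estimated_power(post_a, post_b, n_new, goal_prob=0.95,
                    n_sims=2000, n_draws=500, seed=1):
    """Monte Carlo estimate of the probability that a replicate study
    of n_new trials achieves the goal 'posterior P(theta > 0.5)
    exceeds goal_prob', given a current Beta(post_a, post_b)
    posterior on the rate theta. The specific goal and all names are
    illustrative assumptions."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_sims):
        # 1. Draw a credible parameter value from the posterior.
        theta = rng.betavariate(post_a, post_b)
        # 2. Simulate a replicate data set of n_new Bernoulli trials.
        k = sum(rng.random() < theta for _ in range(n_new))
        # 3. Analyze the replicate: with a uniform prior the posterior
        #    is Beta(1 + k, 1 + n_new - k); estimate P(theta > 0.5)
        #    by sampling from it.
        p_gt = sum(rng.betavariate(1 + k, 1 + n_new - k) > 0.5
                   for _ in range(n_draws)) / n_draws
        successes += p_gt > goal_prob
    return successes / n_sims
```

The same recipe extends to any goal that can be checked on a simulated data set, which is how the approach scales to the complex designs the abstract mentions.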