\section{Introduction: Our Love Affair with Psychology}
\label{sec:intro}
The field of Human-Robot Interaction, and in particular, the field of
social HRI is nurtured by the weaving of different scientific
perspectives. As a community, we recognise that the technical fields of
engineering, control theory or computer science do not provide much tooling for
the scientific investigation of the ‘Human’ and ‘Interaction’ parts of HRI. For this reason, we
take inspiration and ground much of our research in established results from
social sciences – in our field, primarily social psychology, developmental
psychology, and sociology. As academics, we pride ourselves on standing at the
intersection of these many fields, able to understand and be understood by
programmers and engineers as well as by psychologists.
In this sense, our field embodies the basic idea of cognitive science:
building bridges across disciplines to gain new insights into complex scientific
challenges.
That said, the demographics of academics working in HRI appear skewed
towards engineering backgrounds: one often becomes a researcher in HRI by first
building robots and then looking at how these machines might interact with humans.
While some of us do have a primarily academic background in psychology, many do not.
This is not per se an issue: as capable, rigorous scientists, we can read and
understand the social science literature, take inspiration from it, and reproduce
its tasks, protocols, and results. This is indeed how science is supposed to work.
We think, however, that a ‘second order’ effect might be underestimated: because
many of us are ‘consumers’ of the psychology literature rather than ‘producers’
and active contributors to the psychology community, we might not always share
the same common ground with these neighbouring academic fields.
This has two consequences. First, as we are generally less familiar with the
various social science communities, we tend to be less critical: we do not
question their findings as readily as we would question results from our own
community. This effect is reinforced by the perceived maturity of academic fields
like social or developmental psychology, compared with the youth of human-robot
interaction.
Second, we build assumptions about how research is conducted in other communities
based on our own experience. As our background is often in the exact sciences,
we intuitively expect evaluation methods to deliver robust, exact, clear-cut
results, results that are always reproducible; and we are certainly embarrassed
whenever our own results do not paint such a clear, legible picture.
It is reasonable to think that, consciously or not, we assume the same ‘exact
science’ mindset across all scientific disciplines, including the social sciences,
and accordingly do not tend to question their established results.
However, over the last few years, growing evidence has accumulated that
many of these ‘classical’, ‘established’ results from the social sciences –
psychology in particular – might not be as solid and clear as assumed. With much
ado, a recently published article \cite{rpp} evidenced the difficulty of
reproducing several key results from psychology: out of 100 studies sampled from
a range of psychology sub-fields, only 36\% of the replication studies found
significant results, whereas 97\% of the original studies had reported significant
effects. In other words, about two thirds of the studies could not be properly
replicated. Whatever the reasons might be (from publication bias, to sociological
changes in the population, to small effect sizes), this calls for caution whenever
we build upon supposedly established results.
Moreover, findings that were true fifty years ago might not be true anymore
today: societies, and thus the populations being studied, evolve.
We are convinced that many researchers in HRI are already aware of these issues.
The purpose of this article is to make this concern (“we rely too much and too
blindly on results from psychology that might not hold”) surface within the HRI
community. To illustrate the point, we present hereafter two unsuccessful
attempts at reproducing the effect of social facilitation in the presence of
robots. Social facilitation is an effect whereby the mere presence of a (silent,
passive) external agent influences one’s behaviour: the participant performs
better or worse, depending on the task and on the expectations. A large body of
psychology literature evidences this effect, which has been studied in robotics
as well.
However careful our experimental design was, we have not been able
to reproduce the effect.