## Major referee concerns
- The paper is written poorly or unconventionally, uses too many or inappropriate anecdotal examples, or does not fully explain what is going on in the experiment, the experimetrics, and why we made these choices. In some instances I believe our writing style led referees to misunderstand or reject the main point.
- Wrong modelling choices in the experimetrics (e.g. the choice of clustering)
- The model is not general enough to make definite predictions for our experiment, and/or our experiment is not rigid enough (in my opinion, 'not artificial enough') to allow a precise theoretical prediction
- The experimental design permits alternative explanations (and referees were not convinced by the robustness checks we used to try to rule out stories like 'bitterness from exclusion')
- The experiment does not have enough independent observations, and thus not enough statistical power, to detect or rule out important effects, particularly if the session is taken as the unit of observation
- We also need to cite the literature that referees favour
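To get a feel for the 'independent observations' concern, here is a back-of-envelope sketch (normal approximation for a two-sample comparison) of how many sessions per treatment we would need if the session is the unit of observation. The effect sizes are illustrative assumptions, not estimates from our data.

```python
# Rough power check: sessions per treatment arm needed to detect an
# assumed standardized effect size d with a two-sided test.
# Uses the standard normal-approximation formula:
#   n per arm = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
from math import ceil
from statistics import NormalDist

def sessions_per_arm(d, alpha=0.05, power=0.8):
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

for d in (0.5, 0.8):  # assumed (hypothetical) effect sizes, Cohen's d
    print(f"d={d}: ~{sessions_per_arm(d)} sessions per treatment")
```

Even for a fairly large assumed effect (d = 0.8) this implies on the order of 25 sessions per arm, which makes the referees' point concrete: session-level inference is expensive, and more data (or a web-based design) is the direct fix.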
# Paper with current data
- In terms of a 'cost-benefit analysis', I think it is worth our time to retool and resubmit the paper based on the current data (if not to ExpEcon, then to OEP or Economica or similar)
# New experimental work
I think the key strength of our paper is the finding that 'enforcement deters learning about types, and this can be harmful'; relatedly, we provide unique evidence on 'the effect of enforcement once it disappears.'
I can then ask him to work on the paper and also on designing another experiment. We had some good ideas for experiments that would address the key referee concerns. Namely:
- More data would address the 'lack of sufficient independent observations'
- Replacing exclusion with punishment could deal with many alternative hypotheses (and keep the MPCR consistent)
- At ESSExLab, we could add an outside measure of subjects' other-regard/altruism
- We could even have an 'outside punisher', which would allow us to maintain constant incentives in both stages while ruling out reciprocity (Urs' recent experiments do this)
- We could try to elicit more beliefs about type distributions … and perhaps bring back a strategy-method decision in one part; this would let us better assess when/whether we are in an environment where the 'information helps' condition holds
- If people state a narrower distribution of beliefs in the anonymous case, this adds credibility to the 'learn more about play' interpretation
- If we go web-based, we might be able to move towards a perfect-strangers design
## Executing the new experiment
- I can probably apply for some funds for this at Exeter, perhaps for next year, or over the summer
- It might be worth running this at Essex so we can pair it with the Omnibus and charity data