The fields of action planning and automatic theorem proving in artificial intelligence (AI) have greatly benefited from well-defined benchmark problems and annual competitions. These have made fair comparisons between different approaches and systems possible and triggered a competitive spirit to advance the state of the art and to incorporate new concepts.
We see the need for competitions in the field of human reasoning as well: the number of cognitive theories that claim to explain parts of human reasoning is continuously increasing (for syllogistic reasoning alone there are at least twelve cognitive theories; Khemlani & Johnson-Laird, 2012), yet few comparisons on common data sets exist.
In contrast to AI competitions, where often an optimal solution for a problem needs to be found, cognitive modeling aims at explaining the underlying cognitive processes by approximating the answer distribution generated by the participants. This requires not only a computational model but a cognitive computational model, i.e., one from which the underlying cognitive processes can be inferred. Multinomial processing trees (for an introduction, see Singmann & Kellen, 2013) are an excellent tool for representing such cognitive processes.
Models can differ in the quality of their predictions:
- Which answers does the theory predict, and which does it not?
- Is there a qualitative ordering among the different answers?
- Is there a quantitative prediction (e.g., 77% of the participants choose answer A, 18% answer B, and 5% answer C)?
To evaluate different cognitive models, several accepted methods from mathematical psychology and artificial intelligence exist. We will evaluate the goodness of fit of algorithmic and multinomial processing tree models independently, on as-yet undisclosed behavioral data:
1. by the root mean square error (RMSE) on the quantitative data, and
2. by the Bayesian Information Criterion (BIC) on the multinomial processing tree analysis.
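As an illustrative sketch of the two criteria (the three-answer distributions and all numbers below are made up for illustration; they are not the evaluation data):

```python
import math

# Hypothetical predicted vs. observed answer distributions for one
# syllogistic task (proportions over the possible answers; made-up numbers).
predicted = [0.77, 0.18, 0.05]
observed = [0.70, 0.20, 0.10]

def rmse(pred, obs):
    """Root mean square error between two answer distributions."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: smaller values indicate a better
    trade-off between fit and number of free parameters."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

print(f"RMSE: {rmse(predicted, observed):.4f}")
print(f"BIC:  {bic(-100.0, 3, 50):.2f}")  # hypothetical log-likelihood
```

Note that the BIC penalizes each additional free parameter of a processing tree, so a tree that fits equally well with fewer parameters is preferred.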
To participate, please send your submission as a zip file no later than September 1st, 2017 to firstname.lastname@example.org (subject: Human Syllogistic Reasoning Challenge).
1. For participation in the algorithmic part:
- the source code (preferably in Python, R, Prolog, or Java) and a makefile to execute it from the command line. Your program receives as input the classical syllogism abbreviations (e.g., AA1 or IA2; cf. Khemlani & Johnson-Laird, 2012) and the number of participants to fit, and outputs, for each input, the predicted answer distribution
- your model's quantitative predictions as a CSV or Excel file (a template will be provided at the website below within the next weeks)
2. For participation in the multinomial processing tree part:
- a multinomial processing tree whose nodes reflect the steps of the algorithmic cognitive process, with the number of parameters precisely specified. Please provide the tree in an MPTinR-readable form.
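As a hypothetical illustration of the "easy" model-file format that the MPTinR package accepts (one line per response category, with the category's probability expressed in the model parameters; separate trees are divided by blank lines), here is an invented two-parameter guessing tree for a task with three response categories — it is not a proposed theory:

```
# d = probability of deriving the conclusion analytically
# g = probability of guessing the first response when derivation fails
d + (1 - d) * g
(1 - d) * (1 - g) * 0.5
(1 - d) * (1 - g) * 0.5
```

Each parameter corresponds to a branching point in the tree, and the category probabilities sum to one; the number of free parameters (here two, d and g) is what must be precisely specified in your submission.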
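A minimal sketch of what a command-line submission for the algorithmic part might look like. The answer encoding, the nine-category response set, the output format, and the uniform placeholder model are all assumptions for illustration; the organizers' template will define the actual format:

```python
import sys

# Hypothetical response categories: the four conclusion moods in both
# term orders plus "no valid conclusion" (NVC).
ANSWERS = ["Aac", "Aca", "Iac", "Ica", "Eac", "Eca", "Oac", "Oca", "NVC"]

def predict(syllogism: str, n_participants: int) -> dict:
    """Return a predicted answer distribution for a task such as 'AA1'.

    Placeholder model: a uniform distribution over all answers. A real
    submission would implement a cognitive theory here.
    """
    p = 1.0 / len(ANSWERS)
    return {answer: p for answer in ANSWERS}

if __name__ == "__main__":
    # Example invocation:  python model.py AA1 100
    task, n = sys.argv[1], int(sys.argv[2])
    for answer, p in predict(task, n).items():
        print(f"{task},{answer},{p:.4f}")
```

A makefile target such as `run: ; python model.py $(TASK) $(N)` would then make the program executable from the command line as requested above.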
Participation is possible in one or both parts. The three best models will be presented at this year's KI conference in Dortmund. Participation is open to everyone.