Decision Workshops


Graphs summarising findings

This page is a brief summary of the two papers here and here.


They describe an ingenious experiment that compares different methods of forecasting the outcomes of disputes and measures how accurate each one is.


Each column of this graph is a different historical dispute (something that actually happened), but the disputes are quite obscure, so the subjects of the experiment would not know the actual result. There's an artists' protest, a dispute about TV channel distribution, a takeover, an investment decision, and so on. One of them (a water dispute) could have led to a war. Because each of these situations actually happened, we know the outcome. So the experimenter took the real outcome, mixed it with other plausible outcomes to make a multiple-choice test, and then measured how good different methods were at predicting what would happen.


This graph shows what the result would be from guessing:


Now, because this is a multiple-choice test, you can sometimes get the right result just by guessing. As there were six choices for the artists' protest, there is a 1 in 6 chance of getting the right result by guessing. There were three choices for the distribution channel outcome, giving a 1 in 3 chance of guessing correctly, and so on.


So (on average) we would expect to get about 27% right just by guessing. Let’s see how we did using other methods....
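The "guessing" baseline is just the average of the per-dispute chance levels. A minimal sketch of the calculation (only two of the option counts are stated above; the rest here are illustrative placeholders, not the study's actual figures):

```python
def expected_guessing_accuracy(option_counts):
    """Mean probability of a correct random guess across disputes.

    Each dispute has n plausible outcomes, so a random guess is
    right with probability 1/n; the baseline score is the average.
    """
    return sum(1 / n for n in option_counts) / len(option_counts)

# 6 choices for the artists' protest and 3 for the distribution
# channel come from the text; the remaining counts are made up
# purely to illustrate how a ~27% average could arise.
counts = [6, 3, 4, 3, 4, 3]
print(f"{expected_guessing_accuracy(counts):.0%}")
```

With counts like these the baseline lands in the high twenties, matching the "about 27% just by guessing" figure quoted above.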


But before clicking the button to the next page, ask yourself: "How well do you think simply asking people to make a judgement will do?" and "How well do you think using Game Theory will do?"

Full list of supporting papers
Show results of experiment