Problems with utility-based choice models

According to @nau2001, Savage created a framework for modeling choices under uncertainty by defining ‘consequences’: prizes that could occur (or be imagined) in any state of the world and that carry the same utility index in every state of the world.
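As a point of reference, here is a minimal sketch of the usual formalization (the symbols $S$, $C$, $p$, $u$, and $f$ are the conventional ones, not notation taken from @nau2001): an act maps states to consequences, and the agent ranks acts by subjective expected utility,

$$
f : S \to C, \qquad U(f) = \sum_{s \in S} p(s)\, u(f(s)),
$$

where $S$ is the set of states, $C$ the set of consequences, and $p$ the agent’s subjective probability over states. The key assumption is that the utility index $u$ depends only on the consequence $f(s)$, never on the state $s$ in which it is received.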

In effect, the agent is abstracted away from a real world in which constraints determine what can happen in each state, and in which different states would change the agent’s opinions and values. Even within the model, the state a person is in can be known only to that agent.

This leads directly to a problem of cooperation, since each person must make decisions based on the unknown states of other people and the consequences those others face. But consequences are supposed to be independent of states, so knowing other people’s consequences would change the utility of your own states based on theirs, and vice versa. The model’s complexity compounds very quickly. This is exactly the knowledge problem described by Hayek (@hayek1937)
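As an illustrative sketch of where the assumption breaks (the notation $u_i$ and $s_j$ is mine, not from either cited source): in a two-agent version, if agent 1’s valuation of a consequence shifts with agent 2’s privately known state, the supposedly state-independent index acquires a state argument,

$$
u_1(c) \;\longrightarrow\; u_1(c \mid s_2), \qquad u_2(c) \;\longrightarrow\; u_2(c \mid s_1),
$$

so each agent’s expected-utility calculation now requires a model of the other’s unobservable state, which in turn depends on a model of one’s own, and the regress continues.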


References