Paranoia, paranoia. Everybody’s trying to get me… it might not be rational but it’s what I believe

The big project of Artificial Intelligence is computationalism. Can we create a formula (or a series of formulae, perhaps) which doesn't merely simulate reasoning but actually, honestly, really and truly is reasoning?

There are lots of reasons to think that we can't. John Searle's Chinese Room argument demonstrated that understanding is more than mere symbol processing. Understanding is a major aspect of our reasoning process and algorithms are mere symbol processors, so reasoning won't be captured in an algorithm.

It's all very interesting, but after reading Löwe and Pacuit's "An abstract approach to reasoning about games with mistaken and changing beliefs" (Australasian Journal of Logic, 2008), I begin to suspect that there might be another problem with computationalism: akrasia.

Akrasia is when you act against your better judgement (either due to weakness or impetuousness).  A good example is when you’re out shopping: ‘You know you ought to save money.  You don’t really need the boxset of the 1970s Japanese television series Monkey.  Your best judgement states that you shouldn’t purchase the boxset, but you do it anyway.’

The problem of akrasia arises from the spontaneity of our belief-making process: we don't always choose our beliefs, and we usually apply our reasoning processes only after our beliefs have formed (we hold a myriad of opinions and beliefs that we've never really thought about in much depth).

Löwe and Pacuit tackle the problem of changing beliefs and of mapping rational responses when the agent is uncertain of all the relevant details. Let us suppose that Löwe and Pacuit have successfully mapped the reasoning process of agents with changing and mistaken beliefs. We still have no bridge between this map and what the agents will actually believe. Worse, even with this map, we don't know which pathway the agents will take. We need some way of understanding how the agent's beliefs are going to change.

Unfortunately:

The benefit of our framework is its extraordinary simplicity: we make the player’s preferences the basic entities of the entire algorithm and encode the belief change into the notion of state, thus avoiding to have to discuss the belief change functions. Because of this, we get a very parsimonious and flexible algorithm that can be applied to many different situations. (Benedikt Löwe and Eric Pacuit, “An abstract approach to mistaken and changing beliefs”, Australasian Journal of Logic (6) 2008, p. 176)

We need some way of mapping when agents are going to go against the map (that is, be akratic) and cutting the belief change functions out of the picture seems to crop the mental landscape rather harshly.
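To make the contrast concrete, here is a minimal sketch in Python of the two modelling choices: an explicit belief-change function on the one hand, and belief change folded into the notion of state on the other, which is how I read the quoted passage. The names here (Agent, State, naive_revision, prefers) are my own illustrations; the paper itself gives no code.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet

# Option 1: explicit belief change.  Beliefs are carried by the agent and a
# revision function says how they change when an observation arrives.
@dataclass(frozen=True)
class Agent:
    beliefs: FrozenSet[str]

BeliefChange = Callable[[FrozenSet[str], str], FrozenSet[str]]

def naive_revision(beliefs: FrozenSet[str], observation: str) -> FrozenSet[str]:
    """A deliberately crude revision rule: just add whatever was observed."""
    return beliefs | {observation}

# Option 2 (my reading of the quoted passage): no separate revision function.
# Each state already bundles the actual game position with the agent's
# possibly mistaken view of it, and preferences over states do all the work.
@dataclass(frozen=True)
class State:
    position: str            # where the game actually is
    believed_position: str   # where the agent thinks it is

def prefers(payoff: Callable[[State], float], s1: State, s2: State) -> bool:
    """Preferences over whole states are treated as the basic entities."""
    return payoff(s1) > payoff(s2)

if __name__ == "__main__":
    a = Agent(beliefs=frozenset({"opponent will cooperate"}))
    a = Agent(beliefs=naive_revision(a.beliefs, "opponent defected"))
    print(sorted(a.beliefs))
```

In the second picture nothing in the model says why the believed position drifts away from the true one; that is exactly the information the belief-change functions would have carried, and it is what an account of akrasia would seem to need.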
