Explanatory gap

The Computational Explanatory Gap

James Reggia, Derek Monner & Jared Sylvester :: Journal of Consciousness Studies, 21, No. 9&10, 2014, pp. 153-78 :: www.imprint.co.uk/jcs.html

Summary and review of the above paper

INTRODUCTION: The durability of the explanatory gap between neural processing and consciousness is seen as surprising given the successes of neuroscience in recent decades. Models of cognitive processing still require external direction, which is precisely what the conscious areas of the human brain do not.

Computational models in consciousness studies have had a mixed reception, and have been largely ignored by artificial intelligence researchers. One problem has been the lack of agreement as to the neural correlates of consciousness. Beyond this, the authors see what they call ‘the computational explanatory gap’ as a more important obstacle. They note that the perceived importance of understanding consciousness has grown in recent years with the acceptance that subjective first-person experience is not restricted to the perception of sensory qualia, but also involves deliberative thought, a function of higher-level cognition.

Explanatory gap

There is a lack of understanding as to how what the authors call ‘cognitive information-processing’ can be mapped onto neural processing. This cognitive category includes decision-making, planning and goal-directed problem solving. The computational explanatory gap is argued to be related to the philosophical explanatory gap, the apparent gap between neural processing and subjective experience. The crucial problem, however, is argued to be how computations that support goal-directed behaviour can be realised in neural network processing.

Top-down/bottom-up

Different approaches are possible here, broadly divided between top-down symbolic or numerical approaches and bottom-up neural processing. Top-down methods have been successful in areas such as reasoning and problem solving, but not in apparently easier areas such as pattern recognition and motor control. They have also coped poorly with noise, unexpected developments and small alterations to memory systems; in other words, they are not well suited to the conditions of the real world.
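To make the contrast concrete, the sketch below is a hypothetical illustration (not taken from the paper) of the top-down style: a goal-directed problem solver whose states, operators and goal are all specified by the programmer in advance, which is exactly the kind of external direction the authors highlight. It solves a trivial block-stacking puzzle by breadth-first search.

```python
# Hypothetical top-down, symbolic sketch (not from the paper): the state space,
# operators, and goal are all hand-specified, and the solver searches them
# exhaustively for a plan.
from collections import deque

# A state is a tuple of stacks, e.g. (("A", "B"), ("C",)) means block B sits
# on A in the first stack and C stands alone in the second.
START = (("A", "B", "C"), ())
GOAL = ((), ("C", "B", "A"))

def moves(state):
    """Generate every state reachable by moving one top block to another stack."""
    for i, src in enumerate(state):
        if not src:
            continue
        for j in range(len(state)):
            if i == j:
                continue
            new = list(map(list, state))
            block = new[i].pop()
            new[j].append(block)
            yield tuple(map(tuple, new))

def solve(start, goal):
    """Breadth-first search: finds a shortest sequence of states if one exists."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan + [state]
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [state]))
    return None

if __name__ == "__main__":
    for step in solve(START, GOAL):
        print(step)
```

Such a solver reasons transparently and finds an optimal plan, but only within the narrow, noise-free world it was handed.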

Bottom-up approaches tend to have the opposite strengths and weaknesses, and have been most successful with learning processes that would be unconscious in a human, and with automated processes such as robotic arms in factories or driverless vehicles. The computational gap is also evident in the slowness of human conscious processing, which is mainly suited to dealing with one thing at a time; attempts at multiple simultaneous conscious processing show a rise in errors.
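By way of contrast with the symbolic sketch above, the following is an equally hypothetical bottom-up example (again, not from the paper): a single perceptron that learns a noisy pattern-recognition task from examples rather than rules. It copes tolerably with noisy input, but its knowledge ends up as opaque numeric weights with no explicit reasoning trace.

```python
# Hypothetical bottom-up sketch (not from the paper): a perceptron learns to
# recognise a target pattern from noisy examples, with no hand-written rules.
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0]  # the "true" pattern the network should learn to detect

def noisy_example():
    """Return (inputs, label): the target pattern or a random one, plus one bit of noise."""
    if random.random() < 0.5:
        x, label = TARGET[:], 1
    else:
        x, label = [random.randint(0, 1) for _ in TARGET], 0
        if x == TARGET:                      # make sure negatives really differ
            x[0] ^= 1
    x[random.randrange(len(x))] ^= 1         # inject noise: flip one random bit
    return x, label

weights = [0.0] * len(TARGET)
bias = 0.0
RATE = 0.1

# Training: adjust the weights from examples alone.
for _ in range(2000):
    x, label = noisy_example()
    predicted = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
    error = label - predicted
    weights = [w + RATE * error * xi for w, xi in zip(weights, x)]
    bias += RATE * error

# Testing on fresh noisy examples.
correct = sum(
    (1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0) == label
    for x, label in (noisy_example() for _ in range(500))
)
print(f"accuracy on noisy patterns: {correct / 500:.0%}")
```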

Human direction required

The durability of the explanatory gap is seen as puzzling in view of the success, over the last century or so, in understanding unconscious processing in the brain and in identifying the prefrontal areas involved in executive decisions; the role of consciousness in all this has nevertheless remained unclear. The modelling of cognitive control processes has proved more challenging than anticipated; models tend to require human direction and are unable to generalise from specific cases to variations on them, which means they have essentially failed to do what the conscious areas of the human mind do all the time.
