Far from the madding brain

The Robotics Primer

Maja J. Mataric

MIT Press (2007)

Summary and review of this book

Introduction: In the 1960s and 1970s there was an emphasis on building robots that thought, reasoned or deliberated in the manner of humans. The computational load involved made the robots too slow for real-world conditions. Since then, and despite the Moore’s-law increase in computing power, roboticists have moved away from anything that closely resembles the human brain. The most common form of robot architecture is reactive, in that it involves direct responses to sensory inputs, without internal rationalising, and with no sign of the reward/punishment evaluation system that plays an important role in human decision-taking. In describing this trend, Mataric repeatedly stresses the heavy computational load involved in even simple interactions with the environment.

Robots are defined as: (1.) autonomous systems, i.e. not totally dependent on some form of remote control; (2.) existing in the external three-dimensional physical world, i.e. not just a simulation in a computer; (3.) sensing their environment and acting on it. A robot has an on-board controller, making it autonomous, and sensors to acquire information from the environment. The robot responds to the sensor input in a manner directed at goals that it acts to achieve. As well as sensing the external world, the robot may also be able to sense itself, meaning that it has input about its internal state. This may include a memory that stores information about the external world. The robot also has effectors that enable it to take actions, such as moving about and handling objects. The robot’s controller can use sensor inputs and the robot’s memory store to decide, independently of the robot’s creators, what actions to take.

Despite Moore’s law, the author stresses the continued challenge presented by movement in robotics. In nature, organisms that move require brains in a way that plants etc. do not. Moving the robot around is seen as the first challenge for the robot brain, and, after half a century of research, still one that is far from routine. With regard to movement, evolution settled on various numbers of legs for land-based organisms. Wheels are much less versatile, only functioning well on relatively smooth and not too steep surfaces. However, robot designers have tended to favour wheels unless there was a specific requirement for legs. Legs have a large number of degrees of freedom (possible movements they can make), and these require more computing power. Secondly, it is harder to remain stable on legs than on wheels, and this also requires more computing. A bipedal system is particularly challenging for a robot brain, because the centre of gravity has to be kept over a smaller area than for a robot with a larger number of legs.

With robot motion, there can be two main types of objective: to get to a particular location, or, what is more difficult, to follow a particular path throughout a whole journey. A large sector of robot research is dedicated to getting robots to follow a particular path. This involves motion planning, a computationally demanding process requiring a search through all possible routes.
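To make that search concrete, here is a minimal sketch of the kind of route search involved, using breadth-first search over a small occupancy grid. The grid, start and goal are invented for illustration; the book itself presents no code, and real planners must also handle robot size, kinematics and movement costs.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2-D occupancy grid.

    grid[r][c] is True where the cell is blocked. Returns the list
    of cells from start to goal, or None if no route exists.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}           # doubles as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                # reconstruct route backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and not grid[nr][nc] and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable

# A toy 3x3 world with one obstacle in the middle.
world = [[False, False, False],
         [False, True,  False],
         [False, False, False]]
print(plan_path(world, (0, 0), (2, 2)))
```

Even this toy search may visit every free cell before reaching the goal, which is the combinatorial cost the author is pointing to.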

The robot’s manipulator or effector can be an arm or gripper. The end effector is the part that acts on the world, comparable to a finger or a hand. Because this is attached to the rest of the robot, its movement involves the robot as a whole. This is a surprisingly complex task involving the robot’s body, the manipulator and its movement through space. The more degrees of freedom there are in the manipulator, the more computing power is needed to control it. An arm or similar manipulator is difficult to control without first registering what it is touching. Some animal/human mechanisms, such as the ball-and-socket shoulder joint and the complex wrist, are particularly difficult to replicate in robots. Human manipulators such as the hand can be general purpose as a result of their complexity. Robot manipulators tend to be simpler, with only specialised functions. They may have a specific tool attached to their end point, or be designed to hold such a tool.
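As a hedged illustration of why degrees of freedom drive up the control burden, the position of the end effector of even a simple planar two-joint arm is a chained function of all its joint angles. The link lengths and angles below are invented:

```python
import math

def fingertip(theta1, theta2, l1=0.4, l2=0.3):
    """End-effector position of a planar two-link arm.

    theta1, theta2 are joint angles in radians; l1, l2 are link
    lengths in metres (arbitrary example values).
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

print(fingertip(math.radians(30), math.radians(45)))
```

Each additional joint adds another angle to the chain, and the inverse problem, finding joint angles that place the end effector at a desired point, is where the real computational cost lies.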

To function usefully, a robot needs to be able to sense its own body or ‘internal state’ and the conditions in its immediate environment. As with animals and humans, a robot needs proprioceptive sensors. With robots this means sensing such things as the position of its wheels and the joint angles of its arms. Next to this, it needs sensors for the external environment, measuring such things as the level of light, sound and the distance to local objects. The sensors measure physical quantities. Sometimes the same physical property is measured by more than one sensor, which is useful because every individual measurement has a problem with noise. Noise is one of the major challenges of robotics, along with partially hidden aspects of the environment, changes in the environment and lack of prior knowledge of the environment. The robot may have to contend with a noisy, messy environment about which its sensors provide only patchy information. Robot sensors can be graded in terms of the amount of information they provide. The simplest, a switch, provides one ‘bit’ of information in being either on or off. The expression ‘bit’ is derived from b(inary) (dig)it, which has two possible values.
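Where several noisy sensors measure the same quantity, the usual remedy is to combine their readings. A minimal sketch, with invented readings and noise figures, is a variance-weighted average; real robots often use a Kalman filter for the same purpose:

```python
def fuse(readings, variances):
    """Variance-weighted mean of redundant noisy measurements.

    Each reading is weighted by the inverse of its noise variance,
    so more trustworthy sensors count for more. Values are invented.
    """
    weights = [1.0 / v for v in variances]
    return sum(w * r for w, r in zip(weights, readings)) / sum(weights)

# Three range sensors measuring the same distance (metres); the
# third is noisier and so contributes less to the fused estimate.
print(fuse([1.02, 0.97, 1.10], [0.01, 0.01, 0.09]))
```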

The robot’s sensors provide measurements of quantities in either the internal state or the external environment. These measurements need to be processed by the robot before they become useful for its function. This is analogous to the human brain, where sensory inputs at the retina etc. are only useful after several stages of processing in the brain. The more information the sensor provides, the more processing is needed to turn it into something useful. With very simple sensors, the processing can be done on the spot, without resorting to a remote processor. Thus a robot may have a switch on the front of it, which gets pressed when it collides with something. The pressing of the switch has the function of stopping the robot without any resort to a centralised brain. Thus sensory information can be processed in two ways: (1.) the very simple switch type, which calls for an immediate action or the ceasing of an action, and (2.) processing to determine what the sensory input is telling the robot about the environment or its internal state.
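The first, switch-like kind of processing needs almost no computation at all. A sketch of one tick of such a loop, with the switch and motor interfaces as invented placeholders:

```python
def control_step(bump_pressed, set_speed):
    """One tick of a minimal reactive loop: a front bump switch
    carries one bit, and that bit alone decides the action.
    bump_pressed and set_speed stand in for real hardware I/O.
    """
    if bump_pressed:
        set_speed(0.0)      # collision: stop immediately
    else:
        set_speed(0.3)      # otherwise keep rolling forward

# Simulated use: pretend the switch closes on the third tick.
for tick, hit in enumerate([False, False, True]):
    control_step(hit, lambda v: print(f"tick {tick}: speed -> {v}"))
```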

The problem of going from the data output of a sensor to a functionally efficient or goal-driven action is known as the ‘signal-to-symbol’ problem. The sensor data is a measure of some quantity such as voltage, current or resistance, while action requires symbols on which to base decisions. As an example, there might be a rule that if grandmother is there and is smiling, the robot should approach her. That requires internal symbols for grandmother and smiling to be generated before the robot can act. The author cautions that this may seem obvious, as we do it ourselves all the time, but in robotics it is a fundamental and persisting challenge. Sensors provide quantitative signals rather than symbols, and the symbols have to be extracted from the signals by intensive processing. For instance, if the robot has a microphone picking up human voice signals, the processing will involve separating the voice from background noise, and then comparing it with stored recordings (memories) of voices in order to try to achieve voice recognition. A similar example is the robot trying to detect whether the grandmother is in the room. It will have to detect all the objects in the room, separate them from their background, and then compare them to its memory database of objects in order to register whether grandmother is present. This involves computation at a level described as challenging. In fact, various ‘cheats’ have been devised. One approach is that instead of just passively sensing data, the robot can look for data that might help with the task. With grandmother, it might look for objects with the colours of grandmother’s dress, or with her approximate size and speed of motion. In general, it is often computationally less burdensome to look for such clues in the environment than to pursue full knowledge of it, for instance through a camera, because of the intensity of processing the latter involves.
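The colour ‘cheat’ can be sketched as little more than a pixel count. The image format and the dress colour below are invented; the point is only that this is vastly cheaper than full object recognition:

```python
def dress_coloured_fraction(image, target, tol=30):
    """Fraction of pixels near a target RGB colour.

    image is a list of rows of (r, g, b) tuples; target is the
    colour we associate with grandmother's dress (an assumption,
    not the book's code). A high fraction is weak but cheap
    evidence that she may be in view.
    """
    hits = total = 0
    for row in image:
        for (r, g, b) in row:
            total += 1
            if (abs(r - target[0]) < tol and
                    abs(g - target[1]) < tol and
                    abs(b - target[2]) < tol):
                hits += 1
    return hits / total

# A 2x2 toy image: two lavender pixels, two dark background pixels.
img = [[(200, 160, 210), (10, 10, 10)],
       [(205, 155, 215), (12, 9, 11)]]
print(dress_coloured_fraction(img, target=(200, 160, 210)))  # 0.5
```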

The camera that a robot can be fitted with projects light onto an image plane, similar in function to the rods and cones of the human retina. The information in the organic eye must be subsequently processed by the brain, and similarly the information on the camera’s image plane must be processed by the robot. The typical first step in image processing is edge detection, or finding all the edges in the image. Algorithms have been developed to deal with the edge-detection problem in robotics. The next step beyond edges is to segment or organise the image into objects. The lines are compared to models stored in memory, and this may come up with the solution that the object is, for instance, a chair. A complication is that the robot’s processing has to take into account the possibility of viewing the chair or other object from different angles. All possibilities have to be processed, which is computationally very intensive.
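A standard edge-detection technique of the kind referred to is the Sobel operator, which flags pixels where brightness changes sharply. A minimal pure-Python sketch on a grayscale array (the threshold is arbitrary, and real pipelines smooth the image first):

```python
def sobel_edges(img, threshold=100):
    """Mark pixels where the Sobel gradient magnitude is large.

    img is a 2-D list of grayscale values (0-255). Returns a map
    of booleans, True at strong edges. Toy version: no smoothing,
    and the outer ring of pixels is simply skipped.
    """
    h, w = len(img), len(img[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            edges[y][x] = (gx * gx + gy * gy) ** 0.5 > threshold
    return edges

# A tiny image: dark left half, bright right half -> vertical edge.
img = [[0, 0, 255, 255]] * 4
print(sobel_edges(img)[1])  # edge flagged between the two halves
```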

Feedback control is an important aspect of the robot brain. It allows the robot to achieve and maintain a desired state by repeatedly comparing its current state to the desired state. Feedback refers to the information about this comparison sent back to the robot’s brain/controller. The desired state may be an achievement goal, such as getting to a particular location, or alternatively a maintenance goal, such as maintaining its internal state once it gets there. The difference between the current and the desired state of the robot is referred to as the error. If the current and the desired states are not the same, the robot has to decide what action to take. A feedback system will tend to oscillate around the desired state, because it first corrects an error in one direction and then overshoots in the opposite direction, although this can be reduced by recomputing the error more frequently or by making smaller corrective changes in the angle of movement.
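The oscillation the author describes can be reproduced in a few lines. In the sketch below, with invented constants, a proportional term drives a point mass towards the goal; setting the damping gain kd to zero leaves it oscillating around the desired state indefinitely, which is exactly the overshoot problem the text describes:

```python
def feedback_drive(pos, goal, kp=4.0, kd=2.5, dt=0.05, steps=200):
    """Proportional-derivative feedback on a 1-D point mass.

    The error (goal - pos) is recomputed every cycle. With kd = 0
    the mass overshoots and oscillates around the goal forever;
    kd > 0 damps the oscillation. All constants are illustrative.
    """
    vel = 0.0
    for _ in range(steps):
        error = goal - pos               # feedback: compare states
        accel = kp * error - kd * vel    # P term drives, D term damps
        vel += accel * dt
        pos += vel * dt
    return pos

print(feedback_drive(pos=0.0, goal=1.0))  # settles close to 1.0
```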

CONTROL ARCHITECTURES – far from the madding brain

There are four main types of robot control architecture: (1.) deliberative; (2.) reactive; (3.) hybrid; (4.) behaviour-based. Decisions about architecture are based on what the robot may need to do: does it need to predict future situations, does it need to change its behaviour over time, and how fast do events around it happen? Deliberative architectures look into the future and use long timescales. Reactive architectures respond to the immediate environment. Hybrid architectures attempt to combine aspects of both deliberative and reactive architectures.

In a deliberative architecture, the control system has multiple modules for sensing, planning and acting, and these modules do their work sequentially, with the output of one providing the input of the next. In reactive architectures, there are multiple modules active at the same time. In hybrid systems there is a deliberative section, a reactive section and a middle element.

In many environments, the robot cannot sense everything it needs to know immediately. It may need to have a memory store of what has happened, maps, images etc., or to have processing available to make predictions about the future. Representation is the form in which past material is stored in the robot, and can be regarded as a world model. A map is a common form of such a model. A stored map may encode an exact path for the robot to follow, or alternatively just what course to take at particular landmarks. Apart from environmental maps, the robot may store self-knowledge such as proprioceptive information, its own limitations, or plans of action. All this is computationally intensive. For instance, working out a path may involve considering all the possible paths.
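The two styles of map just mentioned can be sketched side by side: an exact metric path versus a course expressed as actions at landmarks. All names and coordinates below are invented for illustration:

```python
# (1) An exact metric path: every waypoint in order, in metres.
metric_path = [(0.0, 0.0), (0.0, 2.5), (1.2, 2.5), (1.2, 4.0)]

# (2) A landmark route: what to do at each recognised landmark.
landmark_route = [
    ("charging_dock", "drive_forward"),
    ("hallway_junction", "turn_left"),
    ("water_cooler", "turn_right"),
    ("office_door", "stop"),
]

def next_action(route, seen_landmark):
    """Look up what to do when a landmark is recognised."""
    for landmark, action in route:
        if landmark == seen_landmark:
            return action
    return "keep_going"   # landmark not on the route

print(next_action(landmark_route, "hallway_junction"))  # turn_left
```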

Deliberative control grew out of deliberative systems in artificial intelligence, such as those successfully used for playing chess. It relies on situations where there is time available to think, and where there is an advantage in having a thought-out strategy. The latter involves planning, which is the process of looking ahead at the outcomes of possible actions. Thus a robot may search every path of a maze, and choose the shortest unless other criteria are brought into play.

This approach works well with small problems, but the larger the problem, the greater the time needed to deal with it. This may become a threat to the robot if it needs to deal with something quickly, such as the risk of collision with objects. The author stresses here the limitations of classical computation despite the much-touted Moore’s law. There is still a definite constraint on how much can be processed in the time that it takes for a robot to make significant movements in its environment. The human brain seems to deal with the same problem by splitting off conscious deliberation from more spontaneous actions.

The deliberative planning architecture has three stages: sensing, planning and execution of the plan. The computational burden of this process creates serious problems in robotics. The combined inputs of sensors, including complex sensors, and the memory store of internal representations, including a variety of different perspectives on objects, create a large state space for the robot to search and to keep updated. The consequence is a slow search process. This may mean the robot having to stop while it plans its actions, which is impracticable in most real-world situations. The author remarks that “Computer memory is cheap so space is not as much of a problem as time, but all memory is finite, and some algorithms can run out of it.” The result of these constraints was that in the later twentieth century robotics moved away from the deliberative planning process, the core of human-type processing, and started to develop methods that bore less resemblance to human processing.

As a result, the simpler alternative of reactive control emerged as a commonly used method of robot control. It is based on a close coupling between the initial sensors and the final effectors of the robot’s actions, and is thus quite unlike human brain processing. Purely reactive systems do not utilise internal representations or make predictions about the possible outcomes of actions. They concentrate on quick reactions to current sensory data. They are based on a system of rules that couple specific situations to specific actions, and thus lack the flexibility of human brains. In fact, reactive robots use a system comparable to animal/human reflexes controlled by the spinal cord rather than the brain. Complex computation is unnecessary because there is a stock of fast pre-computed rules. With reactive robots, the thinking is done by the designer, who considers the situations the robot might encounter and specifies a response to each. Even this could lead to an intractably long list of possible actions, so the designer specifies only the more important possibilities. A system called subsumption architecture is common in robot design, and involves a collection of modules, each with a particular specialist task such as moving, avoiding collisions, finding doors or picking up specific objects. Again this structure is quite alien to the organisation of the human brain. In general, reactive robots rely on immediate input from the environment, and minimise internal processing and memory storage. In contrast, the human brain takes a selected input from the environment, and then concentrates major resources on the processing and storage of what it has selected.
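Subsumption architecture can be sketched as a fixed priority stack of behaviour modules, each of which may pre-empt (subsume) the ones below it. The module names and sensor fields here are invented; real subsumption layers run concurrently rather than being polled in turn:

```python
def avoid_collision(sensors):
    """Highest priority: back off if something is too close."""
    if sensors["range_m"] < 0.2:
        return ("reverse", 0.2)
    return None                     # no opinion; defer downwards

def seek_door(sensors):
    """Middle priority: steer towards a door if one is seen."""
    if sensors["door_bearing"] is not None:
        return ("turn", sensors["door_bearing"])
    return None

def wander(sensors):
    """Lowest priority: default behaviour, always has an output."""
    return ("forward", 0.3)

# Higher layers subsume (pre-empt) lower ones.
LAYERS = [avoid_collision, seek_door, wander]

def arbitrate(sensors):
    for behaviour in LAYERS:
        command = behaviour(sensors)
        if command is not None:
            return command          # first opinionated layer wins

print(arbitrate({"range_m": 1.5, "door_bearing": 0.4}))  # ('turn', 0.4)
```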

Reactive robots are fast but inflexible, while deliberative processing is flexible and intelligent but slow. Hybrid robots attempt to combine these two systems. The hybrid robot has three layers: deliberative, reactive and a connecting layer. The last of these has the difficult job of reconciling the different timescales, and the different representations or lack of them, on which the deliberative and reactive systems work. The example used by the author is a type of robot that might be used to deliver whatever is needed around a large office or hospital. A conflict might arise between suddenly having an urgent delivery and not having a pre-planned route to the place. The robot has to decide between first planning a route, or else setting out and ad-libbing as it goes along. This is the sort of conflicting decision which the brain is adapted to deal with, mainly via the cooperation of the dorsolateral prefrontal, orbitofrontal and anterior cingulate cortices. In the hybrid system, the reactive layer may consult the deliberative layer if it runs into a problem such as an obstructed corridor in a hospital. The deliberative system kicks in at this point, but may still take too much time to be practically useful. In practice it looks as if hybrid robots, requiring complex interaction between the components, are best suited for special purposes, with the robot needing to be re-designed for each new task.

A final idea for controlling robots is behaviour-based control (BBC), which grew out of reactive control ideas. BBC has no element of deliberative processing. Its control involves modules for different types of behaviour. Behaviours are defined as achieving or maintaining particular goals over time, and include finding objects or other robots, and avoiding other things. The desirability of behaviours is pre-chosen by the robot designer, rather than being evaluated by the robot itself, in contrast to the internal evaluations of the human brain.
