Steve Grand

Life and How to Make it

Phoenix 2001

pg 130

Keywords : FEEDBACK - topological approach to studying complex systems - PHASE LANDSCAPES - complexity theory - hyperspace - an ever-changing, restless adaptive system - PREDICTION - adaptive behaviour - INTELLIGENCE - ability to learn the relationships between cause and effect and to use them to predict the future is what is called intelligence - theory of mind - mental models of the world - INFORMATION - Signals are therefore not stuff. They are non-physical, persistent patterns with a coherence and existence of their own - compressibility - 'meaning' - an observer effect -

Most real systems have more than one variable - more than one thing that feedback can alter - so representing these systems requires more axes.

A system with two variables could be represented as a surface, looking rather like a relief model of a landscape (Figure 14), rather than a simple cross-section. Changes in one axis (variable) can result in changes in the other. The repertoire of hills and hollows is now enriched by other topographies, such as valleys, corries, ridges and saddles. What we are looking at here is approximately what is known as the phase landscape of the system. In the jargon of complexity theory, the end points of positive feedback and the null points of negative feedback, as represented by the hollows and crevices on our surface, are called the attractors of the system, because the system tends to fall into these states. A huge amount of metaphorical and mathematical value can come out of this topological approach to studying complex systems and as a non-mathematician I think it's one of the most delightful mathematical models ever conceived.
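The idea that a system 'falls into' the hollows of its phase landscape can be sketched in a few lines of code. The landscape function below is my own invented toy example (not from the book): a surface with two hollows, on which the system state rolls downhill by numerical gradient descent until it settles into the nearest attractor.

```python
def height(x, y):
    """A toy phase landscape: two hollows (attractors), the deeper one at (-1, 0)."""
    return min((x + 1)**2 + y**2, (x - 2)**2 + y**2 + 0.5)

def roll(x, y, steps=500, lr=0.05, eps=1e-4):
    """Let the 'ball' (the system state) roll downhill via numerical gradients."""
    for _ in range(steps):
        gx = (height(x + eps, y) - height(x - eps, y)) / (2 * eps)
        gy = (height(x, y + eps) - height(x, y - eps)) / (2 * eps)
        x, y = x - lr * gx, y - lr * gy
    return x, y

# A state started inside the left basin falls into the attractor at (-1, 0)
x, y = roll(-0.2, 0.8)
```

Whichever basin the state starts in, it ends up at that basin's attractor; the hollows, not the starting point, determine the long-run behaviour.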

If you want to represent a system with more than two variables (and remember that a living organism might represent a system with zillions of them), you have to move into hyperspace. We have taken up all three spatial dimensions to represent a system with two variables - the third dimension is the height of the surface and describes the way that those two variables interact. For three variables we need three axes, plus a fourth to describe the relationships, so the variables could perhaps be represented as a cubic volume, and the dynamical behaviour of the system by changes in density within this volume. With four variables we are into four-dimensional hypercubes, and my imagination abandons me completely. Happily, for most purposes, sticking to a surface of hills and valleys is enough - we cannot represent reality on such a limited canvas, but we can understand all the important principles.

It doesn't rain all the time, even in England

One last demand can be made on our landscape metaphor. Feedback loops can coexist in different regions of the landscape, as I have already described - the ball can roll off one positive feedback hill and ring for a while in the valley at the bottom of the slope, where a negative feedback loop has taken over. Feedback loops can also interact in more subtle ways. It is quite possible to imagine different orders of feedback, where the feedback function of one loop is modulated by that of another. For example the gain of a negative feedback loop (the steepness of the slope, and therefore the rate at which the system collapses back to stasis after a disturbance) could be modulated by another feedback loop, responding to some other variable.

This sort of thing happens in weather systems, for example. Clouds are built from positive feedback loops, as we have seen. Once they start, they tend to accelerate and the clouds get thicker and thicker. But we are not permanently cloaked in cloud, despite the apparent tendency of the system to lock into a state of total cloud cover. This is because there is also a negative feedback loop at work, controlling the gain (the level of amplification) of the system. As the amount of cloud builds up, the amount of sunlight hitting the ground is decreased. This reduction in light lowers the strength of the positive feedback loop, and thus slows down the rate of cloud production when it threatens to take over. The system tends to dance around a happy medium, sometimes with more cloud and sometimes with less, but rarely with total cover and rarely with none at all. If there were only a single, positive feedback loop, then the result would be total cloudiness for ever. With an additional negative loop, the system settles out at a happy medium. That this happy medium can also vary from day to day is evidence for a third loop, caused by the changing amount of water vapour, which is itself controlled by the amount of sunlight hitting the ground over a larger area. Here we have the balance of nature in all its glory: of all the possible moisture levels our atmosphere could find itself in, the region in which both clouds and blue sky can exist simultaneously is very narrow indeed, yet the skies are constantly kept in this region. The sky is an ever-changing, restless adaptive system which not only keeps our planet from freezing or boiling, but also provides the greatest free show on Earth.
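The cloud example can be caricatured numerically. The equations below are my own toy model, not Grand's: cloud cover c lies between 0 and 1, a positive loop grows cloud in proportion to the cloud already present, and a negative loop reduces the gain of that growth as cloud cover cuts off sunlight.

```python
# Toy model (an invented illustration): cloud cover c in [0, 1].
# Positive feedback: existing cloud seeds more cloud.
# Negative feedback: more cloud means less sunlight, lowering the gain.
def step(c, g0=1.0, decay=0.4, dt=0.1):
    gain = g0 * (1.0 - c)          # sunlight-limited amplification
    return c + dt * c * (gain - decay)

c = 0.05                            # start almost cloud-free
for _ in range(2000):
    c = step(c)
# c settles near 0.6: partly cloudy - neither total cover nor clear sky
```

With the positive loop alone (gain fixed at g0), c would climb until it saturated; the gain-modulating loop pins the system to an intermediate 'happy medium', just as the sky is pinned to the narrow region where cloud and blue sky coexist.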

So loops can interact on different timescales and at different orders. On our landscape, the easiest way to visualize this is to add time to the equation, and imagine that our hills and valleys are constantly changing shape. It is almost as if the position of the ball (the state of the system) is deforming its own landscape, and thus its future attractors. In fact, this analogy works best when we imagine several balls on the one landscape. Several living things, for example, can occupy the same environment or share the same physiology, and therefore can be described by the same landscape. Each individual will be in a different state at the same moment, and so each can be represented by a different ball. If two balls somehow communicate or otherwise interfere with each other, then it is possible to imagine that one disturbs the landscape on which the other is rolling. This makes the most sense when we consider 'evolutionary landscapes', where the hills mean something else entirely (fitness levels, not feedback loops). Nevertheless, it can be helpful in a feedback landscape too.

Running the ridge

The story of an organism's life can be seen as a journey across a landscape of positive and negative feedback paths. Imagine a real landscape - a sort of badlands - in which wide plains have been cut from the hills, leaving a series of interconnecting ridges. Imagine yourself as an organism, and start out at the top of one of these ridges. To either side of you lies death. If you move too far to the left or right, you will start to slip down the slope into the valley below. If you wander just a little off course, you may have sufficient energy resources to turn around and climb back up the hill, but once you have gone too far and started to slip too quickly, there will be no way back.

If that were not bad enough, now imagine that the part of the ridge you are standing on is subsiding. The longer you stay still, the more untenable your situation becomes. You have no choice but to run forward for ever. You dare not stay still, but you cannot see more than a few steps in front of you. Which way should you go? If you feel your way forward, away from the crumbling ground under your feet, you might perhaps begin to tell whether your path is taking you away from the ridge. If you turn to the right, perhaps you will start to descend even faster. So you turn left instead, and hopefully you have sufficient energy to regain the ridge. If you turn too sharply you will overshoot and start to descend the other slope, this time with your own momentum sending you even faster towards oblivion.

Even if you judge it just right, and can zero in on the direction of the ridge, you have to contend with the fact that the ridge itself is not straight, and the path that was once correct becomes treacherous as the crest of the ridge dives off to one side. Worse still, you are not the only organism running the ridge - others are too, and as they slip and slide they cause small landslips and deform the landscape in front of your feet. If you are smarter than the average organism, you will not look down at your feet and hope to react in time to your mistakes in direction. Instead, you will look ahead. You will try to see and remember how the ridge twists and turns in front of you, and you will try to adjust your balance to prepare yourself for what is to come. Perhaps you will even allow yourself to dive off down a slope, knowing that the ridge turns around a little farther on and your trajectory will carry you on and up the next slope, back onto safe ground. That way you can make best use of your fading resources, by using the slope to slingshot you in the same way that an interplanetary spacecraft swings itself around Jupiter's gravity well to gain energy.

Looking ahead at the slope is one thing, but remember that the other organisms are changing the landscape in front of you. If you are really smart, you will plan your movements to take account not only of the curving ridge, but also of how you believe the other organisms will react, and therefore what damage they might do to the path.

What you have just experienced in your imagination is what it is like to be alive. We all live life on the edge: we have limited energy resources available to us, and death awaits us round every corner. The challenge is to keep on the ridge, and avoid slipping so far that we can't recover. The ability to control the flow of energy and change course in response to information about whether we are sliding downhill is what is called adaptive behaviour.

All living creatures can do this, but only some can take the next step, which is to look ahead. Looking ahead enables an organism to predict the need for future changes of direction and plan for them.

This ability to learn the relationships between cause and effect and to use them to predict the future is what is called intelligence, and the best and brightest intelligent systems can make plans that are more than just 'reactions in advance'. These systems can reasonably be called creative, if not conscious. They can make 'what if ...' judgements that lead to innovative ideas like risking death to take advantage of the slingshot effect. Finally, there is the ability to imagine what the effect of other organisms will be on the environment in front of you. At a minimum this requires that you can predict highly non-linear sequences (not just where the slopes are now, but where they will be when someone else has run over them). At a higher level still, it requires the ability to put yourself in someone else's shoes - 'If I were them, I wouldn't try to stay on the ridge there. I'd slingshot myself round that corner, and if they do that, I predict that the landscape will deform because of their action; therefore I had better go this way.' This ability is what psychologists call theory of mind.

Notice that each solution to the problem requires feedback. The simplest form (adaptive behaviour) requires relatively straightforward negative feedback, to compensate for mistakes in direction. Without damping, though, the organism may overshoot, and since the ruggedness of the terrain is unknown and ever-changing, even the level of damping may need to be controlled by feedback. Intelligence thus requires several orders of feedback. Some of them control behaviour, while others control changes in behaviour, a process we call learning. More advanced intelligence requires feedback from events that haven't even happened yet.
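The point about damping itself needing feedback can be sketched concretely. The controller below is my own minimal illustration (not from the book): a first-order loop steers a state toward a target, and with the gain set too high it overshoots and oscillates for ever. A second-order loop watches for the symptom of overshoot (the error changing sign) and halves the gain, controlling the first loop's damping.

```python
def track(target=1.0, gain=2.0, adapt=False, steps=60):
    """First-order loop: x moves toward target by gain * error each step.
    With adapt=True a second-order loop lowers the gain whenever the
    error changes sign - the tell-tale of overshoot."""
    x, prev_err, history = 0.0, None, []
    for _ in range(steps):
        err = target - x
        if adapt and prev_err is not None and err * prev_err < 0:
            gain *= 0.5            # feedback acting on the feedback
        x += gain * err
        prev_err = err
        history.append(x)
    return history

plain = track(adapt=False)    # gain of 2: overshoots and oscillates for ever
damped = track(adapt=True)    # adapted gain: the oscillation dies away
```

The first loop controls behaviour; the second controls a change in behaviour, which is exactly the relationship between acting and learning described above.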

The outside world can be visualized as a set of feedback loops as I have just described, and I find it very interesting that the inside world can too. In this 'inner space', feedback mechanisms not only enable a creature to learn, but also represent the actual result of that learning - these are the loops that determine the creature's responses to its environment. In an important sense these internal feedback loops are a mirror image of those outside, because their job is to compensate for the external forces that are threatening the organism's ability to persist. If a positive feedback loop exists in nature, such as the poverty trap, then the brain's correct response to it can be represented by an equal and opposite negative feedback loop (a rule such as 'cut back on expenditure if money is running short'). Because the internal feedback loops are a reflection of the world outside I think it is fair to call them mental models of the world. My present research is focused on working out how general feedback-supporting devices such as neurones can develop this ability to build usable models of an organism's external feedback landscape.


When we speak of feed-forward and feedback, what exactly is it that is being fed forward or back? Any good technical answer to that question is likely to contain some or all of the words, 'signal', 'transmit' and 'information', as in 'a signal is being transmitted around a circuit, causing information to flow from place to place'. This is a good answer but, as always, these words come with some baggage that we need to be wary of.

The transmission of a signal, for example, should not be confused with the movement of material. When we think of electrical signals being passed down a wire, it is easy to assume that the signal is the same thing as the current - that the electrons are carrying the signal from place to place as they squirt down the wire. However, electrons move relatively slowly through metal, while electrical signals travel at near the speed of light. It can be helpful to visualize electrons in a wire as a necklace of metal balls connected by springs. When the ball at one end of the chain is disturbed, it causes a wave of disturbance to travel down the necklace, even though the necklace itself is not necessarily moving along at all. Another analogy is a domino run - knocking over the first domino causes a wave of activity to pass down the entire length of the chain (if you're lucky), even though no individual domino travels more than a few centimetres. Signals are therefore not stuff. They are non-physical, persistent patterns with a coherence and existence of their own. Like so many other entities in this book, signals are non-things.
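The domino-run point - that the pattern travels while the parts stay put - can be shown in a toy simulation of my own devising. Each cell in a chain simply copies its left neighbour's previous state; no cell goes anywhere, yet the pulse of activity marches down the chain.

```python
def propagate(chain, steps):
    """Shift a pattern of activity along a chain: at each tick, every cell
    copies its left neighbour's previous state. No cell 'travels' - only
    the pattern (the signal) does."""
    for _ in range(steps):
        chain = [0] + chain[:-1]
    return chain

signal = [1, 0, 0, 0, 0, 0, 0, 0]
print(propagate(signal, 5))   # [0, 0, 0, 0, 0, 1, 0, 0]
```

After five ticks the pulse has moved five cells to the right, even though every cell has only ever flickered once in place - the signal is the moving pattern, not the stuff it moves through.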

Information is a rather trickier concept with its own pitfalls. In 1948, Claude Shannon of Bell Laboratories published a paper that defined a new science called information theory (it was expanded the following year into a book with Warren Weaver). This theory was a godsend for telecommunications engineers, and has turned out to have profound implications for many other fields too. Some of the technical terms have even found their way into common usage, as in 'I'm sorry, I can't deal with that now - I just don't have the bandwidth.'

In Shannon's theory, all information can be measured in bits - the number of binary digits needed to represent the information content of a signal. The information content of a traffic light, for example, requires no more than three bits. Each bit could represent the state of one of the three lights: a bit would be 1 if the corresponding light was on and 0 if not. Red would thus be 100, amber 010 and green 001. Red and amber simultaneously would be 110. In fact, three bits is more than we really need, because in practice traffic lights don't make use of all eight possible combinations of colours. A British traffic signal uses four of the possible combinations (most other countries use only three), so two bits are sufficient to represent each of the meaningful states as a binary number from 00 to 11 (0 to 3 in decimal). Here we meet an idea that is crucial in information theory: the information content of a signal can be measured in terms of its compressibility. Although our traffic lights could be logically and conveniently coded using three bits, two will actually suffice. The data could thus be compressed from three to two bits, requiring 33 per cent less bandwidth (measured in bits per second) to transmit the information as a signal.
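The two codings of the traffic signal are simple enough to write out. The snippet below (my own illustration) lists the four meaningful British states in their 'one bit per lamp' form, then re-numbers them 0 to 3 to get the minimal two-bit code:

```python
# The four meaningful states of a British traffic signal, coded two ways:
# three 'one bit per lamp' bits versus a minimal two-bit index.
states = {
    "red":       "100",
    "red+amber": "110",
    "green":     "001",
    "amber":     "010",
}
# Two bits suffice: number the four states 0..3 (00 to 11 in binary)
compact = {name: format(i, "02b") for i, name in enumerate(states)}
# compact == {"red": "00", "red+amber": "01", "green": "10", "amber": "11"}
```

The compact code throws away the convenient lamp-by-lamp structure but carries exactly the same information in a third less space - which is all that compression ever does.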

When we work with visual images in a computer (a photograph on a Web page, say), those images are usually stored in a compressed form. All the superfluous, redundant data have been removed, and the remainder have been encoded in a minimal or near-minimal way to reduce the size of the image file. Since compressibility is considered to be a measure of the information content of a signal (or an image), a photograph that can be compressed to one-tenth of its original size is regarded as having less information in it than one that started out the same size but could be compressed by only 50 per cent. This seems a very rational and useful idea. It is easy to imagine that a photograph of a crowd scene is both less compressible and more information-rich than a photograph of a cube. The concept remains useful, even though it leads to the rather counter-intuitive conclusion that the most information-rich signal of all is therefore random noise (because noise has none of the regularity or predictability that would enable it to be compressed).
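Both halves of this claim - regularity compresses, noise does not - are easy to check with an off-the-shelf compressor. The sketch below is my own demonstration, using Python's zlib (the deflate algorithm behind zip and PNG):

```python
import random
import zlib

random.seed(0)
structured = bytes(i % 16 for i in range(10_000))              # highly regular
noise = bytes(random.randrange(256) for _ in range(10_000))    # no regularity

def ratio(data):
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data)) / len(data)

# The repeating pattern collapses to a tiny fraction of its size;
# the random bytes barely shrink at all (deflate falls back to
# storing them almost verbatim).
```

By the compressibility measure, then, the noise 'contains' far more information than the pattern - the counter-intuitive conclusion the text describes, and the reason compressibility alone cannot be the whole story.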

Nevertheless, this conception of 'information' can be misleading or inadequate if taken on its own. Look carefully at Figure 15. This shows two pieces of ultra-pure silicon that have been 'doped' with tiny amounts of impurities, in each case arranged in very slightly different patterns. What is the significant difference between the two objects? Well, according to information theory, there is no difference at all. The pattern on each silicon slab turns out to be equally compressible, and therefore each must contain the same amount of information. Yet it seems to me that there is a crucial difference, which information theory does not account for. Although the left-hand object is nothing more than a slab of dirty silicon, the one on the right is a computer.

Information theory tells us that the two are equivalent in their information content, and yet a computer chip is clearly very much more than a piece of dirty silicon. They may contain identical amounts of information, but there is something distinctly special about the information content of the chip on the right.

Instead of two silicon chips, I could have shown you a statue and a lump of clay, and arranged for the descriptions of their structure to have the same level of compressibility and therefore the same information content. If I had, you would probably have felt that the statue still 'had something' that the lump of clay did not, and you might have decided that this extra something was 'meaning'. Lumps of clay are meaningless, but statues mean something to people. Since a statue means something to a human being and maybe to a sheep, but clearly means nothing at all to an ashtray, we seem to have something here that a physicist would call an observer effect. It is as if a statue contains something special because a human observer can ascribe meaning, or perhaps utility to it. This may be true of statues, but it does not seem entirely satisfactory to me as a general explanation (observer effects are troubling because they often imply vitalism). A computer chip surely possesses something in an absolute sense that a slice of dirty silicon does not, regardless of whether a human being is there to witness it.

Whether this utility is absolute or whether it is meaningful only in the eyes of a human observer, we have here an appealing concept that I can only describe as 'elegance'. A thing is elegant if it maximizes some measure of utility while minimizing the information content (i.e. maximizing its compressibility). Elegance is therefore proportional to utility multiplied by compressibility. Another way of looking at this idea is to distinguish between complexity and complication. Something is complex if it contains a great deal of information that has high utility, while something that contains a lot of useless or meaningless information is simply complicated.

Utility may also be an observer effect, but in the example of the computer we can see something that is potentially absolute and measurable. The computer chip may contain only as much information as the slice of dirty silicon, but computers can generate information, and the potential information embodied in a computer is huge compared with that in a lump of overcooked sand. Perhaps even statues have information potential, since they can evoke responses from human beings and so change the universe in ways that heaps of clay fail to. Certainly 'information' in the non-technical sense of the term has a qualitative dimension as well as the quantitative one that information theory gives it.

Finally, even the word 'transmitted' has its problems. Transmitting implies an active role and receiving implies a passive one, and that can lead to fallacies about the concept of control. As I have said before, control is not synonymous with domination. It is just something that happens - an effect as much as a cause. Thinking of control in systems as being some kind of manipulation from on high is a dangerous and misleading idea, but it is one that our hierarchical, pugnacious society beats into us from an early age. A feedback loop is just that: a loop, with no end and no beginning. Unlike people, things do not tell one another what to do. Some shout and others listen, but that is not the same thing. In fact, one of the truly important tricks that computer science can learn from nature is to separate the shouting from the listening, so that the shouter need not know about the listener and vice versa. My adrenal glands do not tell my heart to beat faster. Adrenal glands respond to signals by secreting adrenaline; the heart responds to adrenaline by beating faster. Adrenal glands do not know what hearts are, and hearts have never heard of the endocrine system. Adrenaline has never heard of either.

This is the way organisms work. There is no architect, and no master controller telling the system what to do. There are just vast numbers of small independent entities that respond to signals as and when it suits them, and emit new signals whose destination they do not know. Top-down control leads to complexity explosions, because something somewhere has to be in charge of the whole system, and how much this master controller needs to know increases exponentially with the number of components in the system. Living systems are bottom-up.
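Separating the shouting from the listening is what programmers call publish-subscribe. The sketch below is my own minimal version of the adrenal-gland example (the names and numbers are illustrative, not physiology): the publisher broadcasts a signal without knowing who, if anyone, is listening, and the listener reacts without knowing where the signal came from.

```python
# A minimal publish-subscribe sketch of 'separating the shouting from
# the listening'. Shouters and listeners know only the signal's name,
# never each other.
subscribers = {}

def listen(signal, handler):
    """Register a handler for a named signal."""
    subscribers.setdefault(signal, []).append(handler)

def shout(signal, value):
    """Broadcast a signal to whoever happens to be listening."""
    for handler in subscribers.get(signal, []):
        handler(value)

heart_rate = [60]                   # resting rate, beats per minute

def on_adrenaline(level):
    heart_rate[0] = 60 + 40 * level # the heart responds to the hormone

listen("adrenaline", on_adrenaline)
shout("adrenaline", 1.0)            # the 'gland' never references the heart
# heart_rate[0] is now 100
```

Neither end holds a reference to the other, so components can be added or removed without anything being 'in charge' - which is the bottom-up property the paragraph describes.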

I hope I have given you some ways to visualize complex systems of interacting feedback loops, and at least a flavour of the perpetual domino run that is the universe. Feedback comes in only two varieties, but by varying the null points and timescales of feedback loops, and using one feedback loop to modulate another, systems of enormous complexity, subtlety and persistence can be realized. You and I are living proof of that. But there are still things to be said about exactly how one flow of causality can affect another, what fundamental 'techniques' (such as adaptation) nature uses, and what for. We need to create a Lego set of parts that can be plugged together to make something that lives, thinks and feels.

Steve Grand : Creation
