Horses for courses

Or … Why the brain needs different parts with different functions, and how they work together

Sometimes the brain has different parts that work in different ways, because the jobs they have to do are so different. It’s the principle of ‘horses for courses’.

Take memory. We can know facts about the world (the capital of France) and the types of things we encounter in the world (animals, vehicles), so-called semantic memory. And then we can have specific memories of what happened yesterday, or last Wednesday, so-called episodic memory. These two types of memory have very different requirements.

'Semantic memory should lay lots of specific experiences on top of each other and pull out general patterns. What dogs look like in general, rather than any specific dog'

Semantic memory should lay lots of specific experiences on top of each other and pull out general patterns. What dogs look like in general, rather than any specific dog. Such concepts are built from a gradual accumulation of experiences, and have relationships to each other (dogs are similar to cats; both are animals).

By contrast, episodic memories must be recorded in a moment; they have no relationship to each other beyond the time at which they happened; and they must be kept separate, not blended. We don’t want a general sense of the sorts of things that happened yesterday, a concept. We want to know exactly what happened yesterday.

'The two types of memory are best handled by two distinct memory systems in the brain'

These two types of memory therefore have distinct requirements: {gradual accumulation, extract general theme} versus {instant recording, keep memories separate}. The distinct requirements mean the two types of memory are best handled by two distinct memory systems in the brain. Respectively, they are the cortex (in particular, the temporal lobe) for semantic memory about dogs (and other stuff); and the hippocampus for episodic memory about yesterday (and other days). As we saw earlier, one of the challenges for the brain is to move knowledge from episodic memory to semantic memory, since the knowledge is stuck in the connections between neurons and cannot simply be copied across. Transfer mostly happens while the system is off-line, during sleep. Then the dog I saw yesterday can change my knowledge about dogs.
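The contrast can be caricatured in a few lines of Python. This is a toy sketch, not a model of real hippocampal or cortical circuitry: the class names, feature numbers and ‘replay’ loop are all made up for illustration. An episodic store keeps each experience whole and timestamped; a semantic store blends every experience into one running-average ‘prototype’; and a mock ‘sleep’ phase replays episodes from the first into the second.

```python
# Toy sketch of two memory stores with different update rules.
# Purely illustrative; not a model of real brain circuitry.

class EpisodicStore:
    """Instant recording: each experience kept whole and separate."""
    def __init__(self):
        self.episodes = []            # list of (time, features)

    def record(self, time, features):
        self.episodes.append((time, list(features)))

class SemanticStore:
    """Gradual accumulation: experiences blended into one prototype."""
    def __init__(self, n_features):
        self.prototype = [0.0] * n_features
        self.count = 0

    def blend(self, features):
        self.count += 1
        # Running average: each new example nudges the general pattern.
        for i, x in enumerate(features):
            self.prototype[i] += (x - self.prototype[i]) / self.count

episodic = EpisodicStore()
episodic.record("Mon", [0.9, 0.1])    # a tall, thin dog
episodic.record("Tue", [0.5, 0.5])    # a medium dog
episodic.record("Wed", [0.1, 0.9])    # a short, stout dog

# 'Sleep': replay episodes from the episodic store into the semantic one.
semantic = SemanticStore(n_features=2)
for _, features in episodic.episodes:
    semantic.blend(features)

print(episodic.episodes[1])    # Tuesday's episode, kept exactly
print(semantic.prototype)      # the general pattern, roughly [0.5, 0.5]
```

The episodic store can answer ‘what happened on Tuesday?’ but knows nothing general; the semantic store knows the general pattern but has forgotten every individual dog.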

This principle of ‘horses for courses’ is found elsewhere. Take vision, and the recognition of objects. The brain has to figure out what an object is and where it is (perhaps it’s moving!) from light signals registering on the retinas at the back of the eyes. The challenge of recognising what an object is comes from the fact that, ideally, the visual system should be able to recognise an object whatever angle it is seen from and wherever on the retina its image falls. A coffee cup is still a coffee cup whether you see it upside-down, on the left of your field of view or on the right. Recognition that ignores where on the retina the image falls is called ‘translation invariance’; recognition that ignores the viewing angle is ‘rotation invariance’.

One way to solve the problem of translation invariance is to have lots of detectors for the coffee cup all over the retina, and to separately store all possible views of the cup. That way, the brain can ignore where the coffee cup is and how it’s oriented, and recognise it anyway. This indeed is what the underneath or ‘ventral’ visual processing channel does in the brain. Sometimes it’s called the ‘what’ channel, because it’s so good at recognising what an object is.

Great. Except that, if you’ve thrown away information about where in space the object is … how could you reach for it? How could you track its motion? Darn it.
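The trade-off can be sketched in toy Python (a 1-D ‘retina’ and a hand-picked template, purely illustrative): replicating one detector across every position makes recognition translation-invariant, but the moment you keep only the best match, the position is gone.

```python
# Toy sketch: one 'cup detector' replicated at every retinal position.
# Illustrative only; real ventral-stream recognition is far richer.

CUP = [1, 2, 1]                       # the pattern the detector looks for

def detector_responses(retina, template):
    """Slide the same template across every position (replication)."""
    n = len(template)
    return [sum(r * t for r, t in zip(retina[i:i + n], template))
            for i in range(len(retina) - n + 1)]

retina_left  = [1, 2, 1, 0, 0, 0, 0]    # cup on the left
retina_right = [0, 0, 0, 0, 1, 2, 1]    # same cup on the right

left  = detector_responses(retina_left, CUP)
right = detector_responses(retina_right, CUP)

# 'What': take the best match anywhere. Identical for both images,
# so recognition is translation-invariant...
print(max(left), max(right))        # same strength either way

# ...but once you keep only max(), you can no longer say WHERE the
# cup was. The position lives in the index, which this readout drops.
print(left.index(max(left)), right.index(max(right)))    # 0 versus 4
```

The ‘what’ readout is the `max`; the discarded index is exactly the information the next paragraph’s second channel has to recover.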

'We need to combine channels if we want to catch cricket balls but duck snow balls'

Okay, so the brain is going to need another processing channel, which focuses on movement. It will track motion across the retina, lighting up detectors at different locations. It won’t bother too much about what the object is, just track trajectories, and pass the information to the motor system to prepare for action (move the eyes to follow it, raise a hand to catch it). This is what the top or ‘dorsal’ processing channel does in the brain, sometimes called the ‘where’ or ‘how’ channel. And so long as the two complementary channels can talk to each other, we’ll be catching cricket balls and ducking snowballs to our heart’s content.
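A deliberately crude sketch of that division of labour, with made-up coordinates: the ‘where/how’ computation below tracks a blob across frames and extrapolates its trajectory, without ever asking what the blob is.

```python
# Toy sketch of a 'where/how' computation: track a moving blob across
# frames and predict its next position. Nothing here asks what the
# object is; the channel only cares about trajectory.

positions = [(0.0, 9.0), (1.0, 8.5), (2.0, 7.5), (3.0, 6.0)]  # (x, y) per frame

def predict_next(track):
    """Constant-velocity guess: next = last + (last - previous)."""
    (x1, y1), (x2, y2) = track[-2], track[-1]
    return (2 * x2 - x1, 2 * y2 - y1)

# Pass the prediction to the motor system: where to put the hand.
print(predict_next(positions))    # (4.0, 4.5)
```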

If you didn’t have separate channels for what and where, the brain wouldn’t produce illusions like this one: When the dots start moving, you stop noticing that they are changing colour! The ‘where’ channel overrides the ‘what’ channel! [1]


[1] This illusion is drawn from the paper ‘Motion silences awareness of visual change’. Here’s how the authors, Jordan Suchow and George Alvarez, explain what’s happening: “To detect that a moving object is changing, the visual system must track the object’s state. Presumably, the mechanisms that carry out these measurements are local—i.e., each monitors a fixed location in the visual field that corresponds to a fixed location on the retina. Because a fast-moving object spends little time at any one location, a local detector is afforded only a brief window in which to assess the changing object. This brief exposure may be insufficient to detect any changes … Motion and object-identity processing are fundamental to vision but [are] thought to occur in complementary processing streams. Silencing demonstrates the tight coupling of motion and object appearance. Simply by changing the retinotopic coordinates—moving the object or the eyes—it is possible to silence awareness of visual change, causing objects that had once been obviously dynamic to suddenly appear static.”
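Their explanation can be caricatured in a few lines of Python. The numbers and the dwell-time formula below are invented for illustration, not taken from the paper: the point is simply that a detector fixed at one retinal location samples a fast-moving object too briefly to accumulate a noticeable change.

```python
# Toy sketch of 'motion silencing': a change detector at one fixed
# retinal location. Illustrative numbers; not the paper's model.

def change_seen_locally(speed, n_frames=100, change_per_frame=0.01):
    """How much colour change one local detector witnesses.

    The object's colour drifts by change_per_frame on every frame, but
    the detector samples it only while the object covers its location.
    At speed s (locations per frame) the object dwells there for
    roughly 1/s frames, capped at the length of the trial.
    """
    dwell = min(n_frames, max(1, round(1 / speed))) if speed > 0 else n_frames
    return dwell * change_per_frame

THRESHOLD = 0.05    # made-up: minimum accumulated change needed to notice

static = change_seen_locally(speed=0.0)    # object parked on the detector
moving = change_seen_locally(speed=0.5)    # object sweeping past

print(static, static >= THRESHOLD)    # full change witnessed: noticed
print(moving, moving < THRESHOLD)     # a sliver of change: 'silenced'
```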