# Expert Memory: What Can Memory Experts Teach Us?

That which we persist in doing becomes easier, not that the task itself has become easier, but that our ability to perform it has improved. —Ralph Waldo Emerson

Malcolm Gladwell dragged the notion of deliberate practice into the public lexicon with the publication of his book Outliers. In short, world-class performance depends not on talent but on thousands of hours of a special sort of practice, deliberate practice.

It seems straightforward that practice is the route to improving a skill. Take typing. I can type without effort. I’m not thinking about the keys or the movement right now, but instead operating at the level of sentence construction. (Sometimes I wonder if there are yet higher peaks to reach, where one thinks only in images or not at all.) My performance wasn’t always this way, though. Typing used to be a horrible, frustrating affair, and I know this because I’ll experience that frustration again if I switch to an alternate keyboard layout like Dvorak.

### What makes practice deliberate?

There are a few characteristics of deliberate practice:

• It’s effortful. If it weren’t, everyone would do it and it would no longer separate world-class performers from everyone else.
• It’s designed to improve performance. Deliberate practice is about leaving your comfort zone and pushing your limits. It consists of taking something you don’t yet know how to do, sitting down, and repeating it until you’ve achieved mastery. It makes you feel dumb.
• There’s feedback. You can tell whether or not you’re doing it right and correct your performance.

Daniel Coyle, who wrote The Talent Code, put it this way:

1. Pick a target
2. Reach for it
3. Evaluate the gap between the target and the reach
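Coyle’s loop is concrete enough to sketch in code. Below is a minimal, hypothetical rendering in Python; `attempt` and `evaluate` stand in for whatever skill and feedback signal you’re actually working with:

```python
def practice_session(attempt, target, evaluate, rounds=10):
    """Coyle's loop: pick a target, reach for it, evaluate the gap."""
    gap = None
    for _ in range(rounds):
        result = attempt()              # reach for the target
        gap = evaluate(result, target)  # measure the gap
        if gap == 0:                    # target hit: time to pick a harder one
            break
    return gap

# Toy usage: each rep closes the gap on a numeric target by one.
counter = [0]
def attempt():
    counter[0] += 1
    return counter[0]

gap = practice_session(attempt, target=5, evaluate=lambda r, t: abs(t - r))
print(gap)  # 0: the target was reached within the allotted reps
```

The point of the sketch is the structure: without the `evaluate` step, the loop degenerates into the mindless repetition discussed below.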

### Automatic Plateaus

One might wonder: why do we need a form of practice different from normal practice? The answer is that performance plateaus. A man might drive his entire life but never become as skilled as a race car driver. His performance plateaued after he learned how to drive and has not improved much since. The same is true of typing. I learned how to type long ago, but my speed capped out at about 90 words per minute and hasn’t budged since.

Generally, learning a skill seems at first to require our full attention and to be effortful and, with time, gives way to automaticity. At that point, performance plateaus and further improvement must be targeted.

### Breaking Down Skills

To do the impossible, break it down into small bits of possible.

To practice deliberately, then, one ought to break a skill down into small components, each of which can be practiced, and then repeat those components until automaticity has been achieved, at which point one can work on further refinement. This is the road to mastery.

As an example, before one can learn to program, one needs to learn a number of sub-skills, such as general computer literacy (which can be broken down further), the syntax of a programming language, familiarity with different control structures, facility with a text editor, and so on. To write a web application, there is still more, like familiarity with how the entire stack works, and you’ll probably want some knowledge of the command line, too. Before all this, one ought to be able to type, know what a computer is, be able to read, be able to find information via Google, and so on.

The same is true of any skill. Improving one’s understanding of calculus, for example, at least the mechanical parts, consists of learning to solve different forms of integrals and derivatives. Once the simpler ones have been mastered, one can move on to more complex ones, multivariable calculus, and so on, leading one higher and higher on the infinite ladder that is mathematics. And, of course, there are a million other mundane skills, too, like writing, keeping one’s work organized, noticing when one is confused, and so on.

Indeed, even these are pitched at too high a level, and each should be broken down further. Consider questions like: what does expertise in this field look like? How can I quantify it? What are some goals that would let me know I’m improving? Make a checklist.

## Paying Attention and Neural Reconfiguration

A man is what he thinks about all day long.

—Ralph Waldo Emerson (again)

There is an awesome post over on Less Wrong about the relationship between neural reconfiguration and attention, which ties in with the earlier discussion of automaticity. The basic idea is that your brain rewires itself around whatever you pay attention to. The more often you lean on a neural structure, the more it grows.

Consider mindless practicing: sitting down with a guitar, running through a song haphazardly, missing notes like a drunk misses stop signs. In contrast, consider playing through a song with intense focus on every note and fingering. The second is going to be a whole hell of a lot more effective, and we have the science to back it up. Take a group of people and compare changes in the motor cortex based on whether or not they were paying attention during a task. This has been done.1 Attention makes the brain grow.

It’s as if there is Attention, king of the Neuronal people, and, when he becomes interested in something — like mathematics — he yells to his people, “Optimize my kingdom for mathematics!” and the people build math libraries and put chalkboards everywhere.

### How can one improve one’s attention?

There are a few ways I can think of to improve attention. There are stimulants, like caffeine, nicotine, modafinil, and Adderall. Beyond that, you can go meta and try to improve attention by paying attention to attention itself, which means — hooray! — you’ve invented Vipassana meditation; the best introductions to it are Mindfulness in Plain English and Daniel Ingram’s Mastering the Core Teachings of the Buddha. There’s always blocking out distractions, too (turn off the television!), and setting aside blocks of time when you’ll worry about only one thing, perhaps via Pomodoros.

## Expert Memory, Insight and Recognition

In 2001, Anna-Maria Botsari played 1102 chess games simultaneously, winning 1095 and drawing 7. Perhaps even more impressive, Marc Lang holds the record for simultaneous blindfold chess, having played 46 games at once, winning 25, drawing 19, and losing 2. (Blindfold chess, for the unaware, is played without sight of a board; every position must be kept in memory.)

I have enough trouble remembering the seven digits of a telephone number. Forty-six entire boards held in memory? Not a chance.

Or so you might think, but it turns out that any high-ranked chess player can play blindfold chess. It’s not an innate ability, but something acquired over years of practice. These sorts of amazing feats rely on something that’s been dubbed long-term working memory.

The basic idea behind long-term working memory is that the superior memory of experts is the result of years of training, which allows one to access long-term memory in novel ways. This allows for feats like blindfold chess. (For a poignant example of this, check out the book Moonwalking with Einstein.)

The earliest evidence for this comes from de Groot’s classic study of chess recall.2 He took groups stratified by chess ability and showed them different board positions, which he later asked them to recall. The better a person was at chess, the better their recall of board positions. The more interesting result, though, is that de Groot found that this only held when the board had positions of the sort one would see in actual play. When he showed subjects randomized board positions, experts did as poorly as novices. This has been replicated a number of times in chess,3,4,5,6 bridge,7,8 go,9 music,10 field hockey, dance, and basketball,11 figure skating,12 computer programming,13 electronics,14 and physics.15

The idea behind this is chunking. An untrained individual can hold about seven (plus or minus two) items in short-term memory at one time. Short-term memory, then, is limited, but one can get around this via chunking. Given the right structure, like a meaningful chess board position, larger chunks can be held in memory. When reading, for instance, one doesn’t hold individual letters in memory, but entire words. The letters have been chunked into words.

Imagine a machine that can only hold four concepts in memory at any one time. Thinking “Red barking dog eating” would fill all available memory, but it has a way around this — a glue operation which, while computationally expensive, allows it to glue concepts together to create a new concept. For example, it could take “barking” and “dog,” glue them together, and create a new concept, “barking dog.” Now the machine could hold “Red + barking dog + eating” in memory and still have room for one more concept.
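A toy version of that machine is easy to write down. This is an illustrative sketch, not a cognitive model; the four-slot limit and the `glue` operation come straight from the thought experiment above:

```python
class GlueMachine:
    """Working memory with four slots and a glue operation that
    fuses two concepts into one chunk, freeing up a slot."""
    CAPACITY = 4

    def __init__(self):
        self.slots = []

    def hold(self, concept):
        if len(self.slots) >= self.CAPACITY:
            raise MemoryError("working memory is full")
        self.slots.append(concept)

    def glue(self, a, b):
        # Computationally expensive for a brain; trivial here.
        self.slots.remove(a)
        self.slots.remove(b)
        self.slots.append(f"{a} {b}")

m = GlueMachine()
for concept in ["red", "barking", "dog", "eating"]:
    m.hold(concept)        # all four slots are now occupied

m.glue("barking", "dog")   # two concepts become one chunk...
m.hold("loudly")           # ...which frees a slot for one more
print(m.slots)             # ['red', 'eating', 'barking dog', 'loudly']
```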

I propose that this is how expert memory works, with humans having some sort of equivalent of the glue function that takes place during deliberate practice. Herbert Simon estimates that each chunk takes about 30 seconds of focused attention to create, with an expert having created somewhere between 50,000 and 1.8 million chunks — about 10 years of four hours of practice per day.16
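The arithmetic behind that estimate is easy to check. Taking the upper end of 1.8 million chunks at 30 seconds apiece:

```python
seconds_per_chunk = 30
chunks = 1_800_000                 # upper end of Simon's estimate

hours = chunks * seconds_per_chunk / 3600
years = hours / (4 * 365)          # four hours of practice per day
print(hours, round(years, 1))      # 15000.0 hours, about 10.3 years
```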

From the inside, chunking feels like getting a handle on something, having a word that compresses some larger idea, or watching an idea crystallize. At least sometimes. I suspect most instances of chunking are non-conscious.

### From Whence Intuition Springeth

Experts are often distinguished by their intuition. Consider the blitz style of play in chess. Specifics vary, but in general each side has only about five minutes on the clock for the entire game. The time pressure forces one to move almost without thought, relying on intuition.

It should be no surprise that stronger chess players trounce weaker ones in blitz matches, but how does it work? From whence does intuition spring? The answer is long-term memory. It works sort of like this: when the brain creates a chunk, it’s saved in long-term memory. A chess master who has studied many matches has created tens or hundreds of thousands of such chunks, each chunk being something like a board position along with which moves from it are strong and which are weak. What looks like intuition is the brain pattern-matching against what it has seen before. The chess player looks at the board, similar positions and strong moves are automatically retrieved from long-term memory, and he makes one of those moves.

Insight is the fast, effortless recall of cached experience. This is memoization: instead of computing something several times, save the result in memory and look it up when you need it. I propose that the human brain works in a similar manner. When we meet a novel experience or problem, we’re forced to use effortful computation to solve it; the solution is then chunked and saved in long-term memory. In the future, similar problems are solved via lookups.
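Memoization is a one-liner in many languages. A sketch in Python, using the standard library’s cache decorator:

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # save every result in "long-term memory"
def fib(n):
    """Naive recursive Fibonacci: exponential-time if recomputed from
    scratch, linear once each sub-result is cached and looked up."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, computed instantly thanks to the cache
```

The first call does the effortful computation; every later call that hits a previously seen sub-problem is a cheap lookup, which is exactly the novice-to-expert transition described above.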

### The Mental Molasses Hypothesis

You have to be fast only to catch fleas.

—Israel Gelfand, Soviet mathematician

An individual neuron can fire anywhere between 1 and 200 times per second. This is sorta the equivalent of the clock speed of a processor, where each neuron in the brain is a simple processor. Neurons top out at around 200 hertz, though, while a modern processor can hit speeds of nearly 4 gigahertz, or 4 billion hertz. This means that — and this is a rough comparison — a CPU core is 20 million times faster than a single neuron.
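That ratio is nothing more than a division of clock speeds:

```python
neuron_hz = 200                 # top firing rate of a single neuron
cpu_hz = 4_000_000_000          # a roughly 4 GHz processor core

print(cpu_hz // neuron_hz)      # 20000000: about 20 million times faster
```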

The difference, though, is that where a modern CPU might have between four and eight of these ultra-fast processors (and more in the future!), a brain has about a hundred billion neurons. It’s a massively parallel processor.

But this doesn’t do anything about serial problems, where a single chain of neurons is the bottleneck. 200 serial steps — and you can’t do much in 200 steps — will take the brain a full second, and there are a whole lot of problems that are believed not to parallelize well. (These form the complexity class of P-complete problems.) So what’s going on?

Jeff Hawkins answers this in his book On Intelligence:

The answer is the brain doesn’t “compute” the answers to problems; it retrieves the answers from memory.

Sound familiar? The brain is a giant cache. Sure, it computes, too, but it’s slow. Most of our thought is retrieval from long-term memory. You can even observe this during conversation, which is almost never the creation of novel thoughts, but mostly the repeating of things you’ve thought and heard before.

## Putting It All Together

Rumor is that a pedestrian on Fifty-seventh Street, Manhattan, stopped Jascha Heifetz and inquired, “Could you tell me how to get to Carnegie Hall?” “Yes,” said Heifetz. “Practice!”

Putting it all together, then, humans are memory machines and expertise is a result of the amount of domain specific knowledge — chunks — that one has stored in memory. These chunks are created during deliberate practice, an effortful activity designed to improve performance, which is distinguished by requiring intense focus. This focus turns out to be a required ingredient for bringing about neural reconfiguration.

This model is nice, but how can you put it into practice? To accelerate the creation of chunks, try using Anki. Be sure to read through this great article on spaced repetition. (Roger Craig used Anki to set records on Jeopardy! Do it! This is a sign! Look at all these exclamations!) Increase the amount of deliberate practice you engage in by taking a skill you’d like to improve, breaking down what expertise in that domain looks like, identifying your weaknesses and what you don’t know, and then making a step-by-step plan for improving. Ensure that you break that plan into chunks small enough that they’re no longer intimidating.
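If you’re curious what Anki is doing under the hood, its scheduling descends from SuperMemo’s SM-2 algorithm. Here is a simplified sketch; the constants follow the published SM-2 description, and real Anki layers many refinements on top:

```python
def sm2_interval(n, prev_interval, ef):
    """Days until the next review under SM-2, where n is the
    repetition count and ef is the card's easiness factor."""
    if n == 1:
        return 1
    if n == 2:
        return 6
    return round(prev_interval * ef)

def sm2_ef(ef, q):
    """Update the easiness factor from recall quality q (0-5);
    SM-2 floors it at 1.3 so intervals keep growing."""
    return max(1.3, ef + (0.1 - (5 - q) * (0.08 + (5 - q) * 0.02)))

# A card you keep recalling perfectly (q=5) gets pushed out further
# and further: 1 day, 6 days, then roughly 2.5x longer each review.
ef, interval = 2.5, 0
for n in range(1, 6):
    interval = sm2_interval(n, interval, ef)
    ef = sm2_ef(ef, 5)
print(interval)  # on the order of four months by the fifth review
```

The exponentially growing intervals are the whole trick: you review each chunk just before you’d otherwise forget it, which is the cheapest possible way to keep it in long-term memory.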

Once you have a plan worked out, set aside a couple of Pomodoros each day to focus only on deliberate practice. Shut out distractions, drink some coffee or green tea, sit down, and focus. (Maybe even try chewing nicotine gum.)

Once you have all that down, periodically review your training and your plan, throw out what doesn’t work, and try new things. Happy practicing!

## Sources

1. Stefan, Katja, Matthias Wycislo, and Joseph Classen. “Modulation of associative human motor cortical plasticity by attention.” Journal of Neurophysiology 92.1 (2004): 66-72.
2. de Groot, Adriaan D. Thought and Choice in Chess. Vol. 4. Walter de Gruyter, 1978.
3. Frey, Peter W., and Peter Adesman. “Recall memory for visually presented chess positions.” Memory & Cognition 4.5 (1976): 541-547.
4. Chase, William G., and Herbert A. Simon. “Perception in chess.” Cognitive Psychology 4.1 (1973): 55-81.
5. Reingold, Eyal M., et al. “Visual span in expert chess players: Evidence from eye movements.” Psychological Science 12.1 (2001): 48-55.
6. Charness, Neil. “Expertise in chess: The balance between knowledge and search.” Toward a General Theory of Expertise: Prospects and Limits (1991): 39-63.
7. Charness, Neil. “Components of skill in bridge.” Canadian Journal of Psychology 33.1 (1979): 1.
8. Engle, Randall W., and Lee Bukstel. “Memory processes among bridge players of differing expertise.” The American Journal of Psychology (1978): 673-689.
9. Reitman, Judith S. “Skilled perception in Go: Deducing memory structures from inter-response times.” Cognitive Psychology 8.3 (1976): 336-356.
10. Sloboda, John A. “Visual perception of musical notation: Registering pitch symbols in memory.” The Quarterly Journal of Experimental Psychology 28.1 (1976): 1-16.
11. Allard, Fran, and Janet L. Starkes. “Motor-skill experts in sports, dance, and other domains.” Toward a General Theory of Expertise: Prospects and Limits (1991): 126-152.
12. Deakin, Janice M., and Fran Allard. “Skilled memory in expert figure skaters.” Memory & Cognition 19.1 (1991): 79-86.
13. McKeithen, Katherine B., et al. “Knowledge organization and skill differences in computer programmers.” Cognitive Psychology 13.3 (1981): 307-325.
14. Egan, Dennis E., and Barry J. Schwartz. “Chunking in recall of symbolic drawings.” Memory & Cognition 7.2 (1979): 149-158.
15. Larkin, Jill, et al. “Expert and novice performance in solving physics problems.” Science 208.4450 (1980): 1335-1342.
16. Simon, Herbert A. The Sciences of the Artificial. MIT Press, 1996.
