Effective Study Skills for College Students: “Why?” Questions

Consider two sentences:

  • The llama was made out of watermelon flavored cactus.
  • Policeman doe terminology star inconvenience recruit.

If I asked you to close this web page and then recall both sentences, you’d have an easier time with the first sentence. It has meaning and structure — even if a bit strange. I could make this still harder by adding a third sentence that’s just a jumble of letters. That would be less structured and even harder to recall.

Let’s say you’re reading a textbook, like Skiena’s The Algorithm Design Manual, and you come across the fact that \( \Theta(n \lg n) \) is the best possible worst-case complexity of a comparison-based sorting algorithm. You could commit this to long-term memory as is — it’s true, after all. It would be connected to some other knowledge, like what you already know about sorting algorithms. This seems okay.

But, when doing something like this, you’re missing out on a whole lot of structure. If you forgot about the lower bound, you wouldn’t be able to regenerate it from what you already know. It’s connected to other knowledge, but it’s not recomputable. You’re forced to take Skiena’s word for the whole thing.

How can we absorb more of the structure of a piece of knowledge — to not be content with knowing a fact that someone else has stated, but to be able to recompute it, to solidly place it in our web of knowledge? The answer is the question, “Why?” There is a massive gulf between knowing that something is true and understanding why something is true. Being able to answer that why question makes all the difference — it forces you to absorb and understand deeper structural characteristics.

If I told you that 3 bits can represent 8 different values, you wouldn’t necessarily be able to answer the question, “How many bits do you need to represent 1729 different values?” But, if you understood why 3 bits can represent 8 values, that sort of question is trivial. It’s the difference between being able to regurgitate facts from Wikipedia and being able to solve novel problems — to understand the not yet seen.
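To make the “why” concrete: each bit doubles the number of distinct patterns, so k bits give 2^k values, and representing n values takes the smallest k with 2^k ≥ n. A quick sketch (the function name is mine):

```python
import math

def bits_needed(n_values: int) -> int:
    """Smallest k such that 2**k >= n_values.

    Each added bit doubles the number of distinct patterns,
    which is why 3 bits cover 2**3 = 8 values.
    """
    return max(1, math.ceil(math.log2(n_values)))

print(bits_needed(8))     # 3
print(bits_needed(1729))  # 11, since 2**10 = 1024 < 1729 <= 2048 = 2**11
```

Knowing the mechanism, any such question reduces to one line of arithmetic.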

Asking “Why is this so?” is an easy-to-implement strategy for absorbing a piece of knowledge and connecting it to the rest of your beliefs in such a way that you can answer novel questions in the future.

Further Reading

  • I wrote recently about this whole structure thing in “Compressing Knowledge.”
  • Asking “Why?” while learning is sometimes called elaborative interrogation. There’s a review of its effectiveness, along with that of other learning techniques, here.

What Makes Something Interesting?

Francis Galton, cousin of Charles Darwin and maybe best known for his work on intelligence, was a bit obsessed with the idea that people have certain innate traits. You know the movie Minority Report, where a special police department tries to predict crime before it happens? He sorta tried to invent it — in 1883.

He had this idea, see, that you could predict whether or not someone was a criminal based on the structure of their face. He devised a technique of composite photography, which allowed him to create averages of many images. While he didn’t manage to identify criminals, he did find that the average of several faces tended to be more attractive than any of the individual faces he used as input.

More than 100 years later, it turns out Galton was on to something — regarding both crime and attractiveness. Men with wider faces are more aggressive hockey players, less trustworthy in laboratory games, engage in more aggressive behavior, and are more successful CEOs. Computer averages of faces are more attractive than the people used as inputs, and this result holds not only for faces, but for averages of cars, fish, and birds. A wide face is a dangerous face and an average fish is an attractive fish, it seems.

The Beautiful is the Compressible


We can think of human beings as agents who take in information from the environment, run that information through a compressor module, and then store that information in long-term memory. This is not rocket science. Our brains can’t hold all of the information in the world. We forget. We are forced to compress experience down to a few relevant details and store those. Indeed, a fair amount of evidence now supports the hypothesis that memories are reconstructed during recall. Each time you remember something, you’re modifying that memory. The brain is not a high-fidelity recorder.
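The compressor framing can be made concrete. As a rough illustration (using Python’s zlib as a loose stand-in for the brain’s compressor — the analogy, not the library, is the point), structured input compresses far better than noise:

```python
import random
import zlib

# Structured input: meaningful, repetitive text.
structured = b"The llama was made out of watermelon flavored cactus. " * 20

# Unstructured input: random bytes of the same length.
random.seed(0)
jumble = bytes(random.randrange(256) for _ in range(len(structured)))

print(len(structured), len(zlib.compress(structured)))  # structure compresses well
print(len(jumble), len(zlib.compress(jumble)))          # noise barely compresses at all
```

This is the same reason the llama sentence from the opening is easier to recall than a jumble of letters: there is structure to exploit.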

In our man-as-compressor model, what sets the beautiful, averaged face apart from a typical face? It’s easier to compress. Consider all the information the brain has to store about a hideous face: a giant nose, a lazy eye, a unibrow, scars, maybe a teardrop tattoo. When the brain encounters a beautiful face, though, the compressor says something like, “Ah, a face so face-like that I need not spend any more processing time on it. I can relax.”

This idea is taken to its logical extension in low-complexity art. The aim of low-complexity art is to create images that can be described by a short computer program — a measure of complexity known as Kolmogorov complexity. The picture at the beginning of this section is an example of this style of art.
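Kolmogorov complexity can be illustrated in a couple of lines (this sketch is mine, not from the original text): a million-character string whose shortest description is a fourteen-character program has low complexity, while a typical random string of the same length admits no description much shorter than itself.

```python
# A million-character string with a tiny description: low Kolmogorov complexity.
structured = "ab" * 500_000
program = "'ab' * 500_000"  # a complete Python expression that regenerates it

print(len(structured))  # 1000000 characters of output...
print(len(program))     # ...from a 14-character description
```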

The Interesting is the Unexpected

Consider two facts:

When I speak to people, they find the second fact a lot more interesting than the first. This is, I think, because it violates their model of the world. They think of evolution as pushing us toward ever increasing complexity, but this is not true. Consider venereal sarcoma, which is today an infectious cancer, but used to be a dog.

This notion of surprise, the violation of expectation, is at the core of interestingness. If you already know something, if you anticipated it, it’s boring. The first time you hear a joke, it’s funny. The second time, not so much.

But not all unexpected data is interesting. If I published random sequences of numbers instead of words on this blog, well, no one would read it, and I wouldn’t blame them. What separates the interesting from the uninteresting?

If we consider our man-as-compressor model, interesting facts are those that improve the future performance of the compressor. Here’s an example: marriages are more likely to dissolve during periods of unemployment, but this only holds for unemployed husbands. For someone unaware of this fact, it improves their compressor — in this case, predicting when a couple will get divorced. (If you can predict something, you can compress it. They’re the same construct.) Depending on the person, this fact might further propagate through their compressor, updating beliefs about human mate preferences.

Indeed, a discovery is “just” a large improvement in the compressor. Consider Darwin’s theory of evolution. It connects and explains a huge amount of the phenomena around us. Where did humans come from? Why do fats taste good? Why do whales have organs similar to those of humans — and not fish? Talk about a compression upgrade.

We can even tie this into curiosity. After all, what is curiosity if not the pursuit of one’s interests? Given that what is interesting are those things that upgrade our model of the world, curiosity can be thought of as a drive to improve the compressor — a drive to improve our understanding of how things work.

Creativity, too, can be understood through the compressor model. Creativity is the consistent violation of other people’s expectations. Consider this poem:

Roses are red,
And ready for plucking,
You’re sixteen,
And ready for high school.
—Kurt Vonnegut, Breakfast of Champions

Notice how it violates the expectations of the compressor? That’s creativity.

All together, then:

  • Humans can be thought of as agents who take in information from the environment, run it through a compressor, and store the result in long-term memory.
  • Something is beautiful insofar as it can be compressed. Example: an average of faces is more beautiful than any individual face.
  • How interesting something is depends on how much it improves the performance of the compressor. When a fact violates expectations and improves one’s model of the world, that’s interesting. It improves the compressor.
  • Curiosity is the pursuit of the interesting — action designed to improve the compressor.
  • Creativity is the consistent violation of the expectations of the compressor.

Further Reading

The Science of Problem Solving

Mathematics is like the One Ring in the Lord of the Rings. Once you’ve loaded a problem into your head, you find yourself mesmerized, unable to turn away. It’s an obsession, a drug. You dig deeper and deeper into the problem, the whole time unaware that the problem is digging back into you. Gauss was wrong. Mathematics isn’t a queen. She’s a python, wrapping and squeezing your mind until you find yourself thinking about the integer cuboid problem while dreaming, on waking, while brushing your teeth, even during sex.

Lest you think I exaggerate, Feynman’s second wife wrote in the divorce complaint, “He begins working calculus problems in his head as soon as he awakens. He did calculus while driving in his car, while sitting in the living room, and while lying in bed at night.” Indeed, the above is a picture of Feynman’s blackboard at the time of his death. It says on it, “Know how to solve every problem that has been solved.” I like this sentiment, this idea of man as problem solver. If I were running things, I think I would have sent Moses down the mountain with that as one of the ten commandments instead of two versions of “thou shalt not covet.”

That’s what this post is about: How do humans solve problems and what, if anything, can we do to become more effective problem solvers? I don’t think this needs any motivating. I spend too much time confused and frustrated, struggling against some piece of mathematics or attempting to understand my fellow man to not be interested in leveling up my general problem-solving ability. I find it difficult to imagine anyone feeling otherwise. After all, life is in some sense a series of problems, of obstacles to be overcome. If we can upgrade from a hammer to dynamite to blast through those, well, what are we waiting for? Let’s go nuclear.

A Computational Model of Problem Solving

Problem solving can be understood as a search problem. You start in some state, there’s a set of neighbor states you can move to, and a final state that you would like to end up in. Say you’re Ted Bundy. It’s midnight and you’re prowling around. You’re struck by a sudden urge to kill a woman. You have a set of moves you could take. You could pretend to be injured, lead some poor college girl to your car, and then bludgeon her to death. Or you could break into a sorority house and attack her there, along with six of her closest friends. These are possible paths to the final state, which in this macabre example is murder.

Similarly, for those who rolled lawful good instead of chaotic evil, we can imagine being the detective hunting Ted Bundy. You start in some initial state — the lieutenant puts you on the case (at least, that’s how it works on television). Your first move might be to review the case files. Then you might speak to the head detective about the most promising leads. You might ask other cops about similar cases. In this way, you’d keep choosing moves until reaching your goal.

Both of these are a graph. (Not to be confused with the graph of a function, which you learned about in algebra. This sort of graph — pictured below — is a set of objects with links between them.) The nodes of the graph are states of the world, while the links between the nodes are possible actions.
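The search framing can be sketched in a few lines. Here is a minimal breadth-first search over a toy version of the detective example (the states and moves are invented for illustration):

```python
from collections import deque

def solve(graph, start, goal):
    """Breadth-first search: returns a shortest sequence of states
    from start to goal, or None if the goal is unreachable."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# States of the investigation, and the moves available from each.
case = {
    "assigned": ["case files", "interview cops"],
    "case files": ["promising lead"],
    "interview cops": ["similar cases"],
    "promising lead": ["arrest"],
    "similar cases": ["promising lead"],
}
print(solve(case, "assigned", "arrest"))
# ['assigned', 'case files', 'promising lead', 'arrest']
```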


Problem solving, then, can be thought of as, “Starting at the initial state, how do I reach the goal state?”


On this simple graph, the answer is trivial:


On the sort of graph you’d encounter in the real world, though, it wouldn’t be so easy. The number of possible games of chess — itself a simplification when compared to, you know, actual war — is around \( 10^{120} \), while the number of atoms in the observable universe is a mere \( 10^{81} \). It’s a near certainty, then, that the human mind doesn’t consider an entire graph when solving a problem, but somehow approximates a graph search. Still, it’s sorta fun to imagine what a real world problem might look like.



A change in perspective is worth 80 IQ points.
—Alan Kay

Insight. In the shower, thinking about nothing much, it springs on us, unbidden and sudden. No wonder the Greeks thought creativity came from an outside source, one of the Muses. It’s like the heavens open up and a lightning bolt implants the notion into our heads. Like we took an extension cord, plugged it into the back of our necks, and hooked ourselves into the Way, the Tao, charging ourselves off the zeitgeist and, boom, you have mail.

It’s an intellectual earthquake. Our assumptions shift beneath us and we find ourselves reoriented. The problem is turned upside down — a break in the trees and a new path is revealed.

That’s what insight feels like. How does it work within the mind? There are a number of different theories and no clear consensus among the literature. However, with that said, I have a favorite. Insight is best thought of as a change in problem representation.

Consider how often insight is accompanied by the realization, “Ohmygod, I’ve been thinking about everything wrong.” This new way of thinking about the problem is a new representation of the problem, which suggests different possible approaches.

Consider one of the problems that psychologists use to study insight:

You enter a room in which two strings are hanging from the ceiling and a pair of pliers is lying on a table. Your task is to tie the two strings together. Unfortunately, though, the strings are positioned far enough apart so that you can’t grab one string and hold on to it while reaching for the other. How can you tie them together?

(The answer is below the following picture if you want to take a second and try to figure it out.)


The trick to this problem is to stop thinking about the pliers as pliers, and instead to think of them as a weight. (This is sometimes called overcoming functional fixedness.) With that realization in hand, just tie the pliers to one rope and swing it. If you stand by the other rope, the pliers-rope should eventually swing back to you, and then you can tie them together.

In this case, the insight is changing the representation of pliers as tool-to-hold-objects-together to pliers as weight. More support for this view comes from another famous insight problem.

You are given the objects shown: a candle, a book of matches, and a box of tacks. Your task is to find a way to attach the candle to the wall of the room, at eye level, so that it will burn properly and illuminate the room.


The key insight in this problem is that the box that the tacks are contained in is not just for holding tacks, but can be used as a mount, too — again, a change in the representation.


In fact, the rate at which people solve this problem depends on how it’s presented. If you put people in a room with the tacks in the box, they’re less likely to solve it than if the tacks and box are separate.

The way we frame problems makes them more or less difficult. Insight is the spontaneous reframing of a problem. This suggests that we can increase our general problem solving ability by actively thinking of new ways to represent and think about a problem — different points of view. There are a couple of ways to accomplish this. Translating a problem into another medium is a cheap way of producing insight. Often, creating a diagram for a math problem, for example, can be enough to make the solution obvious, but we need not limit ourselves to things we can draw. We can ask ourselves, “How does this feel in the body?” or imagine the problem in terms of a fable.

Further, we can actively retrieve and create analogies. George Pólya, in his How to Solve It, writes (paraphrased), “You know something like this. What is it?” The history of science, too, is filled with instances of reasoning by analogy. Visualize an atom. What does it look like? If you received an education anything like mine, you think of it as a miniature solar system, with electrons orbiting a nucleus. This is not really what an atom looks like, but the image has stuck with us by way of Rutherford.

Indeed, we can often gain cheap insights into something by borrowing the machinery from another discipline and thinking about it in those terms. Social interaction, for instance, can be thought of as a market, or as the behavior of electrons that think. We can think of the actions of people in terms of evolutionary drives, as those of a rational agent, and so on.

This perhaps explains the ability of some scientists to contribute original insights to other disciplines. I’m reminded of Feynman’s work on the Connection Machine, where he analyzed the computer’s behavior with a set of partial differential equations — something natural for a physicist, but strange for a computer scientist, who thinks in discrete rather than continuous terms.


We can think of problem solving like a walnut, a metaphor that comes to me by way of Grothendieck. There are two approaches to cracking a walnut. We can, with hammer and chisel, force it open, or we can soak the walnut in water, rubbing it from time to time, but otherwise leaving it alone to soften. With time, the shell becomes flexible and soft and hand pressure alone is enough to open it.

The soaking approach is called incubation. It’s the act of letting a problem simmer in your subconscious while you do something else. I find difficult problems easier to tackle after I’ve left them alone for a while.

The science supports this phenomenon. A 2009 meta-analysis found significant interactions between incubation and problem-solving performance, with creative problems receiving more of a boost. Going further, it also found that the more time spent struggling with the problem, the more effective incubation was.


Keep your subconscious starved so it has to work on your problem, so you can sleep peacefully and get the answer in the morning, free.
—Richard Hamming, You and Your Research


A 2004 study published in Nature examined the role of sleep in the process of generating insight. The researchers found that sleep, regardless of time of day, doubled the number of subjects who came up with the insight solution to a task. (Presented graphically above.) This effect was only evident in those who had struggled with the problem, so it was the unique combination of struggle followed by sleep, and not sleep alone, that boosted insight.

The authors write, “We conclude that sleep, by restructuring new memory representations, facilitates extraction of explicit knowledge and insightful behaviour.”

The Benefits of Mind Wandering

Individuals with ADHD tend to score higher than neurotypical controls on laboratory measures of creativity. This jibes with my experience. I have a cousin with ADHD. He’s a nice guy. He likes to draw. Now, I’ve never broken out a psychological creativity inventory at a family reunion and tested him, but I’d wager he’s more creative than normal controls, too.

There’s a good reason for this: mind-wandering fosters creativity. A 2012 study (results pictured below) found that any sort of mind-wandering will do, but the kind elicited during a low-effort task was more effective than even that of doing nothing at all.


This, too, is congruent with my experience. How much insight has been produced while taking a shower or mowing the lawn? Paul Dirac, the Nobel Prize-winning physicist, would take long hikes in the woods. I’d bet money that this was prime mind-wandering time. I know walking without a goal is often a productive intellectual strategy for me. Rich Hickey, known as the inventor of the Clojure programming language, has sorta taken the best of both worlds — sleep and mind-wandering — and combined them into what he calls hammock-driven development.

But how does it work?

As is often the case in the social sciences, there is little consensus on why incubation works. One possible explanation, as illustrated by the Hamming quote, is that the subconscious keeps attacking the problem even when we’re not aware of it. I’ve long operated under this model and I’m somewhat partial to it.

Within cognitive science, a fashionable explanation is that during breaks we abandon approaches that are ineffective. Thus, the next time we view a problem, we are prone to try something else. There is something to this, I feel, but some sources go too far when they propose that this is all incubation consists of. I have noticed significant qualitative changes in the structure of my own beliefs that occur outside of conscious awareness. Something happens to knowledge when it ripens in the brain, and forgetting is not all of that something.

In terms of our initial graph, I have a couple ideas. We still do not have a great grasp on why animals evolved the need to sleep, but it seems to be related to memory consolidation. Also note the dramatic change thought processes undergo while on the edge of sleep and while dreaming. This suggests that there are certain operations, certain nodes in our search graph, that can only be processed and accessed during sleep or rest. Graphically, it might look like:


This could be combined with a search algorithm like tabu search. During search, the mind makes a note of where it gets stuck. It then starts over, but uses this information to inform future search attempts. In this manner, it avoids getting stuck in the same way that it was stuck in the past.
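A minimal version of that idea, sketched in code (the toy landscape and parameters are invented; real tabu search is more elaborate): the search always moves to the best neighbor that is not on the tabu list, even when that move is temporarily worse, which is exactly what lets it climb out of a dip where it previously got stuck.

```python
def tabu_search(f, neighbors, start, iters=100, tabu_size=10):
    """Greedy local search with a short memory of visited states.

    Recently visited states are 'tabu', so the search cannot get
    stuck oscillating around a local minimum it has already explored.
    """
    current = best = start
    tabu = [start]
    for _ in range(iters):
        options = [s for s in neighbors(current) if s not in tabu]
        if not options:
            break
        current = min(options, key=f)  # best non-tabu move, even if worse
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)  # old states eventually become legal again
        if f(current) < f(best):
            best = current
    return best

# Toy landscape: a local dip at x = 3 (value 5) and the true minimum at x = 10.
f = lambda x: min((x - 3) ** 2 + 5, (x - 10) ** 2)
step = lambda x: [x - 1, x + 1]
print(tabu_search(f, step, 0))  # 10: the search escapes the dip at x = 3
```

A plain greedy search that only accepts improving moves would stop at x = 3 and stay there; the tabu list forces it onward.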

Problem Solving Strategies

It is really a lot of fun to solve problems, isn’t it? Isn’t that what’s interesting in life?
—Frank Offner

There may be no royal road to solving every problem with ease, but that doesn’t mean that we are powerless in the face of life’s challenges. There are things you can do to improve your problem solving ability.


The most powerful, though somewhat prosaic, method is practice. It’s figuring out the methods that other people use to solve problems and mastering them, adding them to your toolkit. For mathematics, this means mastering broad swathes of the stuff: linear algebra, calculus, topology, and so on. For those in different disciplines, it means mastering different sorts of machinery. Dan Dennett writes about intuition pumps in philosophy, for instance, while a computer scientist might study complexity theory or algorithmic analysis.

It is, after all, much easier to solve a problem if you know the general way in which such problems are solved. If you can retrieve the method from memory instead of inventing it from scratch, well, that’s a big win. Consider how impossible modern life would be if you had to reinvent everything, all of modern science, electricity, and more. The discovery of calculus took thousands of years. Now, it’s routinely taught to kids in high school. In terms of imagery, we can think of solving a problem from scratch as a complicated graph search, while retrieving a method from memory as a look-up in a hash table. The difference looks something like this:
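The contrast can be sketched with a classic toy example (Fibonacci stands in for “a problem” here; the analogy is loose): deriving the answer from scratch re-explores the same subproblems exponentially many times, while a hash table turns each repeat encounter into a constant-time look-up.

```python
from functools import lru_cache

def derive(n):
    """From scratch: naive recursion re-solves every subproblem."""
    if n < 2:
        return n
    return derive(n - 1) + derive(n - 2)

@lru_cache(maxsize=None)
def recall(n):
    """From memory: each subproblem is solved once, then looked up
    in a hash table on every later encounter."""
    if n < 2:
        return n
    return recall(n - 1) + recall(n - 2)

print(derive(20), recall(20))  # 6765 6765: same answer, very different cost
```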


All of this is to say that it’s very important that you familiarize yourself with the work of others on different problems. It’s cheaper to learn something that someone else already knows than to figure it out on your own. Our brains are just not powerful enough. This is, I think, one of the most powerful arguments for the benefits of broad reading and learning.


Moods can be thought of as mental lenses, colored sunglasses, that encourage different sorts of processing. A “down” mood encourages focus on detail, while an “up” mood encourages focusing on the greater whole.

Indeed, multiple meta-analyses suggest that those in happier moods are more creative. If you’ve ever met someone who is bipolar, you’ll notice that their manic episodes tend to look a lot like the processing of creative individuals. As someone once told me of his manic episodes, “There’s no drug that can get you as high as believing you’re Jesus Christ.”

This suggests that one ought to think about a problem while in different moods. To become happy, try dancing. To be sad, listen to sad music or watch a sad film. Think about the problem while laughing at stand-up comedy. Discuss it over coffee with a friend. Think about it while fighting, while angry at the world. The more varied states that you are in while considering your problem, the higher the odds you will stumble on a new insight.

Rubber Ducking

Rubber ducking is a debugging technique that’s famous in the programming community. The idea is that simply explaining your problem to another person is often enough to lead to the eureka moment. In fact, the theory goes, you don’t even need to describe it to another person. It’s enough to tell it to a rubber duck.

I have noticed this a number of times. I’ll go to write up some problem I don’t understand on StackOverflow, and then, bam, the answer will punch me in the face. There is something about describing a problem to someone else that solidifies understanding. Why do you think I’m going through the trouble of writing all of this up, after all?

The actual science is a bit mixed. In one study, describing current efforts on a problem reduced the likelihood that one would solve the problem. The theory goes that this forces one to focus on easy-to-verbalize parts of the problem, which may be irrelevant, and thus entrenches the bad approach.

In a different study, though, forcing students to learn something well enough to explain it to another person increased their future performance on similar problems. A number of people have remarked that they never really understood something until they had to teach it, and this may explain some of the success of the researchers-as-teachers paradigm we see in the university system.

Even with the mixed research, I’m confident that the technique works, based on my own experience. If you’re stuck, try describing the problem to someone else in terms they can understand. Blogging works well for this.

Putting it All Together

In short, then:

  • Problem solving can be thought of as search on a graph. You start in some state and try to find your way to the solution state.
  • Insight is distinguished by a change in problem representation.
  • Insight can be facilitated by active seeking of new problem representations, for example via drawing or creating analogies.
  • Taking breaks while working on a problem is called incubation. Incubation enhances problem-solving ability.
  • A night’s sleep improves problem solving ability to a considerable degree. This may be related to memory consolidation during sleep.
  • Mind-wandering facilitates creativity. Low effort tasks are a potent means of encouraging mind-wandering.
  • To improve problem solving, one should study solved problems, attack the problem while in different moods, and try explaining the problem to others.

Expert Memory: What Can Memory Experts Teach Us?

That which we persist in doing becomes easier, not that the task itself has become easier, but that our ability to perform it has improved.
—Ralph Waldo Emerson

Malcolm Gladwell dragged the notion of deliberate practice into the public lexicon with the publication of his book Outliers. In short, world class performance depends not on talent, but on thousands of hours of a special sort of practice, deliberate practice.

It’s straightforward that practice is the route to improvement of some skill. Take typing. I can type without effort. I’m not thinking about the keys or the movement right now, but instead operating at the level of sentence construction. (Sometimes I wonder if there are yet higher peaks to reach, where one only thinks in images or not at all.) My performance wasn’t always this way, though. Typing used to be a horrible, frustrating affair, and I know this because I’ll experience that frustration again if I switch to an alternate keyboard layout like Dvorak.

What makes practice deliberate?

There are a few characteristics of deliberate practice:

  • It’s effortful. If it wasn’t, everyone would do it and it would no longer separate world class performers from everyone else.
  • It’s designed to improve performance. Deliberate practice is about leaving your comfort zone and pushing your limits. It consists of taking something you don’t understand how to do, sitting down and repeating it until mastery has been achieved. It makes you feel dumb.
  • There’s feedback. You can tell whether or not you’re doing it right and correct your performance.

Daniel Coyle, who wrote The Talent Code, put it this way:

  1. Pick a target
  2. Reach for it
  3. Evaluate the gap between the target and the reach
  4. Return to step one

Automatic Plateaus

One might wonder: why do we need a form of practice different from normal practice? The answer is that performance plateaus. A man might drive his entire life, but never become as skilled as a race car driver. His performance plateaued after he learned how to drive and has not improved much since. The same is true of typing. I learned how to type long ago, but my speed has since capped out at about 90 words per minute and hasn’t budged since.

Generally, learning a skill seems at first to require our full attention and effort and, after time, gives way to automaticity. At this point, performance plateaus and further improvement must be deliberately targeted.

Breaking Down Skills

To do the impossible, break it down into small bits of possible.

To practice deliberately, then, one ought to break a skill down into small components, each of which can be practiced, and then repeat those components until automaticity has been achieved, at which point one can work on further refinement. This is the road to mastery.

As an example, before one can learn to program, one needs to learn a number of sub-skills, such as general computer literacy (which can itself be broken down further), the syntax of a programming language, familiarity with different control structures, a text editor, and so on. To write a web application there is still more, like familiarity with how the entire stack works. You’ll probably want some knowledge of the command line, too. Before all this, one ought to be able to type, to know what a computer is, to read, and to find information via Google.

The same is true of any skill. Improving one’s understanding of calculus, for example — at least the mechanical parts — consists of learning to solve different forms of integrals and derivatives. Once mastery of the simpler ones has been attained, one can move on to more complex ones, multivariable calculus, and so on, climbing higher and higher on the infinite ladder that is mathematics. And, of course, there are a million other mundane skills, too, like writing, keeping work organized, and noticing when you’re confused.

Indeed, even all of these are at too high a level; each should be broken down further. You need to consider the answers to questions like: What does expertise in this field look like? How can I quantify it? What are some goals that would let me know that I’m improving? Make a checklist.

Paying Attention and Neural Reconfiguration

A man is what he thinks about all day long.
—Ralph Waldo Emerson (again)

There is an awesome post over on Less Wrong about the relationship between neural reconfiguration and attention, which ties in with the earlier discussion of automaticity. The basic idea is that your brain rewires itself around whatever you pay attention to. The more often you lean on a neural structure, the more it grows.

Consider mindless practicing: sitting down with a guitar, running through a song haphazardly, missing notes like a drunk misses stop signs. In contrast, consider playing through a song with intense focus on every note and fingering. The second is going to be a whole hell of a lot more effective, and we have the science to back it up. Take a group of humans and compare brain mass based on whether or not they were paying attention during a task. This has been done. Attention makes the brain grow.

It’s as if there is Attention, king of the Neuronal people and, when he becomes interested in something — like mathematics — he yells to his people, “Optimize my kingdom for mathematics!” and the people build math libraries and put chalk boards everywhere.

How can one improve one’s attention?

There are a few ways I can think of to improve attention. There are stimulants, like caffeine, nicotine, modafinil, and Adderall. Beyond that, you can go meta and try to improve attention by paying attention to attention, at which point (hooray!) you've invented Vipassana meditation, the best introductions to which are Mindfulness in Plain English and Daniel Ingram's Mastering the Core Teachings of the Buddha. There's always blocking out distractions (turn off the television!), too, and setting aside time blocks when you'll worry about only one thing, perhaps via Pomodoros.

Expert Memory, Insight and Recognition

In 2001, Anna-Maria Botsari played 1102 chess matches simultaneously, winning 1095 of the matches and drawing 7. Perhaps even more impressive, Marc Lang holds the record for simultaneous blindfold chess, having played 46 matches at once, winning 19, drawing 13, and losing 3. (Blindfold chess, for the unaware, is when one plays without a board and is forced to keep all of the positions in memory.)

I have enough trouble remembering the 7 digits of a telephone number. More than 1400 board positions? Not a chance.

Or so you might think, but it turns out that any high-ranked chess player can play blindfold chess. It's not an innate ability, but something acquired over years of practice. These sorts of amazing feats rely on something that's been dubbed long-term working memory.

The basic idea behind long-term working memory is that the superior memory of experts is the result of years of training, which allows them to access long-term memory in novel ways. This allows for feats like blindfold chess. (For a poignant example of this, check out the book Moonwalking with Einstein.)

The earliest evidence for this comes from de Groot’s classic study of chess recall.2 He took groups stratified by chess ability and showed them different board positions, which he later asked them to recall. The better a person was at chess, the better their recall of board positions. The more interesting result, though, is that de Groot found that this only held when the board had positions of the sort one would see in actual play. When he showed subjects randomized board positions, experts did as poorly as novices. This has been replicated a number of times in chess,3,4,5,6 bridge,7,8 go,9 music,10 field hockey, dance, and basketball,11 figure skating,12 computer programming,13 electronics,14 and physics.15

The idea behind this is chunking. An untrained individual can hold about seven (plus or minus two) numbers in short-term memory at one time. Short-term memory, then, is limited, but one can get around this limit via chunking. Given the right structure, like a meaningful chess board position, larger chunks can be held in memory. When reading, for instance, one doesn't hold individual letters in memory, but entire words. The letters have been chunked into words.
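The arithmetic of chunking is easy to illustrate. Here's a toy sketch (the digits and the "famous years" grouping are my own example, not from any study) showing how the same twelve digits cost far fewer working-memory slots once they're grouped meaningfully:

```python
# Twelve digits held one at a time occupy twelve working-memory slots.
digits = list("149217762001")
unchunked_load = len(digits)  # 12 items

# Given structure (three famous years), they collapse into three chunks.
chunks = ["1492", "1776", "2001"]
chunked_load = len(chunks)    # 3 items

print(unchunked_load, chunked_load)  # 12 3
```

Same information, a quarter of the load. This is exactly what the expert chess player is doing with board positions.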

Imagine a machine that can only hold four concepts in memory at any one time. Thinking “Red barking dog eating” would fill all available memory, but it has a way around this — a glue operation which, while computationally expensive, allows it to glue concepts together to create a new concept. For example, it could take “barking” and “dog,” glue them together, and create a new concept, “barking dog.” Now the machine could hold “Red + barking dog + eating” in memory and still have room for one more concept.
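The thought experiment above can be sketched in a few lines of Python. To be clear, this class and its glue operation are my own illustration of the hypothetical machine, not a real cognitive model:

```python
class GlueMachine:
    """Toy model of a mind with four working-memory slots."""
    CAPACITY = 4

    def __init__(self):
        self.memory = []

    def hold(self, concept):
        """Place a concept in working memory, if there's room."""
        if len(self.memory) >= self.CAPACITY:
            raise MemoryError("working memory full")
        self.memory.append(concept)

    def glue(self, a, b):
        """The 'computationally expensive' operation: fuse two held
        concepts into one new concept, freeing a slot."""
        self.memory.remove(a)
        self.memory.remove(b)
        fused = f"{a} {b}"
        self.memory.append(fused)
        return fused

m = GlueMachine()
for concept in ["Red", "barking", "dog", "eating"]:
    m.hold(concept)          # all four slots now full
m.glue("barking", "dog")     # fuse two concepts, freeing a slot
print(m.memory)              # ['Red', 'eating', 'barking dog']
```

After the glue step, the machine holds the same scene in three slots and has room for one more concept.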

I propose that this is how expert memory works, with humans having some sort of equivalent of the glue function that takes place during deliberate practice. Herbert Simon estimates that each chunk takes about 30 seconds of focused attention to create, with an expert having created somewhere between 50,000 and 1.8 million chunks — about 10 years of four hours of practice per day.16
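Simon's numbers check out on the back of an envelope:

```python
# 1.8 million chunks at ~30 seconds of focused attention each,
# practicing four hours per day, every day.
seconds_per_chunk = 30
chunks = 1_800_000
hours_per_day = 4

total_hours = chunks * seconds_per_chunk / 3600
years = total_hours / (hours_per_day * 365)

print(total_hours, round(years, 1))  # 15000.0 10.3
```

Fifteen thousand hours of focused practice, or right around the fabled ten years.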

From the inside, chunking feels like getting a handle on something, on having a word that compresses some larger idea, or the crystallization of some idea. At least sometimes. I suspect most instances of chunking are non-conscious.

From Whence Intuition Springeth

Experts are often distinguished by their intuition. Consider the blitz style of play in chess. Specifics vary, but in general it works like this: each side has five minutes on the clock and a limit of ten seconds per turn. The conditions make it so one has to move without thought, relying on intuition.

It should be no surprise that stronger chess players trounce weaker ones in blitz matches, but how does it work? From whence does intuition spring? The answer is long-term memory. It works sort of like this: when the brain creates a chunk, it's saved in long-term memory. A chess master who has studied many matches has created tens or hundreds of thousands of such chunks, with each chunk being something like a board position plus which moves are strong and which aren't. What looks like intuition is the brain pattern-matching against what it has seen before. The chess player looks at the board, similar positions and strong moves are automatically retrieved from long-term memory, and he makes one of those moves.

Insight is the fast, effortless recall of cached experience. This is memoization: instead of computing something several times, save the result in memory and look it up when you need it. I propose that the human brain works in a similar manner. When we meet with a novel experience or problem, we're forced to use effortful computation to solve it, and the solution is then chunked and saved in long-term memory. In the future, similar problems are solved via lookups.

The Mental Molasses Hypothesis

You have to be fast only to catch fleas.
—Israel Gelfand, Soviet mathematician

An individual neuron can fire anywhere between 1 and 200 times per second. This is sorta the equivalent of the clock speed of a processor, where each neuron in the brain is a simple processor. Neurons operate at a top speed of 200 hertz, though, while a modern processor can hit speeds of nearly 4 gigahertz, or 4 billion hertz. This means that, as a rough comparison, a CPU is 20 million times faster than one neuron.

The difference, though, is that where a modern CPU might have between four and eight of these ultra-fast processors (and more in the future!), a brain has about a hundred billion neurons. It's the ultimate parallel processor.
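A rough calculation (the core count and clock speed are the ballpark figures from the text) shows why the massively parallel architecture still wins on naive raw throughput:

```python
neuron_hz = 200                  # a neuron's top firing rate
cpu_hz = 4_000_000_000           # ~4 GHz modern core
print(cpu_hz // neuron_hz)       # 20000000: twenty million times faster

# But multiply units by speed: ~10^11 slow processors versus eight fast ones.
neurons = 100_000_000_000
cores = 8
print(neurons * neuron_hz)       # 20000000000000 (2 x 10^13 "ops"/sec)
print(cores * cpu_hz)            # 32000000000    (3.2 x 10^10)
```

By this crude measure the brain has several hundred times the aggregate throughput, provided the work can be spread across all those neurons at once.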

But this doesn’t do anything about serial problems, where one neuron is going to be the bottleneck. 200 serial steps — and you can’t do much in 200 steps — in the brain will take one second, and there are a whole lot of problems that can’t be parallelized. (This complexity class is called P-complete.) So what’s going on?

Jeff Hawkins answers this in his book On Intelligence:

The answer is the brain doesn’t “compute” the answers to problems; it retrieves the answers from memory.

Sound familiar? The brain is a giant cache. Sure, it computes, too, but it’s slow. Most of our thought is retrieval from long-term memory. You can even observe this during conversation, which is almost never the creation of novel thoughts, but mostly the repeating of things you’ve thought and heard before.

Putting It All Together

Rumor is that a pedestrian on Fifty-seventh Street, Manhattan, stopped Jascha Heifetz and inquired, “Could you tell me how to get to Carnegie Hall?” “Yes,” said Heifetz. “Practice!”

Putting it all together, then, humans are memory machines and expertise is a result of the amount of domain specific knowledge — chunks — that one has stored in memory. These chunks are created during deliberate practice, an effortful activity designed to improve performance, which is distinguished by requiring intense focus. This focus turns out to be a required ingredient for bringing about neural reconfiguration.

This model is nice, but how can you put it into practice? To accelerate the creation of chunks, try using Anki. Be sure to read through this great article on spaced repetition. (Roger Craig used Anki to set records on Jeopardy! Do it! This is a sign! Look at all these exclamations!) Increase the amount of deliberate practice that you engage in: take a skill you'd like to improve, break down what expertise in that domain looks like, identify your weaknesses and what you don't know, then make a step-by-step plan for improving the skill. Ensure that you break that plan into chunks small enough that they're no longer intimidating.
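If you're curious what's under Anki's hood: its scheduler descends from SuperMemo's SM-2 algorithm. Here's a simplified sketch of the SM-2 update (real schedulers, Anki's included, differ in the details):

```python
def next_interval(interval_days, ease, quality):
    """Simplified SM-2-style spaced-repetition update.

    quality: self-graded recall, 0 (total blackout) to 5 (perfect).
    Returns (new_interval_days, new_ease). Constants follow the
    published SM-2 description.
    """
    if quality < 3:                     # failed the card: start over
        return 1, ease
    # Good recall nudges the ease factor up; shaky recall nudges it down.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days == 0:
        return 1, ease                  # first successful review: 1 day
    if interval_days == 1:
        return 6, ease                  # second: 6 days
    return round(interval_days * ease), ease

# A card recalled well gets pushed further into the future each review.
interval, ease = 0, 2.5
for quality in [5, 5, 4]:
    interval, ease = next_interval(interval, ease, quality)
print(interval)  # 16: three good reviews buy you 16 days before the next
```

The exponential growth of the interval is the whole trick: each successful recall buys you a longer reprieve, so maintaining thousands of chunks stays cheap.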

Once you have a plan worked out, set aside a couple Pomodoros each day to focus only on deliberate practice. Shut out distraction, drink some coffee or green tea, sit down and focus. (Maybe even try chewing nicotine gum.)

Once you have all that down, periodically review your training and your plan, throw out what doesn’t work, and try new things. Happy practicing!


1. Stefan, Katja, Matthias Wycislo, and Joseph Classen. "Modulation of associative human motor cortical plasticity by attention." Journal of Neurophysiology 92.1 (2004): 66-72.

2. de Groot, Adriaan D. Thought and Choice in Chess. Walter de Gruyter, 1978.

3. Frey, Peter W., and Peter Adesman. "Recall memory for visually presented chess positions." Memory & Cognition 4.5 (1976): 541-547.

4. Chase, William G., and Herbert A. Simon. "Perception in chess." Cognitive Psychology 4.1 (1973): 55-81.

5. Reingold, Eyal M., et al. "Visual span in expert chess players: Evidence from eye movements." Psychological Science 12.1 (2001): 48-55.

6. Charness, Neil. "Expertise in chess: The balance between knowledge and search." Toward a General Theory of Expertise: Prospects and Limits (1991): 39-63.

7. Charness, Neil. "Components of skill in bridge." Canadian Journal of Psychology 33.1 (1979): 1.

8. Engle, Randall W., and Lee Bukstel. "Memory processes among bridge players of differing expertise." The American Journal of Psychology (1978): 673-689.

9. Reitman, Judith S. "Skilled perception in Go: Deducing memory structures from inter-response times." Cognitive Psychology 8.3 (1976): 336-356.

10. Sloboda, John A. "Visual perception of musical notation: Registering pitch symbols in memory." The Quarterly Journal of Experimental Psychology 28.1 (1976): 1-16.

11. Allard, Fran, and Janet L. Starkes. "Motor-skill experts in sports, dance, and other domains." Toward a General Theory of Expertise: Prospects and Limits (1991): 126-152.

12. Deakin, Janice M., and Fran Allard. "Skilled memory in expert figure skaters." Memory & Cognition 19.1 (1991): 79-86.

13. McKeithen, Katherine B., et al. "Knowledge organization and skill differences in computer programmers." Cognitive Psychology 13.3 (1981): 307-325.

14. Egan, Dennis E., and Barry J. Schwartz. "Chunking in recall of symbolic drawings." Memory & Cognition 7.2 (1979): 149-158.

15. Larkin, Jill, et al. "Expert and novice performance in solving physics problems." Science 208.4450 (1980): 1335-1342.

16. Simon, Herbert A. The Sciences of the Artificial. MIT Press, 1996.

Deciphering Core Human Values In A Society of Mind

Know thyself? If I knew myself I would run away.
—Johann Wolfgang von Goethe

Humans are evolutionary hacks. I’m often not of one mind, or even two, but of four and sometimes more. Our brains seem to be locked in an eternal struggle, a constant clash of warring preferences. Consider the would-be comedian who, instead of working on his act, spends the day watching Family Guy reruns. He is of two minds: one wishes to watch Family Guy while another wants to brainstorm new routines.

Many-Self Model

You are not the king of your brain. You are the creepy guy standing next to the king going “a most judicious choice, sire”.
—Steven Kaas

It’s interesting to listen to the way that people use language when talking about the self. People say things like, “I had to talk myself into going to the gym.” This is a normal phrase. I hear it all the time. Unremarkable.

But exactly who is talking to whom? The self had to convince the self to do something? Or take the popular maxim, "Just be yourself." Who else are you going to be? All self-talk has this sort of strangeness to it. Why would myself need to counsel myself about anything?

You’re a brain, but there’s not just one you, and these many-you are most evident when they’re in conflict. Consider the overweight man who finds himself in a familiar dilemma: chocolate cake, to eat or not to eat? In one corner, there is a piece of him who wants to eat the cake. In the other corner, there is a piece of him who wants to lose weight. The bell sounds. Fight!

Or take creative alarm clocks. There is one, Clocky, that, like a Roomba, moves about the room while it goes off, so that you have to chase it down in the morning. There is another recent one for Android phones that requires you to solve a math problem before it will stop ringing. A friend told me about this one. He said he's been "using" it, but instead of solving the problem in the morning, he just turns off his phone.

I’ve read, too, about people — adults — who want to stop biting their nails, so they’ll coat them with something bitter. It doesn’t work, though. They just end up finding some clever way to wash it off.

I love these because they characterize the absurdity of the human condition. The present-me installs an alarm clock with every intention of getting up on time, only to be thwarted by morning-me. These two versions of me might as well be different people, each trying to control the other. Our experience is this constant struggle, every part of our brains pulling and pushing us in two or three or a thousand different directions.

Many-Selves and Many-Goals

Imagine that you throw a party for New Year's Eve and, as part of a game, everyone must write down their resolutions for the coming year, which you then combine on one sheet of paper. You then go around and guess which resolution belongs to which person.

Now, consider the sheet of everyone’s goals. There’s no reason for them to be consistent with each other. One person might want to save money while another might want to buy a house.

And that’s fine. It’s no problem for these people if their goals conflict. They’re different people, each pursuing rapper Gudda Gudda’s maxim of “You do you. I’ma do me.” It is a problem for you and me, though, because we’re a lot more like a body shared by an entire party of selves (or agents or modules if you prefer) than one consistent identity. A mind is not one individual, but a society. Our goals are as contradictory as a list of the goals of a dozen or so people.

Explicit and Implicit Goal-Keeping

And our list of woes grows longer, because the type of goals that one is willing to write on a list are not the same as the desires of each self inside of us. Our selves have differing time preferences, for example, some preferring instant gratification while others want to plan for the future. I would not write “eat whatever takes the least effort to make” on a list of New Year’s resolutions, but you can be damn sure that there’s a chunk of my mind that prefers convenience over health.

The point is that the human mind is complicated, conflicted, inconsistent, and not so much one unit, but more of a group of competing modules, and this insight forces us to think differently about our goals.

Maybe this is clearer with a thought experiment. Imagine that you’re presented with a genie who is willing to grant you one wish and you wish for a complete list of your goals. This list is going to look a whole lot different than a list that you make by sitting down and thinking about what it is that you want out of life. A list of your explicit and implicit goals is different than a list of just explicit goals.

Let’s make it concrete. Maybe you’re familiar with “Movember,” which is where men grow facial hair during the month of November, in order to raise awareness for men’s health issues, like prostate cancer. This all sounds very nice, yes? But what does growing facial hair have to do with prostate cancer? Nothing. Raising awareness about something doesn’t do much good at all, certainly not as much good as a direct donation. It’s more about appearing caring, convincing other people of your virtue, than about actual helping. Or maybe it’s just about funny facial hair. Either way, not about helping.

Most of us carry around this explicit goal of helping people, while the reality seems to be more sinister. The way we behave seems to be more along the lines of convince-other-people-I’m-virtuous. This is clear whenever some tragedy strikes and my Facebook feed is filled with people posting “My prayers go out to the families of those involved.” First of all, even under the assumption that prayers work, there’s no reason to post on the internet telling everyone about you praying and, second of all, prayers might be nice but a five dollar donation is a lot nicer.

Knowing Thyself

The point I’m developing, then, is:

  • Human value is complicated and often contradictory.
  • Our wants and desires are not obvious.

This leads us to the question: How can we determine what it is that we want? As a litmus test, do you think an exercise like, "Imagine you're looking back on your life trying to decide what was important and what wasn't," is going to be enough to figure out your goals? The answer is no, although thinking about such a question might give you a starting point.

What we’re after, then, is accurate means of understanding ourselves, techniques that will give us some measure of clarity if it’s to be had. We would like to — where possible — eliminate reliance on subjective experience and inject a measure of rigor into knowing ourselves. We’d like some certainty.

Understanding Why

It’s instructive to step back and survey our surroundings. Why does it matter whether or not we pursue the right goal? There are a whole lot of people at colleges across the country who are right now cramming for finals. They are soon going to forget everything. They’ve replaced the goal of learning with the goal of getting a passing grade.

We care about pursuing certain goals and not others because some will better achieve our values — for the same reason that we prefer eating cheeseburgers to eating dirt: we like cheeseburgers and not dirt.

We can continue down the rabbit hole and ask, "Why ought I prefer one thing to another?" I used to worry about this, but the question is confused. Maybe there is no good reason why you ought to prefer cheeseburgers to dirt, but it's the case that you do. Our brains ensure that we have preferences.

The point of a goal, then, is to satisfy these preferences, whatever they are. Over the summer, I did a literature review of the current state of the art of happiness research, because I value happiness. The trouble with the wrong goal is that it moves us towards something we don't value. It could be the case that people care not so much about doing good as about convincing other people that they're good. The two values suggest different goals. If I want to help people, I could apply for a consultation at 80,000 Hours, while if I want to convince people that I'm a good person, I could work on becoming more charismatic.

Values as Bedrock

People, by and large, act as if goals are nebulous things that appear out of nowhere, as if whispered to them by the gods. Their striving is chaotic, less the product of thoughtful reflection and more the result of the media’s near constant attack on our senses.

Consider the man who decides to become a lawyer because he believes doing so will make him happy. If he had first considered that his ultimate value was happiness, he might have decided to research what it is that makes people happy, and how happy lawyers are in particular. In the process, he might have stumbled on Forbes reporting "associate attorney" as the unhappiest job in America, and saved hundreds of thousands of dollars and years of striving towards the wrong goal.

The point I’m making, then, is that with an accurate list of your own values, you can come up with plans for achieving those values. If life has a meaning or purpose, this is the closest I’ve come to finding it.

In compelling recipe format, the meaning of life:

  1. Know what it is that you want.
  2. Plan out the best way to get it.
  3. Implement that plan.

Our trouble begins, as I developed earlier, with the first step. It’s not obvious what it is that we want. We need some way to figure it out and, given that this is the foundation on which any goal is built, it’s hard to overstate the importance of some clarity as to our values.

Identifying Values

The most direct route to understanding your own values seems to be figuring out those of others, at least in part, and then assuming that you also value those things. One example: I don't have much explicit interest in romantic relationships, but whenever I find myself reading about male-female mate preferences and clash-of-the-sexes-type articles, I notice that I'm fascinated. Given that most people are interested in understanding the opposite sex, and that millions of generations' worth of evolution has dedicated significant portions of my brain to that task, I find myself forced to update in the direction that, no, I'm not a special snowflake who don't want no woman.

In fact, that anecdote has another point. We can often illuminate ourselves by understanding how evolution has shaped our desires and motives. Indeed, there’s no need to limit ourselves to evolution. Any knowledge that illuminates humankind is useful in furthering our understanding of ourselves, whether it be neuroscience, artificial intelligence, psychology, economics, or politics, which is sort of empowering. There are many routes to self-knowledge.

Given this, what can we say about human values? Well, core human values are straightforward. Most everyone wants:

  • Happiness, positive emotions
  • Freedom from pain, good health, and an absence of negative emotion
  • Fulfilling interpersonal relationships, romantic and otherwise
  • A sense of meaning and purpose in life
  • A conviction that our actions make a difference and that we matter
  • The respect and admiration of other people
  • Personal growth and self-improvement, increases in our own skill and competence

Beyond these, it's less obvious. I looked over the New York Times Best Sellers list, but didn't find it all that illuminating, except I will note that people seem more interested in reading about "proof" of heaven, and about history in general, than I would have thought.

Some values are more idiosyncratic, though. In psychology, there is a personality trait of “openness to experience,” which sort of captures how interested someone is in learning new things, trying new foods, that sort of thing. Creative types score high on openness and this trait varies among individuals. You probably know people who are not interested in reading books or any intellectual pursuits. These people are low on openness.

We could think of this value as “the exploration of one’s interest” or the value of learning about the world. This one is a bit odd because we can ask, “well, do we really care about the exploration of interests or are we interested in the exploration of interests as a means to an end?” It’s a little bit of both. Sometimes we’re interested in things because of what they can do for us, but it’s also enjoyable in and of itself to explore something interesting. This could be grouped under “positive emotions” above, but I think it’s a useful distinction. To be fair, I ought to point out that “positive emotion” and “negative emotion” cover a broad swath of human experience: awe, excitement, interest, anxiety, sadness, dread, contentment, and more.

But what else do we value and want out of life? In economics, there’s this notion of signalling. You might volunteer at a homeless shelter not because you care about the homeless, but because you care about signalling to other people that you’re caring. A fair amount of human activity seems to revolve around looking good rather than anything of substance. Robin Hanson writes quite a bit about this topic.

More troubling are those things that we value — as big, smart monkeys — that we aren’t “supposed” to value. If you’re familiar with Nietzsche, he writes a bit about the enjoyment of cruelty and vengeance. I’m reminded of a scene from Conan the Barbarian, when Conan is asked, “What is best in life?” He responds, “To crush your enemies, see them driven before you, and to hear the lamentation of their women.”

In this vein, you’ll note that people seem, on the whole, more interested in winning arguments than getting to the truth of whatever it is that they’re discussing. It’s more about domination, more battle than discovery. Winning battles against opposing tribes is satisfying (politics!) and, while I have never crushed an enemy — at least not physically — I suspect it feels pretty good.

Stockpiling Self-Knowledge

To know oneself, one should assert oneself.
—Albert Camus

The general undercurrent here, then, seems to be that, in order to identify values, one ought to amass knowledge about oneself and about humans in general, developing a certain sensitivity to what it is that people value and desire and an accurate understanding of our own idiosyncrasies.

I have a couple ideas about how to go about this, but no silver bullet. There is no royal road to self-knowledge. It’s hard work.

  • Read and learn about different fields that shed light on what it is that humans want. There are a lot of possibilities here, as I mentioned earlier, from psychology, cognitive science, artificial intelligence, ethics, and more. Really, any field that deals with some aspect of humanity has something to offer. (This looks like a good place to start.)
  • Careful observation of ourselves and others. How do we act and feel in different situations? What do our actions and words suggest about our goals and values? When do we clash with other people? What are others striving for? What do people spend money on? Notice what’s popular and why. (Cultivating mindfulness might be useful for this.)
  • It’s instructive to consider what chimpanzees want and how humans are similar and different.
  • Take a different point of view when considering someone. If a stranger did the same things that you do, what would you think about them? If someone does something that you don’t understand, ask why you would do that in their situation. Everyone feels normal from the inside.
  • Try getting honest feedback from others. What do they think about you? How does this differ from your self-concept? One study found that other people were better at predicting the length of a relationship than those in the relationship.
  • You might try keeping a journal, or any other of the thought experiments that people suggest when considering values. What do you want for your children? What makes you jealous? All of these suggest possible values.
  • Some research suggests that if we know about our biases, we may be better able to control for them.
  • Reflect. How much would you pay to prevent a chicken from being tortured? Would you rather have more technology or less? Would a happiness pill be a good thing? Construct counterfactuals and intuition pumps. Ask yourself, “Is this my true motive? Is there something deeper here?”
  • In an uncertain world, there's a great deal of value in preserving your options and hedging your bets. Maybe you don't think you care about social status or money but, given that there's a not insignificant chance you could be mistaken, invest in something that's either transferable or that will move you towards many different things simultaneously. Paul Graham writes about majoring in math instead of economics, since a math major can get a PhD in economics, but an economics major can't get a PhD in mathematics.