Love Is Not A Choice And Other Tools For Thinking

I’m not much of a romantic. If I wanted to hack romance I’d start with going through all the literature on the mate preferences of chimpanzees, bonobos, and great apes generally. Only after I’d taken in the unfiltered humans-are-big-monkeys view would I turn to something with a more human emphasis. It’d be a few months before I started, you know, dating.

When I do listen to other people speak about passionate love — mostly internet people — it’s surreal. Things like, “Love is always a choice.” What, I wonder, are these people on about? The emotion I would describe as passionate love is not this tame, controlled thing. If love were a mode of transportation, it’d be more like surfing in a hurricane than a leisurely bike ride.

Some Thinking Machinery

Love-as-drug is a cliche. If I told you, with a serious face, that love is like being on drugs and you responded by vomiting all over me, well, I would deserve it. But hang on. Imagine if love were literally a drug — a pill you could take.

Say Pfizer releases a new product tomorrow, Passionil, shaped like a heart, no less. The drug, when consumed, results in the consumer imprinting on and falling in violent love with the next person that they maintain eye contact with. It lasts three to six months. Would you take such a drug?

We can turn all sorts of knobs on this machinery. Maybe the drug comes in different forms: fast-acting, short release, standard release, and extended release. The fast-acting love might last a night, the short release a couple of weeks, the standard release a few months, and the extended release a year. Would you take any of these drugs?

What if these drugs prove so popular that Pfizer creates an ever-increasing variety of them: a light edition that provides a gentle buzz — a weak infatuation — the standard strength, and an extra-strength version for those who really want to lose their minds?

But maybe the drug frame is too suggestive. We can exchange the drug for a type of tropical island fruit. Maybe it can be brewed like coffee, some cups stronger than others. That sounds more natural and maybe a little more palatable.

All of these scenarios center on something — a drug, a fruit — that can be controlled, but love is often not something we intend. We can liken falling in love to catching a cold, or to being bitten by a love mosquito. How do those scenarios make you feel about love?

What if you think about love as evolution’s way of screaming, “have children, have children!” — not so much the product of our own free will, and more the demands of an alien god. The other side of that coin: falling out of love is evolution’s way of telling you to try your chances with a different mate. Real romantic.

There are still more knobs — reciprocal and unrequited love. We can imagine that the pills don’t last a set amount of time, but instead have a one percent chance of ending each day. If you take the drug with another person, you’re running the risk that one of you will fall out of love much sooner than the other. This would not matter if you could just take another pill, so we can imagine side-effects. Maybe the pill zonks out for a while after use.

Intuition Pumps

What we’ve just done is build what Daniel Dennett calls an intuition pump — or at least gathered the parts for one. These are thought experiments that aid the intuition in grappling with a problem or phenomenon. In Dennett’s case, he builds them to deal with the problem of consciousness. We built a few to deal with love.

The fun thing about building intuition pumps is that you definitely can try this at home. It’s not too hard to get started. The easiest knob, and one of the most useful, is the more or less knob. Should we have more love or less love? Stronger love or weaker love? And so on.

Try it out. Build some of your own.

What Makes Something Interesting?

Francis Galton, cousin of Charles Darwin and maybe best known for his work on intelligence, was a bit obsessed with the idea that people have certain innate traits. You know the movie Minority Report, where a special police department tries to predict crime before it happens? He sorta tried to invent it — in 1883.

He had this idea, see, that you could predict whether or not someone was a criminal based on the structure of their face. He devised a technique of composite photography, which allowed him to create averages of many images. While he didn’t manage to identify criminals, he did find that the average of several faces tended to be more attractive than any of the individual faces he used as input.

More than 100 years later, it turns out Galton was on to something — regarding both crime and attractiveness. Men with wider faces are more aggressive hockey players, less trustworthy in laboratory games, engage in more aggressive behavior, and are more successful CEOs. Computer averages of faces are more attractive than the people used as inputs, and this result holds not only for faces, but for averages of cars, fish, and birds. A wide face is a dangerous face and an average fish is an attractive fish, it seems.

The Beautiful is the Compressible

femme-fractal

We can think of human beings as agents who take in information from the environment, run that information through a compressor module, and then store that information in long-term memory. This is not rocket science. Our brains can’t hold all of the information in the world. We forget. We are forced to compress experience down to a few relevant details and store those. Indeed, a fair amount of evidence now supports the hypothesis that memories are reconstructed during recall. Each time you remember something, you’re modifying that memory. The brain is not a high-fidelity recorder.

In our man-as-compressor model, what sets the beautiful, averaged face apart from a typical face? It’s easier to compress. Consider all the information the brain has to store about a hideous face: a giant nose, a lazy eye, a unibrow, scars, maybe a teardrop tattoo. When the brain encounters a beautiful face, though, the compressor says something like, “Ah, a face so face-like that I need not spend any more processing time on it. I can relax.”

This idea is taken to its logical extension in low-complexity art. The aim of low-complexity art is to create images that can be described by a short computer program — a measure of complexity known as Kolmogorov complexity. The picture at the beginning of this section is an example of this style of art.
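Kolmogorov complexity itself is uncomputable, but an ordinary compression library makes a serviceable stand-in for building intuition. A minimal sketch of “the regular is the compressible” — the two messages below are invented for illustration:

```python
import random
import zlib

# A highly regular message: ten bytes repeated 100 times. It has a
# short description ("repeat '0123456789' 100 times"), so a compressor
# squeezes it down to almost nothing.
regular = b"0123456789" * 100

# An irregular message: 1000 pseudo-random bytes. Its shortest
# description is, essentially, the data itself.
random.seed(42)
noisy = bytes(random.randrange(256) for _ in range(1000))

print(len(zlib.compress(regular)))  # a few dozen bytes
print(len(zlib.compress(noisy)))    # roughly 1000 bytes
```

The same asymmetry is what the brain’s compressor exploits: the average face is the short program, while the unibrow and the teardrop tattoo are extra bytes.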

The Interesting is the Unexpected

Consider two facts:

When I speak to people, they find the second fact a lot more interesting than the first. This is, I think, because it violates their model of the world. They think of evolution as pushing us toward ever increasing complexity, but this is not true. Consider venereal sarcoma, which is today an infectious cancer, but used to be a dog.

This notion of surprise, the violation of expectation, is at the core of interestingness. If you already know something, if you anticipated it, it’s boring. The first time you hear a joke, it’s funny. The second time, not so much.

But not all unexpected data is interesting. If I published random sequences of numbers instead of words on this blog, well, no one would read it, and I wouldn’t blame them. What separates the interesting from the uninteresting?

If we consider our man-as-compressor model, interesting facts are those that improve the future performance of the compressor. Here’s an example: marriages are more likely to dissolve during periods of unemployment, but this only holds for unemployed husbands. For someone unaware of this fact, it improves their compressor — in this case, predicting when a couple will get divorced. (If you can predict something, you can compress it. They’re the same construct.) Depending on the person, this fact might further propagate through their compressor, updating beliefs about human mate preferences.

Indeed, a discovery is “just” a large improvement in the compressor. Consider Darwin’s theory of evolution. It connects and explains a huge amount of the phenomena around us. Where did humans come from? Why do fats taste good? Why do whales have organs similar to those of humans — and not fish? Talk about a compression upgrade.

We can even tie this into curiosity. After all, what is curiosity if not the pursuit of one’s interests? Given that what is interesting are those things that upgrade our model of the world, curiosity can be thought of as a drive to improve the compressor — a drive to improve our understanding of how things work.

Creativity, too, can be understood through the compressor model. Creativity is the consistent violation of other people’s expectations. Consider this poem:

Roses are red,
And ready for plucking,
You’re sixteen,
And ready for high school.
—Kurt Vonnegut, Breakfast of Champions

Notice how it violates the expectations of the compressor? That’s creativity.

All together, then:

  • Humans can be thought of as agents who take in information from the environment, run it through a compressor, and store the result in long-term memory.
  • Something is beautiful insofar as it can be compressed. Example: an average of faces is more beautiful than any individual face.
  • How interesting something is depends on how much it improves the performance of the compressor. When a fact violates expectations and improves one’s model of the world, that’s interesting. It improves the compressor.
  • Curiosity is the pursuit of the interesting — action designed to improve the compressor.
  • Creativity is the consistent violation of the expectations of the compressor.

The Science of Problem Solving

feynman-chalkboard

Mathematics is like the One Ring in the Lord of the Rings. Once you’ve loaded a problem into your head, you find yourself mesmerized, unable to turn away. It’s an obsession, a drug. You dig deeper and deeper into the problem, the whole time unaware that the problem is digging back into you. Gauss was wrong. Mathematics isn’t a queen. She’s a python, wrapping and squeezing your mind until you find yourself thinking about the integer cuboid problem while dreaming, on waking, while brushing your teeth, even during sex.

Lest you think I exaggerate, Feynman’s second wife wrote in the divorce complaint, “He begins working calculus problems in his head as soon as he awakens. He did calculus while driving in his car, while sitting in the living room, and while lying in bed at night.” Indeed, the above is a picture of Feynman’s blackboard at the time of his death. It says on it, “Know how to solve every problem that has been solved.” I like this sentiment, this idea of man as problem solver. If I were running things, I think I would have sent Moses down the mountain with that as one of the ten commandments instead of two versions of “thou shalt not covet.”

That’s what this post is about: How do humans solve problems and what, if anything, can we do to become more effective problem solvers? I don’t think this needs any motivating. I spend too much time confused and frustrated, struggling against some piece of mathematics or attempting to understand my fellow man, not to be interested in leveling up my general problem-solving ability. I find it difficult to imagine anyone feeling otherwise. After all, life is in some sense a series of problems, of obstacles to be overcome. If we can upgrade from a hammer to dynamite to blast through those, well, what are we waiting for? Let’s go nuclear.

A Computational Model of Problem Solving

Problem solving can be understood as a search problem. You start in some state, there’s a set of neighbor states you can move to, and a final state that you would like to end up in. Say you’re Ted Bundy. It’s midnight and you’re prowling around. You’re struck by a sudden urge to kill a woman. You have a set of moves you could take. You could pretend to be injured, lead some poor college girl to your car, and then bludgeon her to death. Or you could break into a sorority house and attack her there, along with six of her closest friends. These are possible paths to the final state, which in this macabre example is murder.

Similarly, for those who rolled lawful good instead of chaotic evil, we can imagine being the detective hunting Ted Bundy. You start in some initial state — the Lieutenant puts you on the case (at least, that’s how it works on television). Your first move might be to review the case files. Then you might speak to the head detective about the most promising leads. You might ask other cops about similar cases. In this way, you’d keep choosing moves until reaching your goal.

Both of these processes can be described by a graph. (Not to be confused with the graph of a function, which you learned about in algebra. This sort of graph — pictured below — is a set of objects with links between them.) The nodes of the graph are states of the world, while the links between the nodes are possible actions.

bundy-graph

Problem solving, then, can be thought of as, “Starting at the initial state, how do I reach the goal state?”

highlight-graph

On this simple graph, the answer is trivial:

simple-graph-shortest-path

On the sort of graph you’d encounter in the real world, though, it wouldn’t be so easy. The number of possible games of chess — itself a simplification when compared to, you know, actual war — is around \( 10^{120} \), while the number of atoms in the observable universe is a mere \( 10^{81} \). It’s a near certainty, then, that the human mind doesn’t consider an entire graph when solving a problem, but somehow approximates a graph search. Still, it’s sorta fun to imagine what a real world problem might look like.
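For graphs small enough to hold in memory, the search can be made concrete. Below is a minimal sketch of breadth-first search; the detective-story states are invented for illustration, but the algorithm is the standard one:

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search: return a shortest path of states from
    start to goal, or None if the goal is unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for neighbor in graph.get(state, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# A toy detective graph: nodes are states of the investigation,
# links are possible moves.
case = {
    "assigned": ["case files", "head detective"],
    "case files": ["similar cases"],
    "head detective": ["promising lead"],
    "similar cases": ["promising lead"],
    "promising lead": ["suspect identified"],
}
print(bfs_path(case, "assigned", "suspect identified"))
# → ['assigned', 'head detective', 'promising lead', 'suspect identified']
```

Breadth-first search always finds a shortest path, which is exactly why it breaks down at chess scale: the frontier grows exponentially with depth.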

giant-graph

Insight

A change in perspective is worth 80 IQ points.
—Alan Kay

Insight. In the shower, thinking about nothing much, it springs on us, unbidden and sudden. No wonder the Greeks thought creativity came from an outside source, one of the Muses. It’s like the heavens open up and a lightning bolt implants the notion into our heads. Like we took an extension cord, plugged it into the back of our necks, and hooked ourselves into the Way, the Tao, charging ourselves off the zeitgeist and, boom, you have mail.

It’s an intellectual earthquake. Our assumptions shift beneath us and we find ourselves reoriented. The problem is turned upside down — a break in the trees and a new path is revealed.

That’s what insight feels like. How does it work within the mind? There are a number of different theories and no clear consensus in the literature. That said, I have a favorite: insight is best thought of as a change in problem representation.

Consider how often insight is accompanied by the realization, “Ohmygod, I’ve been thinking about everything wrong.” This new way of thinking about the problem is a new representation of the problem, which suggests different possible approaches.

Consider one of the problems that psychologists use to study insight:

You enter a room in which two strings are hanging from the ceiling and a pair of pliers is lying on a table. Your task is to tie the two strings together. Unfortunately, though, the strings are positioned far enough apart so that you can’t grab one string and hold on to it while reaching for the other. How can you tie them together?

(The answer is below the following picture if you want to take a second and try to figure it out.)

pliers-problem

The trick to this problem is to stop thinking about the pliers as pliers and instead to think of them as a weight. (This is sometimes called overcoming functional fixedness.) With that realization in hand, just tie the pliers to one string and set it swinging. If you stand by the other string, the pliers-and-string pendulum will eventually swing back to you, and then you can tie the two strings together.

In this case, the insight is changing the representation of pliers as tool-to-hold-objects-together to pliers as weight. More support for this view comes from another famous insight problem.

You are given the objects shown: a candle, a book of matches, and a box of tacks. Your task is to find a way to attach the candle to the wall of the room, at eye level, so that it will burn properly and illuminate the room.

candle-problem

The key insight in this problem is that the box that the tacks are contained in is not just for holding tacks, but can be used as a mount, too — again, a change in the representation.

solved-candle-problem

In fact, the rate at which people solve this problem depends on how it’s presented. If you put people in a room with the tacks in the box, they’re less likely to solve it than if the tacks and box are separate.

The way we frame problems makes them more or less difficult. Insight is the spontaneous reframing of a problem. This suggests that we can increase our general problem solving ability by actively thinking of new ways to represent and think about a problem — different points of view. There are a couple of ways to accomplish this. Translating a problem into another medium is a cheap way of producing insight. Often, creating a diagram for a math problem, for example, can be enough to make the solution obvious, but we need not limit ourselves to things we can draw. We can ask ourselves, “How does this feel in the body?” or imagine the problem in terms of a fable.

Further, we can actively retrieve and create analogies. George Pólya, in his How to Solve It, writes (paraphrased), “You know something like this. What is it?” The history of science, too, is filled with instances of reasoning by analogy. Visualize an atom. What does it look like? If you received an education anything like mine, you think of it as a little solar system, with subatomic particles orbiting a nucleus. This is not really what an atom looks like, but the image has stuck with us by way of Rutherford.

Indeed, we can often gain cheap insights into something by borrowing the machinery from another discipline and thinking about it in those terms. Social interaction, for instance, can be thought of as a market, or as the behavior of electrons that think. We can think of the actions of people in terms of evolutionary drives, as those of a rational agent, and so on.

This perhaps explains the ability of some scientists to contribute original insights to different disciplines. I’m reminded of Feynman’s work on the Connection Machine, where he analyzed the computer’s behavior with a set of partial differential equations — something natural for a physicist, but strange for a computer scientist, who thinks in discrete rather than continuous terms.

Incubation

We can think of problem solving like a walnut, a metaphor that comes to me by way of Grothendieck. There are two approaches to cracking a walnut. We can, with hammer and chisel, force it open, or we can soak the walnut in water, rubbing it from time to time, but otherwise leaving it alone to soften. With time, the shell becomes flexible and soft and hand pressure alone is enough to open it.

The soaking approach is called incubation. It’s the act of letting a problem simmer in your subconscious while you do something else. I find difficult problems easier to tackle after I’ve left them alone for a while.

The science validates this phenomenon. A 2009 meta-analysis found significant interactions between incubation and problem solving performance, with creative problems receiving more of a boost. Going further, the authors also found that the more time spent struggling with the problem, the more effective incubation was.

Sleep

Keep your subconscious starved so it has to work on your problem, so you can sleep peacefully and get the answer in the morning, free.
—Richard Hamming, You and Your Research

sleep-doubles-insight

A 2004 study published in Nature examined the role of sleep in the process of generating insight. The researchers found that sleep, regardless of time of day, doubled the number of subjects who came up with the insight solution to a task. (Presented graphically above.) This effect was only evident in those who had struggled with the problem, so it was the unique combination of struggling followed by sleep, and not sleep alone, that boosted insight.

The authors write, “We conclude that sleep, by restructuring new memory representations, facilitates extraction of explicit knowledge and insightful behaviour.”

The Benefits of Mind Wandering

Individuals with ADHD tend to score higher than neurotypical controls on laboratory measures of creativity. This jibes with my experience. I have a cousin with ADHD. He’s a nice guy. He likes to draw. Now, I’ve never broken out a psychological creativity inventory at a family reunion and tested him, but I’d wager he’s more creative than normal controls, too.

There’s a good reason for this: mind-wandering fosters creativity. A 2012 study (results pictured below) found that any sort of mind-wandering will do, but the kind elicited during a low-effort task was more effective than even that of doing nothing.

benefits-of-mind-wandering

This, too, is congruent with my experience. How much insight has been produced while taking a shower or mowing the lawn? Paul Dirac, the Nobel Prize winning physicist, would take long hikes in the woods. I’d bet money that this was prime mind-wandering time. I know walking without a goal is often a productive intellectual strategy for me. Rich Hickey, known as the inventor of the Clojure programming language, has sorta taken the best of both worlds — sleep and mind wandering — and combined them into what he calls hammock-driven development.

But how does it work?

As is often the case in the social sciences, there is little consensus on why incubation works. One possible explanation, as illustrated by the Hamming quote, is that the subconscious keeps attacking the problem even when we’re not aware of it. I’ve long operated under this model and I’m somewhat partial to it.

Within cognitive science, a fashionable explanation is that during breaks we abandon approaches that are ineffective. Thus, the next time we view a problem, we are prone to try something else. There is something to this, I feel, but some sources go too far when they propose that this is all incubation consists of. I have noticed significant qualitative changes to the structure of my own beliefs that occur outside of conscious awareness. Something happens to knowledge when it ripens in the brain, and forgetting is not all of that something.

In terms of our initial graph, I have a couple ideas. We still do not have a great grasp on why animals evolved the need to sleep, but it seems to be related to memory consolidation. Also note the dramatic change thought processes undergo while on the edge of sleep and while dreaming. This suggests that there are certain operations, certain nodes in our search graph, that can only be processed and accessed during sleep or rest. Graphically, it might look like:

graph-change-during-sleep

This could be combined with a search algorithm like tabu search. During search, the mind makes a note of where it gets stuck. It then starts over, but uses this information to inform future search attempts. In this manner, it avoids getting stuck in the same way that it was stuck in the past.
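That note-where-you-got-stuck idea can be sketched in a few lines. This is a toy version of tabu search — the landscape and parameters are invented for illustration, and real implementations use richer memory structures:

```python
def tabu_search(f, start, neighbors, steps=200, tabu_size=10):
    """Hill-climb, but keep a short memory of recently visited states
    and refuse to revisit them, so the search is pushed out of local
    optima instead of getting stuck the same way twice."""
    current = best = start
    tabu = [start]
    for _ in range(steps):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break  # boxed in by the tabu list; give up
        # Move to the best non-tabu neighbor, even if it's downhill.
        current = max(candidates, key=f)
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)  # the oldest state becomes fair game again
        if f(current) > f(best):
            best = current
    return best

# A bumpy landscape over 0..100: a bonus bump at every multiple of 7.
# The global peak is x = 98 (f = 108), but x = 0 is a local peak.
f = lambda x: x + 10 * (x % 7 == 0)
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 100]
print(tabu_search(f, 0, neighbors))  # → 98
```

Plain hill-climbing would stop immediately at x = 0 (its only neighbor is downhill); the tabu list forbids the immediate backtrack, so the search is forced to march through the valleys between bumps.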

Problem Solving Strategies

It is really a lot of fun to solve problems, isn’t it? Isn’t that what’s interesting in life?
—Frank Offner

There may be no royal road to solving every problem with ease, but that doesn’t mean that we are powerless in the face of life’s challenges. There are things you can do to improve your problem solving ability.

Practice

The most powerful, though somewhat prosaic, method is practice. It’s figuring out the methods that other people use to solve problems and mastering them, adding them to your toolkit. For mathematics, this means mastering broad swathes of the stuff: linear algebra, calculus, topology, and so on. For those in different disciplines, it means mastering different sorts of machinery. Dan Dennett writes about intuition pumps in philosophy, for instance, while a computer scientist might master complexity theory or algorithm analysis.

It is, after all, much easier to solve a problem if you know the general way in which such problems are solved. If you can retrieve the method from memory instead of inventing it from scratch, well, that’s a big win. Consider how impossible modern life would be if you had to reinvent everything, all of modern science, electricity, and more. The discovery of calculus took thousands of years. Now, it’s routinely taught to kids in high school. In terms of imagery, we can think of solving a problem from scratch as a complicated graph search, while retrieving a method from memory as a look-up in a hash table. The difference looks something like this:

solve-versus-retrieve

All of this is to say that it’s very important that you familiarize yourself with the work of others on different problems. It’s cheaper to learn something that someone else already knows than to figure it out on your own. Our brains are just not powerful enough. This is, I think, one of the most powerful arguments for the benefits of broad reading and learning.

Mood

Moods can be thought of as mental lenses, colored sunglasses, that encourage different sorts of processing. A “down” mood encourages focus on detail, while an “up” mood encourages focusing on the greater whole.

Indeed, multiple meta-analyses suggest that those in happier moods are more creative. If you’ve ever met someone who is bipolar, you’ll notice that their manic episodes tend to look a lot like the processing of creative individuals. As someone once told me of his manic episodes, “There’s no drug that can get you as high as believing you’re Jesus Christ.”

This suggests that one ought to think about a problem while in different moods. To become happy, try dancing. To be sad, listen to sad music or watch a sad film. Think about the problem while laughing at stand-up comedy. Discuss it over coffee with a friend. Think about it while fighting, while angry at the world. The more varied states that you are in while considering your problem, the higher the odds you will stumble on a new insight.

Rubber Ducking

Rubber ducking is a debugging technique famous in the programming community. The idea is that simply explaining your problem to another person is often enough to lead to the eureka moment. In fact, the theory goes, you don’t even need to describe it to another person. It’s enough to tell it to a rubber duck.

I have noticed this a number of times. I’ll start writing up some problem I don’t understand to post on StackOverflow, and then bam, the answer will punch me in the face. There is something about describing a problem to someone else that solidifies understanding. Why do you think I’m going through the trouble of writing all of this up, after all?

The actual science is a bit mixed. In one study, describing current efforts on a problem reduced the likelihood that one would solve the problem. The theory goes that this forces one to focus on easy-to-verbalize parts of the problem, which may be irrelevant, and thus entrenches the bad approach.

In a different study, though, forcing students to learn something well enough to explain it to another person increased their future performance on similar problems. A number of people have remarked that they never really understood something until they had to teach it, and this may explain some of the success of the researchers-as-teachers paradigm we see in the university system.

Even with the mixed research, I’m confident that the technique works, based on my own experience. If you’re stuck, try describing the problem to someone else in terms they can understand. Blogging works well for this.

Putting it All Together

In short, then:

  • Problem solving can be thought of as search on a graph. You start in some state and try to find your way to the solution state.
  • Insight is distinguished by a change in problem representation.
  • Insight can be facilitated by active seeking of new problem representations, for example via drawing or creating analogies.
  • Taking breaks while working on a problem is called incubation. Incubation enhances problem solving ability.
  • A night’s sleep improves problem solving ability to a considerable degree. This may be related to memory consolidation during sleep.
  • Mind-wandering facilitates creativity. Low effort tasks are a potent means of encouraging mind-wandering.
  • To improve problem solving, one should study solved problems, attack the problem while in different moods, and try explaining the problem to others.

More Links For February

The Ultimate Guide to Simulated Annealing


optimization-space

Imagine that you’re approached by Eris, the Greek goddess of discord. Being a cruel goddess, she places you into the mathematical space above and promises, “If you climb to the highest point, I will release you.”

This would not be too difficult except, like most encounters with Greek gods, there’s a catch. The goddess, in her wickedness, has blinded you. You can only tell whether or not a step will take you upwards or downwards. How might you find the highest peak? You think for a while and decide to climb upwards. From any position, you choose the direction that will increase your elevation.

hill-climb

So you climb and you get to the top of this hill. Nothing happens. What gives? You’ve reached the top of a peak, but you haven’t made it to the top of the highest peak.

hill-climb-fail
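The “always step upwards” strategy is the textbook algorithm called hill climbing, and its failure mode is easy to demonstrate. A minimal sketch, on an invented one-dimensional elevation profile with two peaks:

```python
def hill_climb(f, start, neighbors):
    """Greedy ascent: keep moving to the best neighbor until no
    neighbor is higher. Stops on the first local peak it finds."""
    current = start
    while True:
        best_neighbor = max(neighbors(current), key=f)
        if f(best_neighbor) <= f(current):
            return current  # a peak -- but maybe not the highest one
        current = best_neighbor

# Elevation profile: a small peak at x = 2, the summit at x = 8.
elevation = [0, 1, 3, 1, 0, 2, 4, 7, 9, 5]
f = lambda x: elevation[x]
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n < len(elevation)]

print(hill_climb(f, start=1, neighbors=neighbors))  # → 2 (the small peak)
print(hill_climb(f, start=6, neighbors=neighbors))  # → 8 (the summit)
```

Where you end up depends entirely on where you start, which is precisely the problem.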

You’re going to have to modify your strategy. There are a couple options. You might stumble around at random. You’d make it to the top eventually, but it would take an eternity, most of which would be wasted exploring areas that you’d seen before.

Instead, you could adopt a systematic approach to exploring the landscape, which is much more efficient. With a memory better than mine, you could keep track of where you’ve visited before and focus on exploring those areas you haven’t. You’d start by checking out one mountain, and then the three surrounding, and then the nine surrounding those three, and so on until you’d found the highest point.

This would work, unless Eris has placed you in an infinite space where the elevation increases forever. If you imagine the edges extending forever, such a space would look like this:

infinite-climb

In such a space, Eris is twisted indeed, as there is no way to reach the peak of an infinite climb. Let’s imagine that you toil away at this Sisyphean task for a few billion years until one of the gods, Clementia, takes pity on you. She offers you a new deal, telling you, “I will transport you to a new, finite space. If you can reach one of the highest peaks with sufficient haste, I will free you.” You accept the deal and are teleported here:

clementia-space

This space features a number of valleys filled with sub-optimal plateaus. If you use your original “upwards climb” strategy, you may find yourself stuck for eternity. Alternatively, you could try searching the entire space, but then you run the risk of violating Clementia’s “sufficient haste” clause.

The trick is to modify the initial strategy to sometimes accept a downwards move, which will help prevent you from getting stuck at a sub-optimal plateau. Such a path might look like this:

broken-annealing

Still, this is not quite right, as you can see from the path above. The problem is that your strategy never terminates. You quickly reach a high point, but then are forced to accept sub-optimal moves as those are the only possible moves.

To get around this, you need to modify your strategy such that you’re willing to accept a lot of bad moves early on, but fewer and fewer with time, until you eventually accept no sub-optimal moves at all. At that point, the strategy settles on a peak and terminates.

simulated-annealing-path

That’s more like it. With this strategy, you begin to climb. Clementia frees you and you live happily ever after (or at least until Eris decides to visit you again.)

Except for formalizing the details, this is the basic intuition behind simulated annealing, which Wikipedia calls a “generic probabilistic metaheuristic for the global optimization problem of locating a good approximation to the global optimum of a given function in a large search space.” I’m not convinced such a sentence was written with the realization that humans will have to read it. In English, simulated annealing is a method for finding an approximation of the highest (or lowest) point in a space, like the one above.
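For the programmatically inclined, the whole strategy fits in a few lines. Here’s a minimal Python sketch; the landscape function, step size, starting temperature, and cooling schedule are all illustrative choices of mine, not anything canonical:

```python
import math
import random

def simulated_annealing(f, x0, step=1.0, t0=10.0, cooling=0.995, iters=5000):
    """Approximate the maximum of f by annealing.

    Early on (high temperature) downhill moves are often accepted;
    as the temperature cools, the walk accepts fewer and fewer of them.
    """
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    t = t0
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        fc = f(candidate)
        delta = fc - fx
        # Always accept uphill moves; accept downhill moves with
        # probability exp(delta / t), which shrinks as t cools.
        if delta > 0 or random.random() < math.exp(delta / t):
            x, fx = candidate, fc
        if fx > best_fx:
            best_x, best_fx = x, fx
        t *= cooling  # cool the temperature a little each step
    return best_x, best_fx

# A bumpy landscape with many local peaks and a global peak at x = 0.
def landscape(x):
    return math.cos(3 * x) - 0.1 * x * x

random.seed(0)
x, fx = simulated_annealing(landscape, x0=8.0)
```

The `exp(delta / t)` acceptance rule is the “willing to go downhill early, stubborn later” idea from above: when `t` is large the exponent is near zero and almost any move is accepted; as `t` shrinks, only near-uphill moves survive.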

Shaking Intuition

Simulated annealing can be understood in terms of body language. It can be felt as motion. You can think of simulated annealing as shaking.

Imagine that you’re holding onto one of the spaces above. (Masterfully illustrated below.) You place a ping pong ball into the space, with the goal of moving the ball to the lowest place possible. The ball will naturally roll downwards thanks to gravity, but sometimes it will get stuck. When it gets stuck, the natural response is to shake the space, dislodging the ping pong ball, allowing it to continue rolling downwards.

simulated-annealing-ping-pong

There you have it. That’s the core of simulated annealing. You could have invented it and, now, if you ever do come across Eris, you’ll be prepared. (On second thought, if you ever come across Eris and are teleported to a mathematical space, see a psychiatrist.)

Further Reading

  • The early strategies mentioned are real search algorithms. The “just climb upwards” algorithm is aptly named hill-climbing. The random exploration method is known as a random walk and the “systematic exploration approach” described is breadth-first search. Its cousin, depth-first search, would have worked equally well.
  • For more on simulated annealing, try this paper. For different interesting search algorithms, check out STAGE and beam search.
  • Simulated annealing is a heuristic search algorithm, meaning that it attempts to find a “close enough” solution. This makes it well suited for otherwise intractable problems, such as those in NP. There’s some discussion of applications here.
  • Simulated annealing was inspired by the natural process of annealing in metallurgy. It’s one of a class of algorithms inspired by nature. Scott Aaronson writes about the relationship between nature and complexity in this paper.
  • In The Algorithm Design Manual (recommended), the author writes (of heuristic search algorithms), “I find [simulated annealing] to be the most reliable method to apply in practice.”

Where are the women in the IT industry?

It has become fashionable as of late for media outlets like Gawker and others to attack Silicon Valley, math, computer science, and the hard sciences generally for being unfriendly to women. This does not strike me as much different than the bullying of math and computer nerds during high school, except now we’ve exchanged jocks for journalists, and it’s covered in a not-very-convincing veneer of social justice-y but-we’re-bullying-nerds-because-oppression.

The most convincing challenge to this narrative was written by Scott Alexander in a comment, which was a response to many other specific concerns not likely to appeal to most readers. I’m reproducing the relevant bits here in an attempt to, well, fight obscurity with a little less obscurity.

I worry you’re swallowing a narrative uncritically here. How do we know that computer science has unfriendly discourse? Because we hear lots of stories about the unfriendly discourse in computer science, and we know that there are few women in computer science.

But consider an alternative narrative. In 1920, women weren’t allowed pretty much anywhere except maybe nursing and teaching. There were stereotypes that women would be terrible doctors, terrible lawyers, terrible business people, terrible politicians, terrible mathematicians, terrible philosophers, and all these fields were at least moderately unfriendly to the first women to enter.

But enter they did, and now women are at, near, or above parity in medicine, law, business, politics, and philosophy. Yet for some reason, they didn’t get near parity in math and its later descendant computer science. And so everyone said “Aha! Computer science must have lots of unfriendly stereotypes about women!” And then every single incident of someone making a joke about the word “dongle” was televised to the world, and it was agreed that obviously computer scientists are unfriendly to women, with ample wringing of the “creepy nerd” stereotype for all it’s worth.

We compare this to a field like medicine, which is super-toxic and abusive to everyone, where seniors have pretty much absolute power over younger doctors and the extent to which they abuse it is famous, and which has an extremely tight-knit and masculine culture of working super-long hours all the time and making fun of anyone who complains. And in which 47% of beginning med students are now women, because women are interested in the field and people will totally ignore the odd joke in a field they are interested in.

(The abuse suffered by Jackie Robinson when he entered baseball is legendary, but fifty years later African-Americans were over-represented in baseball at almost twice their rate in the general population. Yet a climate of subtle unconscious sexism is supposed to make women suddenly rush away from computing in droves?)

If women hadn’t flocked to medicine, every incident of someone in medicine making a slightly sexist comment would have gone viral, and it would now be a known fact that medicine “suffers from unfriendly discourse”. Since women in fact flocked to medicine, it was never necessary to deploy that argument.

If you think that computer science is unfriendly to women, you need an explanation of why much more macho fields that are much more subjective and therefore have much stronger ability to discriminate against people they don’t like – medicine, politics, business, law, etc – didn’t develop cultures unfriendly toward women – yet quiet, soft-spoken, pure-abstract-objective-mathematics computer science did.

I’ve never heard such an explanation and it seems much more likely to me that culture-of-unfriendliness-toward-group driving-group-away narrative looms a lot larger in discourse than in reality. This seems broadly consonant with the new research suggesting stereotype threat doesn’t really happen in the real world to any significant degree.


The Science of Habit

The truth is that everyone is bored, and devotes himself to cultivating habits.
—Albert Camus, The Plague

To my perpetual dismay, I’m not a rational agent with limitless willpower. I’m not every moment brimming with novel insight and original computation. No. I’m a habit machine, a behavior-executor, on autopilot — a creature of habit. (Or “habbit,” for illiterate googlers.) I do the things that I do because that’s how I’ve done them in the past.

How horrible — but no, habits are adaptive. They are a good thing. Don’t believe the popular wisdom. We’re habit machines. Embrace it. Without habit, you would have to think through all the small things — will I have coffee with breakfast? Should I brush my teeth before or after showering? How do I tie my shoes? Ad infinitum.

Something like this does happen with Parkinson’s patients. The disease damages regions key to habit formation — the basal ganglia and company. This interference results in sufferers performing poorly on a number of laboratory tasks. Less habity-ness than a healthy brain: not positive, not a good thing, not beneficial.

Too much habity-ness is a problem, too. The drugs used to treat Parkinson’s can lead sufferers to develop gambling or sex addictions. Some of the symptoms of OCD look an awful lot like problems with habit — repetitive thoughts, urges to engage in certain rituals, grooming behaviors (hand washing), and more. (Wikipedia lists hair-pulling as a symptom of OCD. I dated a chick with OCD once and she would pull out her hair, so I can very scientifically confirm the truth of this.) The compulsions of OCD are the result of taking the “force” in “force of habit” and amplifying it.

There is a habit spectrum with those who have trouble establishing habits — Parkinson’s disease patients — on one end and those who form habits too easily — OCD — on the other end. In fact, Lally et al. found that there is significant individual variation in habity-ness. For a habit to reach its peak, it took subjects anywhere from 18 to 254 days, with a median of 66 days.

We can visualize this as a probability distribution, in which it takes most people around 66 days to establish a new habit, but with significant variation. The tails of the distribution are characterized by pathology, e.g. OCD and Parkinson’s.

habit-curve
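If you want to play with such a distribution yourself, here’s a toy sketch. The log-normal shape and its spread are my assumptions, chosen so most draws fall roughly within Lally et al.’s 18-to-254-day range; only the 66-day median comes from the study:

```python
import math
import random

# Illustrative only: a log-normal with median 66 days. SIGMA is a guess,
# not a parameter from Lally et al.
MEDIAN, SIGMA = 66.0, 0.55

def days_to_habit(rng):
    """Draw one person's time-to-habit, in days."""
    return math.exp(math.log(MEDIAN) + SIGMA * rng.gauss(0, 1))

rng = random.Random(1)
samples = sorted(days_to_habit(rng) for _ in range(10_000))
median = samples[len(samples) // 2]
```

A log-normal is a natural candidate here because time-to-habit can’t be negative and has a long right tail (a few people take much longer than most), which matches the skew in the reported range.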

Why Should I Care?

Human behavior is like a natural disaster, an avalanche or a forest fire. You can nudge the Titanic, schedule a controlled burn, and build avalanche barriers, but that’s about the extent of it. These are the equivalent of establishing the right habits during periods of high motivation and control. With consistent nudging, you can set yourself onto a new path.

Consider the man gracing the one-hundred dollar bill, Benjamin Franklin. He was interested in cultivating virtue — contrast with our modern obsession with personality — and developed a system for doing so, writing in his autobiography, “the contrary habits must be broken, and good ones acquired and established.”

He created a weekly chart, marking it “by a little black spot” whenever he failed to live up to one of his 13 virtues. On any given week, he would focus on just one of the virtues. Of his system, he writes, “I was surprised to find myself so much fuller of faults than I had imagined; but I had the satisfaction of seeing them diminish.”

This is all to say that the essence of a man is, in some sense, what he does out of habit. A mathematician is defined by his habit of doing math, a programmer by his habit of programming, and a writer by his habit of writing. To be a kind person, be kind out of habit. To the extent that enduring personality can be shaped and modified, habit is the way.

Consider competence. Excellence in anything is the result of practice. How does one chew through a mountain of practice? Out of habit. If you develop a habit of setting aside a few hours each day to push through your boundaries, this habit will propel you to excellence. This is what the development of expertise looks like. It looks like a habit of waking up at 5 in the morning to do laps in the pool.

What is a Habit?

My friends were wise men of the first rank, and we found the problem soon enough: coffee wanted its victim.
—Honore de Balzac, The Pleasures and Pains of Coffee

A habit is an automatic behavior, repeated often. There is often a cue that prompts it. I have a coffee habit, triggered by sleepiness, waiters asking if I would like coffee, the smell of coffee, reading about coffee and, as I’m just now discovering, also writing about coffee. The caffeine barricades my adenosine receptors and releases a flood of dopamine, reinforcing the behavior. A habit is born.

We can take nervous habits as an example as well, such as stroking the neck. These sort of self-soothing gestures are cued by internal feelings of distress, which launches the behavior (neck rubbing). The reinforcement here is the resulting decrease in distress.

Cigarette smoking works in much the same way. Many people smoke when they wish to relax, so it can be cued by internal feelings of tension. This triggers getting out the cigarette and smoking it, which provides a hit of nicotine. The nicotine acts on the brain’s reward system, which reinforces the behavior. (Nicotine’s role in encouraging habit formation is why it can be so difficult to quit.)

However, this process is not carved in stone. There are some habits which don’t have clear cues or rewards. As part of writing this, I set my phone to buzz at random intervals during the day, at which point I’ve been reciting the poem “Invictus.” There’s no clear reward, but habit formation has been chugging along nonetheless.

As my Invictus example implies, there are mental habits, too, and they function in the same way. When you mention Illinois State University, my mother — without fail — will say, “Go Salukis!” (Her alma mater’s mascot.) When I hear someone say “Turn it up”, a Filip Nikolic remix of “Bring the Noise” hijacks the helm of my consciousness and steers it to the melody of that beat.

Somewhat troubling is the realization that most of our thought is not internally generated, but scripts that run as a result of external cues. Patients presenting with transient global amnesia are a dramatic example of this. Unable to commit anything to long term memory, they continue to execute the same loop of behavior, repeating the same conversations over and over. Radiolab has great coverage of one case in their “Loops” episode.

Habit Formation

When a habit is first being formed, it consists of deliberate, effortful, goal-based activity. This is supported by brain scans, which show activity in the prefrontal cortex — the front brain, sometimes called the seat of reason. For those familiar with dual process theory, this is system 2 behavior.

In the beginning stages, behavior is flexible. Each act of the behavior in question can be thought of as an original (and thus effortful) computation.

As the behavior is repeated, it becomes less effortful, and brain activity begins to shift. Activity in the prefrontal cortex dies down and activity moves into lower, more central brain regions — mainly the basal ganglia. The behavior itself becomes less flexible. The process is much like the life-cycle of a clay bowl: first, wet clay is malleable, but — once fired in a kiln — it hardens and is ready for use.

Habit formation as a progression from effortful search to retrieving one path.


For those comfortable with computational metaphors, we can imagine habit formation first as a sort of graph search — trying to find the right sequence of actions that lead to some reward, like alpha-beta search in a chess engine. With time, the brain notices that one path is retrieved again and again. It “saves” that path and executes it in the future, avoiding a whole lot of computation, but at the cost of flexibility. The graph search corresponds to activity in the prefrontal cortex, while the saved path is executed by the basal ganglia.
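That migration from search to replay can be sketched in a few lines of Python. The action graph, the goal states, and the cache below are illustrative stand-ins of my own invention, not a model of actual neural machinery:

```python
from collections import deque

def find_path(graph, start, goal):
    """Effortful 'prefrontal' search: breadth-first through the action graph."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

class HabitMachine:
    """Search for a path once, then replay the cached result: cheap but rigid."""
    def __init__(self, graph):
        self.graph = graph
        self.cache = {}  # (start, goal) -> saved path, the 'basal ganglia'

    def act(self, start, goal):
        key = (start, goal)
        if key not in self.cache:          # novel situation: effortful search
            self.cache[key] = find_path(self.graph, start, goal)
        return self.cache[key]             # familiar situation: habitual replay

# A made-up morning routine as an action graph.
actions = {
    "wake": ["kitchen", "shower"],
    "kitchen": ["coffee"],
    "shower": ["kitchen"],
    "coffee": ["work"],
}
brain = HabitMachine(actions)
morning = brain.act("wake", "work")  # searched the first time
again = brain.act("wake", "work")    # replayed from cache thereafter
```

The trade-off in the text shows up directly: the cached path is retrieved in constant time, but if the graph changes (the coffee runs out), the cache blindly replays the old sequence until something forces a new search.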

The Progress of Habit

The interplay between automaticity and repeated behavior gives us a visual of habit formation over time.


The picture above is what an activity looks like over time as it solidifies into a habit. It starts hard and effortful. With each execution of the behavior, it becomes more automatic and natural, until it reaches an asymptote. At this point, it levels off and has become a bona fide habit.

Cultivating Good Habits

Thus far, the discussion has been theoretical. We are interested in habits, though, in what they can do for us. We would like to cultivate the right sort of habits in order to become the person that we would like to be.

The science suggests a few guidelines.

  • First, everything becomes easier with practice. This alone is motivating.
  • To cultivate a habit, do that thing as often as possible. The time to establish new habits is when you’re in a high-motivation state.
  • Create some cue to prompt the habit. A cell phone alarm is good for this. You can use the TagTime Android application to set up random pinging throughout the day.
  • After the good habit has been executed, reward it in some way. M&M’s are a popular reinforcer, but even positive self-talk can be effective. Some people even use nicotine.
  • There are a number of productivity tools that make habit formation easier, like HabitRPG, chains.cc, Pomodoro timers, and BeeMinder.

Breaking a Bad Habit

The way that people often go about stopping a bad habit is by attempting to just “use their free will” to quit doing it. This does not often work, as evidenced by all of the people who have such difficulty with their fitness goals or quitting smoking.

If you have a specific habit you would like to stop, I would first suggest looking for resources specific to those habits. There are already good resources for those trying to quit smoking, but I’ll admit that I looked around and most guides were not that compelling.

There are a few ways to tame a bad habit. The first is to understand the context of the habit. What’s the cue? What’s the reward? Once you’re able to notice this, it becomes possible to gain some measure of control over it. You can try to figure out a path to removing the cue or the reward, or even replacing it with a disincentive, like when nail-biters coat their nails in something bitter.

Alternatively, and I think this is the best option, you can establish a new habit in place of the old one, fighting fire with fire. Eating junk food is a habit that many would like to stop, but this is the wrong way of looking at things. One ought to try to eat more healthy food; the junk food will fall by the wayside. For those who wish to stop eating meat, frame it not as “stop eating meat” but as “eat more plant-based meals.”

This is the difference between approach and avoidance goals. Approach goals are framed as something you want to do, while avoidance goals are framed as something you want to avoid. Approach goals (such as “eat more vegetables”) are more energizing over time and more likely to be achieved. Reason #45820 the human brain is a hack: just reframing your goals as approach instead of avoidance can improve your odds of completing those goals.

Putting it All Together

To recap:

  • A habit is a behavior that becomes automatic and effortless with repetition.
  • Habits are important because so much of our behavior happens outside of conscious control. Developing the right habits allows us to modify who we are.
  • A habit consists of a cue which triggers a behavior which is then reinforced.
  • Habits start effortful and goal-directed, but become effortless and automatic with repetition.
  • Habitual behavior becomes less flexible over time and can be conceptualized as the migration from graph search to a fixed sequence of behavior. This is computationally cheaper.
  • There are several technologies available that can aid in habit formation.
  • To conquer a bad habit, notice what cues the habit and then try replacing it with a new, better habit or by removing those cues.


Links For February

4chan Is What Free Speech On The Internet Looks Like

Meditation: If there are true things that no one is allowed to say, how will you know them?

Where there are humans,
You’ll find flies,
And Buddhas.

—Kobayashi Issa

Friend, I have a confession. I like 4chan. Whenever I see someone call 4chan the cesspool of the internet or disgusting or whatever, I shake with excitement. I’ve found the King of Fools! At long last, I can kill him and the Fool people will scatter forever.

Woe. It’s never the case that I’ve found the king of fools because — sometimes only minutes later — I stumble across a still greater fool, and I wonder if I’m the King of Fools for believing that I’ll ever find His Fooliness.

You see, friend, if you cannot see the redeeming value of 4chan, I weep for you. It must be difficult to live without eyes. If you think you are Mr-oh-so-sophisticated, I ask you: where do you think memes come from? Yes, friend. Memes come from 4chan. All of them.

But it’s more than memes. 4chan is a place where people can say whatever they please, or at least the closest thing to such a place that I’ve found on the internet. There is no one pretending to be offended in order to score points on the I’m-the-most-offended game.1 On 4chan, if you are a racist, a woman-hater, a man-hater, a fascist, a communist, a Jew or an anti-semite, you can be you. There is no Cathedral,2 no social-hate-machine poised to vilify you for speaking your mind. On 4chan, if you wish to spew hate, spew hate, and if you wish to spew love, spew love.

Forced anonymity is an amazing medium. People can be real with you in a way that they can’t in any other venue. I have shared and received support from anons on different 4chan boards — people with no reason to give a fuck about me, people I will never knowingly interact with again. This, more than anything, ought to be evidence of some capacity for human decency.

1. There is another game people like to play. It’s called I’m-a-good-person-sleight-of-hand. Whoever has the most virtue points wins the game. To score virtue points, you need to convince other people that you’re a good, moral being without lifting a finger to improve the world. I have written about this before here.

2. A term coined by Mencius Moldbug, representing the self-organizing consensus of society as a whole — through the media and other institutions. Specifically, that part of society which condemns opposing ideologies as evil.

Why Replication Is Important

Every bit of evidence one can acquire, in any area, leads one that much closer to what is true.
—Carl Rogers

Here’s why replication is important.

We know what is true based on evidence. The more evidence for some belief, the more confident in that belief you ought to be.

This is where replication comes in. If you have one study that says something, this is not all that much evidence that this something is true. After all, parapsychology keeps churning out results, and we all know that is bullshit.

But, if a study has been replicated, this is at least twice as much evidence that something is true. The more replications, the more evidence, the more likely something is true.
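A toy Bayesian sketch makes the point concrete. If each independent study multiplies the odds of a hypothesis by the same likelihood ratio (the specific numbers below are made up for illustration), then evidence accumulates in log-odds, and a replication really does double the evidential weight:

```python
def posterior(prior, likelihood_ratio, n_studies):
    """Bayesian update: each independent study multiplies the odds of the
    hypothesis by its likelihood ratio, so evidence adds in log-odds space."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio ** n_studies
    return odds / (1 + odds)

# Hypothetical numbers: a 10% prior and studies that each make the data
# 3x more likely under the hypothesis than under its negation.
one_study = posterior(0.1, 3.0, 1)
replicated = posterior(0.1, 3.0, 2)
```

This also illustrates why a single study shouldn’t move you too far: starting from a skeptical prior, one positive result leaves the hypothesis more likely false than true, while a replication starts to tip the balance.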

Given that most published findings are false, I no longer pay too much attention to a study before it’s been replicated and, when I do find out something surprising, you better believe I’m searching Google Scholar for replications.