A 60-Second Exercise That Boosts Goal Achievement By 20%

The hero of our tale, Jason Padgett.

(Content note: this is an example of what I send out to email subscribers. You can sign up to receive more like it on any of the many forms scattered throughout the website, like the one at the bottom of this post.)

In 2002, Jason Padgett got into a fight. It was the fight of the decade, maybe the century. Not because Jason trounced his two assailants (he didn’t), and not because it was a fair fight — it wasn’t — but because of what happened the next morning.

But, wait, rewind a little. Let me tell you about Jason before everything changed.

Jason Padgett: Jock, Underachiever… Time Traveler?

The year was 2002 but, looking at Jason, you wouldn’t know it.

It was as if he’d been beamed straight from the 80s. A grungy time-traveler left stranded in the future, perhaps a consequence of an evil genius’s twisted revenge plot gone awry.

His blonde hair was cut into a mullet.

Attire: t-shirt with ragged, cut-off sleeves — as if he’d gnawed them off himself, like your dog might when left alone, bored. And the finishing touch, transforming him from trucker-stop chic into a form of trailer-park fashion so common you’d mistake it for an official uniform: he tucked his browning white tee into tight, faded jeans.

Plus a leather jacket. The leather jacket.

Just as The Lord of The Rings hinged on the whims of The One Ring, Jason’s story hinges on The One Leather Jacket.

At 31, with a daughter, he looked almost like an awkward teenager, except — barring Mike Tyson and steroids — I’d never seen a teen so well-muscled.

His hobbies included drinking beer — the existence of which, he liked to say, implied that there must be a God — skydiving, cliff-jumping, and thrill seeking generally.

He’d bounced around college for a while, but books were not his scene. In his own words, “I cheated on everything, and I never cracked a book.”

At least, that was Jason before the attack.

The Attack: When A Bar Fight Is A Blessing

The attack happened on Friday the 13th — a superstitious day, to be sure. If Jason had stayed in, he wouldn’t have ended up in the hospital.

My grandmother likes to say that the one week when she doesn’t play the lotto will be the one week that her numbers are called.

For Jason, if he’d stayed in and avoided the hospital, he would have missed out on the equivalent of a winning lottery ticket.

It happened at a karaoke bar near his home.

Two men attacked him from behind, punching him in the back of the head. The blows knocked him to the ground.

They then kicked him until he handed over his prized leather jacket. Worth maybe, if we’re being generous, 40 bucks on eBay.

An exchange more than worth it for Jason.

He ended up in the hospital with a concussion and a bruised kidney, but was released that same night.

When he awoke the next morning, everything was different.

Jock Today, Savant Tomorrow

An example of Padgett’s fractal art.

Today, Jason is one of 40 known cases of acquired savant syndrome. He sees mathematics. He can draw complicated geometric fractals by hand.

When the sun glints, he sees the arc.

Before, he worked at a furniture store.

Now, he’s an aspiring number theorist and an artist.

He draws what he can see and then sells it. He’s even written a book about the experience, Struck By Genius, with an upcoming adaptation for the silver screen.

All because someone punched him in the back of the head.

That’s what I want to be. The convincing fist that transforms you into a number theorist.

Except, no, maybe that’s not right.

…I know.

I want to be the friendly surgeon that communicates with you via email. I teach you how to remove a spleen, and then you, kitchen knife in hand, do it yourself.

Yeah. That’s who I want to be. Email-spleen-remover guy.

The Toughest Part of Behavior Change: Remembering to Change

For Jason, radical behavior change was the result of someone striking him in the back of the head.

For you and me, that sort of change is decidedly more painful than a concussion, as anyone who’s attempted to lose weight can tell you.

Let me know if this scenario sounds familiar.

You want to change something about yourself.

Maybe you want to be friendlier.

Let’s say you’ve read about operant conditioning and positive reinforcement and you think, hey, this just makes sense — I should treat the people around me better.

So this becomes a goal: treat your colleagues better.

And, to do this, your plan is not more cowbell, but more compliments. Criticism sucks. No one likes receiving it.

Solution: more positive feedback.

So you set this goal.

And then you forget about it.

You go to work, critique people like usual, come home, and then realize: I was going to make a change.

But I didn’t even think about it when the opportunity was present.

I just kept acting out of habit, on autopilot, going through the same motions. Like Sisyphus, doomed to repeat my sentence for eternity.

All intended behavior change suffers from this flaw: forgetting to execute the new behavior when it’s applicable.

Maybe you want to start taking the stairs more, but every night you’re so tired when you get back to your apartment that you opt for the elevator.

Or you want to wake up earlier, but every morning you silence your alarm.

What can be done? Is it hopeless?

No.

If-Then Rules Are A Real Life Cheat Code

…what if I told you that life has cheat codes?

That there are certain techniques you can use to make it more likely that you’ll achieve anything you want? Fully-general goal techniques that will increase your probability of success?

Sounds pretty good, right?

These exist.

They’re buried in textbooks, in scientific papers, across a dozen disciplines. Psychology, cognitive science, operations research, game theory, economics, and more.

Today’s email is about one of those cheat codes.

A way to solidify and increase the odds of permanent behavior change.

A tool to move you from who you are now, to who you want to be.

Today’s email is about if-then rules.

If-Then Rules Prevent Breast Cancer

Comic by Vicki Jacoby.

Let me tell you a story. About boobs.

Orbell, Hodgkins, and Sheeran (1997) rounded up a bunch of women who all shared the same goal.

They wanted to perform a breast self-examination, or BSE, sometime during the next month. You know what I’m talking about: where women feel for lumps in order to detect breast cancer.

The authors of the study split participants into two groups.

The first group recited an “implementation intention”, which is just newly invented jargon for “if-then rule.” These are of the form, “If [situation], then [behavior].”

For instance, a participant in the study might form an intention like, “If I’ve just finished washing my hair in the shower, I will perform a breast self-exam.”

Or maybe, “If it’s the first Wednesday of the month, I will perform a breast self-exam while changing into comfortable clothes after work.”

The second group didn’t create any if-then rules — they just had the goal of performing a breast self-exam.

The result?

100% of the if-then group successfully performed a breast self-exam, while only 53% of the second group did so.

With one simple if-then rule, recited in probably less than 60 seconds, participants doubled their odds of goal success.

If-Then Rules Are Very Effective, Even Across Different Circumstances

The effectiveness of if-then rules for behavior change has since been confirmed many times, in many circumstances. They’ve been used to:

  • Increase the likelihood of implementing a vigorous exercise program (29% → 91%). In contrast, an entire motivational intervention that focused on the danger of coronary heart disease raised compliance by a mere 10 percentage points, from 29% to 39%.
  • Hasten activity resumption after joint replacement.
  • In one study, forming if-then rules for eating healthy foods reliably increased the rate at which people did so.
  • In another instance, drug addicts undergoing withdrawal were given the task of creating a brief resume before 5pm that evening. Of those who didn’t form implementation intentions, none were successful. Of those who did, 80% were successful.
  • This effect has even been observed in those with damage to the prefrontal cortex — the front part of the brain, sometimes called the seat of reason. Forming the implementation intention to work quickly when given a certain stimulus — in this case, the number 3 while completing a computer task — increased the speed at which participants did so.
  • Here’s my favorite example: implementation intentions can make you less sexist. In one study, participants formed the if-then rule, “If I see a woman, I will ignore her gender!” The results? No automatic activation of stereotypical beliefs.
  • This has since been replicated both for the old (“Whenever I see an old person, I tell myself: Don’t stereotype!”) and the poor (“Whenever I see a homeless person, I ignore that he is homeless.”)

At least 94 similar studies have been conducted and since been integrated into a meta-analysis (n=8461). The analysis found that implementing this extremely simple technique had an effect size of d=.65.

What does that mean?

Let’s say that, when it comes to achieving goals, you have exactly average performance — 50% of people do worse than you, and 50% do better. (This is just an example. Given that you’ve read this far, you’re almost certainly above average.)

Given an effect size of .65 for implementation intentions, this would mean that — by implementing relevant if-then rules — you’d improve your goal-achieving-ability by .65 standard deviations.

Which is enough to outperform roughly 20% more people. Just by adding these if-then rules, an average goal achiever would end up outperforming about 70% of the population.
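If you want to sanity-check that arithmetic yourself, here’s a minimal sketch in Python, assuming (as the standard interpretation of d does) that goal achievement is normally distributed:

```python
from statistics import NormalDist

# An average goal achiever starts at the 50th percentile (z = 0).
# An effect size of d = 0.65 shifts them up 0.65 standard deviations.
d = 0.65
percentile = NormalDist().cdf(d)
print(f"{percentile:.1%}")  # ~74.2%, close to the ~70% figure cited above
```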

Oh, and here’s a neat tip: if-then rules can themselves be supercharged. Steller (1992) enhanced goal achievement by having participants form an implementation intention, and then adding “I strongly intend to follow the specified plan!”

You Should Use If-Then Rules – Here’s How

I’m excited about this technique.

It costs nothing to implement, and it will very probably have a substantial impact on your life — if you bother trying it out.

Here’s how: Come up with some if-then rules, either write them down or say them aloud, and voila!, suddenly you’re more likely to achieve whatever it is that you want.

Plus, you can apply this to anything. It’s a fully general technique.

So why wouldn’t you?

The general template is straightforward: If [situation], then [behavior]. The idea is to pair a concrete scenario with a behavior you want to enact.

Here are some examples:

  • If I’m mindlessly browsing the web, refreshing Reddit, I will instead pick up and read a book.
  • When I go out to eat with friends, I will order a salad.
  • If I have just finished dinner, I will write 500 words.
  • If I’m writing and interrupted, I will ignore it.
  • If I add something to my Amazon cart, I will wait 24 hours before purchasing it.
  • When I get my paycheck, I will set aside 10% as savings.

And my personal favorite: if I’m attacked at a bar, I will become a number theorist.

P.S. You’ve read this far – want more? Get articles like this emailed directly to your inbox, just fill out the form below. Thanks!


 

Sources

  1. Orbell, Sheina, Sarah Hodgkins, and Paschal Sheeran. “Implementation intentions and the theory of planned behavior.” Personality and Social Psychology Bulletin 23.9 (1997): 945-954.

  2. Gollwitzer, Peter M., and Paschal Sheeran. “Implementation intentions and goal achievement: A meta‐analysis of effects and processes.” Advances in experimental social psychology 38 (2006): 69-119.

  3. Gollwitzer, Peter M. “Implementation intentions: strong effects of simple plans.” American Psychologist 54.7 (1999): 493.

  4. Steller, Birgit. Vorsätze und die Wahrnehmung günstiger Gelegenheiten. [Implementation intentions and the detection of good opportunities to act]. tuduv-Verlag-Ges., 1992.

Prolonged Eye Contact and Attraction: What The Science Tells Us

Edit: Hey guys! This has proved to be one of the most popular articles on the site, so I’ve created a supplemental download on techniques for improving eye contact. Enter your email below (or on any of the forms scattered around the site), and I’ll send it to you, along with ~2 emails per week on research-backed techniques for achieving anything.


Belladonna means “beautiful woman” in Italian, but it’s also the name of a type of plant. The origins of the term belladonna are uncertain, but date back to at least 1554.

It’s been suggested (and this is my favorite theory) that the name might be related to belladonna’s use as a cosmetic. Women would consume the plant in order to dilate their pupils, in an attempt to enhance beauty.

The only problem? Belladonna (sometimes called nightshade) is poisonous.

Richard Pultney’s 1757 paper, “A brief botanical and medical history of the Solanum Lethale, Bella-donna, or Deadly Nightshade,” recounts this tale:

Its relaxing quality is very surprising, as appears by that memorable case… of a lady’s applying a leaf of it to a little ulcer, suspected to be of the cancerous kind, a little below her eye, which rendered the pupil so paralytic, that it lost all its motion for some time afterward: and that this event was really owing to that application, appears from the experiment’s being repeated with the same effect three times.

But they were really onto something! This is the craziest part of the whole thing. (Suffering for fashion is passé.) Hess (1965) took two pictures of the same woman, presented them to male subjects, and asked them to describe the woman in the picture. The researchers altered the photos so that one had slightly larger pupils. By and large, the male subjects preferred the woman with the larger pupils.

Try it:

(Two photos of the same woman: small pupils on top, large pupils on the bottom.)

(The one on the bottom is the one that you’re supposed to find more attractive, although I’ve just terrifically biased you by telling you that.)

This has since been replicated at least five times.

Let’s just take a minute and reflect on this. Women in 16th century Italy anticipated the findings of modern scientific research by about 400 years. They not only discovered that belladonna reliably increases pupil size, but they also noticed that men were attracted to that.

I propose a hypothesis similar to the efficient markets hypothesis. We’ll call it the efficient beauty hypothesis: if a beauty-increasing cosmetic intervention exists, some enterprising individual somewhere will discover it.

You might wonder, then: are women interested in men with large pupils? Tombs and Silverman’s 2004 paper, “Pupillometry: A sexual selection approach” tried to answer this question. The paper includes this graph:

The relationship between prolonged eye contact and attraction.

You’ll notice that women find average pupil sizes (on men) the most attractive, while men subscribe to the Texan, bigger-is-better philosophy. The authors additionally report that, “Further investigation revealed that females attracted by large pupils also reported preferences for proverbial bad boys as dating partners.”

At this point, you might wonder why men find large pupils attractive. And, of course, evolution has good reason for that, as confirmed by a 2007 study:

We found an increase in mean pupil diameter for sexually significant stimuli during the fertile phase and this pupillary change was also specific to pictures of the participants’ actual sexual partners. Moreover, this effect was only seen for women who did not use oral contraceptives. These findings confirm that women’s attention for sexually significant stimuli is higher during their fertile phase of the menstrual cycle, and that changes in sexual interest are implicitly measurable using pupillometry.

Or, in plain English, fertile women tend to have larger pupils.

Motivation

In Elana Clift’s Honors thesis, “Picking Up and Acting Out: Politics of Masculinity in the Seduction Community,” she argues that the “pick up artist” movement is the result of the lack of available dating scripts for young men. Back in, say, Victorian England, everyone knew how this whole relationship thing worked. Today, we’re all horribly confused.

I was sorta convinced by that for a while, and I think that explains some of it, but now I’m plagued by doubt. Lots of pick-up strikes me as actively toxic. I mean, yeah, especially to women — there are a disproportionate number of vocal misogynists associated with the “manosphere” generally — but I mean to men, too: pick-up is an advertiser’s wet dream. Nothing sells better than insecurity, and what more poignant insecurity than masculine identity and status anxiety about attractiveness? (Whenever you hear the phrase “real” men, ask what they’re selling.)

Of course, my concerns here are hardly limited to men, although I’m more familiar with the struggles of young men everywhere. Cosmopolitan magazine is the female-equivalent of pick-up, telling young women that they need to fit into some sort of mold in order to attract a guy — that they shouldn’t answer the phone on the first ring or whatever — and I’m sure lots more nonsense which isn’t even on my radar, but probably ought to be.

Which brings me to the topic at hand: eye contact. These unsavory actors sell prolonged eye contact as some sort of panacea. An actual example I found with 10 seconds of googling: “Master These Eye Contact Techniques To Create Powerful Attraction,” complete with tips that the author promised “will blow my mind.” (Hint: they didn’t.) Another blog targeted at “Helping men reclaim their masculinity and their relationships,” (gag) includes this gem: “…strong eye contact is difficult to maintain if you do not have the confidence to back it up (thus making it an honest signal).”

Yeah, right. Because if you don’t maintain strong eye contact, it’s because you lack confidence, and definitely not because you haven’t yet mastered the serial killer’s thousand yard stare.

Frankly, this all smacks of the purest bullshit. Evolution has spent billions of years and computational cycles optimizing male-female relations. If maintaining eye contact with your crush is so effective, why don’t people just do it naturally? Could advising people to maintain strong eye contact be harmful? Maybe unnaturally strong eye contact just comes off as creepy.

I decided to find out.

The Evidence on Prolonged Eye Contact

An interlude during which the author does a lot of research.

My (somewhat begrudging) subjective feeling after reading through 5 or 6 relevant papers is that, yes, the pick-up artists are right, the majority of men ought to be making more eye contact. The case for women is less clear. As far as I can tell, too much eye contact is always better than too little, and eye contact combined with a smile is difficult to get wrong.

My neat evolution-has-optimized-eye-contact argument has at least one damning flaw: children learn the association between eye contact and liking. It’s not innate.

The association between gaze and liking appears to be learned. Children do not use eye contact to judge affiliation and friendship until about age 6 (Abramovitch & Daly, 1978; Post & Hetherington, 1974).

Now, is there such thing as too much gaze? Yes. Moderate gaze is better than constant gaze:

Gaze also influences people’s liking for each other, with moderate amounts of gaze generally preferred over constant or no gaze (Argyle, Lefebvre, & Cook, 1974; Exline, 1971).

Bu-u-u-ut constant eye contact is still better than no eye contact:

British college students rated a same-sex peer they met in an experiment as more pleasant and less nervous when the person gazed at them continuously rather than not at all (Cook & Smith, 1975).

Compare that with a mock interview study, which had students either exhibit low, natural, or high gaze. Notably, researchers defined high gaze here as near-constant eye contact. They found no difference in likability between normal and high gaze:

High levels of gaze do not differ from normative gaze patterns in earning more favorable endorsements for hiring from an interviewer, in conferring greater credibility, in increasing attraction and in receiving favorable relational communication interpretations.

Indeed, there were even some benefits to near-constant gaze. Interviewers labeled near-constant gazers (not to be confused with goats) as more attractive, more intimate, and more dominant than those who displayed normal levels of eye contact. So, again, more evidence that too much eye contact is way better than too little.

Those who make lots of eye contact are even judged to be more intelligent (!):

Wheeler, Baron, Michell, and Ginsburg (1979) reported a positive correlation between an interviewee’s eye contact with an interviewer and estimates made by observers of the interviewee’s intelligence.

And it’s not even confined to those you look at. If someone sees you making a lot of eye contact with someone, they’ll like you more than if you didn’t:

The positive feelings associated with gaze generalize to observers, who favor people when they gaze at moderate rather than low levels while approaching others (Gary, 1978a) or in social interactions (Abele, 1981; Shrout & Fiske, 1981).

Of course, people like it most of all when you look at them, which a 2005 study, “The look of love: gaze shifts and person perception,” verified.

Ratings of likability were elevated when social attention was directed toward rather than away from the raters.

In the same study, men rated women who paid attention to them not only as more likable, but more attractive, too:

Whereas gaze cues elevated ratings of likability among both male and female participants, only the men displayed gaze-related effects on person evaluation when the physical attractiveness of the targets was assessed.

Here’s another belief I held that turns out to be wrong. I’ve observed that people look at the speaker while listening, and look away while speaking. But this turns out to be totally okay to violate (surprise!) and you can stare all the time if you want (or, at least, high status people do it):

Equivalent amounts of gazing while speaking and listening were found with research participants who were given high status or who were discussing issues on which they had expertise (Ellyson, Dovidio, & Corson, 1981; Ellyson, Dovidio, Corson, & Vinicur, 1980).

And more eye contact makes you more powerful:

Dovidio and Ellyson (1982) reported that high gazing-while-speaking ratios were directly related to ratings of power in an interaction.

Want to make friends? Have you tried staring at people?

College women gazed more at a female confederate when they were trying to make friends (Pellegrini, Hicks, & Gordon, 1970), and college men gazed more at a woman when they wanted to interest her in a social conversation (Lefebvre, 1975).

It even holds for imaginary friends!

Mehrabian (1968a, 1968b) reported that research participants gazed more when they approached an imaginary person they liked rather than disliked.

And real ones, too:

Russo (1975) reported greater amounts of eye contact between elementary school children who were friends rather than nonfriends.

What does eye contact mean, though?

While doing keyword research for this, I noticed that a lot of men and women are confused about what prolonged eye contact means. Does it indicate sexual interest? Well, it definitely can:

Participants in a study by Griffitt, May, and Veitch (1974) gazed more at opposite-sex peers when they had previously been exposed to sexually arousing slides.

It might even imply that you’re smokin’ hot (and trust me, gentle reader, you totally are):

Coutts and Schneider (1975) reported positive correlations between gaze directed by research participants toward opposite-sex peers and experimenter ratings of the peers’ physical attractiveness.

But not always. People will look at you more even if you’re just plain nice to them:

People gazed more after receiving positive evaluations (Coutts, Schneider, & Montgomery, 1980; Exline & Winters, 1965; Walsh et al., 1977) or warm nonverbal responses (Ho & Mitchell, 1982).

Is eye contact ever bad?

Even if you’re hitchhiking, more eye contact is better:

Drivers were more likely to stop for gazing hitchhikers (M. Snyder, Grether, & Keller, 1974), pedestrians were more likely to help a gazing experimenter pick up dropped coins (Valentine, 1980) and dropped questionnaires (Goldman & Fordyce, 1983), and bystanders were more likely to help an injured gazing jogger (Shotland & Johnson, 1978).

Or when you’re buying cereal, according to the 2014 study, “Why Is Cap’n Crunch Looking Down at My Child?”:

We showed that eye contact with cereal spokes-characters increased feelings of trust and connection to the brand, as well as choice of the brand over competitors.

Now, you might wonder: are there ever times where you shouldn’t make so much eye contact? Well, when waiting for a green light:

Ellsworth et al., (1972) and Greenbaum and Rosenfeld (1978) had experimenters stand on street corners and gaze constantly or not at all at pedestrians and motorists who were waiting for a red light. When the light changed to green, pedestrians and drivers crossed the intersection significantly faster when they had received constant gaze from the experimenter.

But just dress nice and you’re okay:

For example, pedestrians did not cross the street as fast to escape a staring experimenter when the experimenter was dressed and made up to be physically attractive (Kmiecik, Mausar, & Banziger, 1979).

Or add a smile:

People were also less likely to avoid a staring experimenter when the experimenter smiled (Elman, Schulte, & Bukoff, 1977).

Sex Differences

It turns out, though, that there are sex differences. Women (on average) respond positively to lots of eye contact, while men prefer less. For instance, if you want a female friend to reveal all her secrets, eye contact is good:

Female speakers disclosed more personal information about themselves to listeners who gazed. Female speakers also liked gazing listeners more than nongazing listeners. (Ellsworth and Ross 1975)

For men, though, the opposite is true:

Male speakers, in contrast, disclosed more and felt greater liking when the listener did not gaze.

A similar phenomenon holds with asking for help when picking up coins:

For example, women gave more help in picking up dropped coins to a female experimenter who gazed at them (Valentine & Ehrlichman, 1979). Men gave more help to a male experimenter who did not gaze at them.

Women even like it when they’re told that a man looked at them an unusually high amount:

Kleinke et al. (1973) introduced college men and women in pairs and left them in a room to get acquainted. After their conversation, an experimenter told participants that one person (whose gaze was supposedly recorded through a one-way mirror) had gazed at the other person an unusually high, an average, or an unusually low amount of the time. Women were most favorable toward men whose gaze had ostensibly been high.

But not men:

Men’s reactions were exactly opposite. Men were most favorable toward women when they were told the woman’s gaze or their own gaze had been low.

I wonder if this is just male insecurity? If I was told some chick had been staring at me, I might wonder, “Is there something wrong with my hair? Has one of my legs grown two legs and walked off of its own volition?”

Does eye contact cause love?

To see is to devour.
—Victor Hugo, Les Misérables

Finally, though, what you really want to know: if I maintain eye contact with my crush, will they fall madly and deeply in love with me? Well, sorta. If you convince someone to maintain eye contact with you for ~2 minutes, they’ll (on average) be more attracted to you. The experimenters in this study told their subjects to maintain eye contact in order to “tune their extra-sensory abilities” and, afterwards, they rated their partners as significantly more attractive than controls. Hey, worth a shot, right?

Actually, it turns out, just tricking your crush into thinking they look at you a lot is enough. (“Hey, Maria, why do you keep looking at me? Is it because you’re in lo-o-o-ove with me?”)

In one of these, Kleinke, Bustos, Meeker, and Staneski (1973) did not actually induce their subjects to gaze at their partners. Instead the subjects were told that they had done so. This produced modest increases in attraction for the partner.

Further Reading

  • If you want to settle down with a book on relationships, the best scientific overview I’ve read is the Handbook of Relationship Initiation. For lighter fare, The Moral Animal is pretty entertaining.
  • If you liked this, you’ll love the Social Issues Research Centre’s “Guide to Flirting.”
  • If you want to dive into the original sources for yourself (or look up references), start with “Gaze and eye contact: a research review,” which is where the bulk of this information came from. (Where it didn’t, I’ve indicated in the text.)
  • One of the most useful bits of research to come out of the study of human relationships is the notion of the “mere exposure effect” which suggests that the more you see someone (or something), the more you’ll come to like them.

What Is The Purpose of Science? Algorithm Discovery

Consider the trial of Amanda Knox. What’s the purpose of the legal process here?

Well, let’s think about it. Here’s how a trial works (at least on television): the prosecution and the defense get up in front of the jury. They present evidence — it could be DNA, surveillance videos, witness testimony, or even a tic-tac-toe-playing chicken. Closing arguments follow. Then the jury deliberates and returns a verdict.

Now, the purpose of all this evidence is ostensibly to get at the truth. To figure out what it is that really happened. Did Amanda Knox kill Meredith Kercher? Or not?

We can visualize the jury, then, as a sort of machine. It takes in evidence and then applies that evidence to update two competing hypotheses: guilty or not guilty. At the end of the trial, the jury spits out its verdict.

[Figure: jury-inference]

Science works in the same manner.

What’s a hypothesis?

Okay, I haven’t been entirely honest. A jury doesn’t have just two hypotheses floating around in its collective head. There are a bunch of different possible explanations. When they consider the most likely explanation (“someone else did it”), they decide guilty or not guilty based on that. So that box above, with the G and NG for guilty or not guilty, really ought to contain all possible explanations.

What are these explanations, really? They’re scenarios which could have produced the evidence. Amanda Knox murdering Meredith Kercher is one possible scenario. Rudy Guédé murdering her is another, or maybe Raffaele Sollecito did it. Or maybe it was aliens or a government conspiracy.

But what’s a scenario here, really? Consider the plight of physicists. They’re trying to uncover the underlying laws of the universe. They look at the world as it is — the evidence — and ask, “What underlying structure produced this?” Much like a paleontologist who carefully brushes away dirt to reveal a fossil.

Now, what’s a structure that produces data, evidence? An algorithm! Physicists are seeking not the laws of the universe, but the algorithm of the universe — what produced it all.

We can think, then, of science as the process of collecting evidence and then updating the likelihood of possible algorithms that might have produced it. Science is the process of algorithm discovery.

[Figure: updating-hypotheses]

Here the colored circles are algorithms (hypotheses) and their size is their likelihood.
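To make the “updating” concrete, here’s a minimal sketch of the idea in Python. The hypotheses, priors, and likelihoods are all invented for illustration:

```python
# Bayesian updating over competing hypotheses, in miniature.
# All numbers here are hypothetical, purely for illustration.
priors = {"Knox": 0.25, "Guede": 0.25, "Sollecito": 0.25, "aliens": 0.25}

# P(evidence | hypothesis): how well each scenario predicts the evidence.
likelihoods = {"Knox": 0.02, "Guede": 0.60, "Sollecito": 0.05, "aliens": 0.001}

# Bayes' rule: posterior is proportional to prior * likelihood, then normalize.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for hypothesis, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{hypothesis}: {p:.3f}")
```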


Creativity, Fan Fiction, and Compression

I’ve written before about the relationship between creativity and compressibility. To recap, a creative work is one that violates expectations, while a compressible statement is one that’s expected.

For instance, consider two sentences:

  • Where there’s a will, there’s a way.
  • Where there’s a will, there’s a family fighting over it.

I suspect you find the second more creative.

Three more examples of creative sentences:

  • When I was a kid, my parents moved a lot. But I always found them.
  • Dad always said laughter is the best medicine, which is why several of us died of tuberculosis.
  • A girl phoned me the other day and said, “Come on over, there’s nobody home.” I went over. Nobody was home.

Given that less predictable sentences are more creative, and less predictable sentences are less compressible, creative works ought to be less compressible than non-creative ones. And, indeed, I found some evidence for this in a previous experiment.

But that wasn’t too compelling, as it compared technical, repetitive works to novels. This time, I decided to compare very creative writing to normal creative writing.

Methods

The idea then is to compare the compressibility of amateur creative writing with that of experts. To accomplish this, I took 95 of the top 100 most downloaded works from Project Gutenberg. I figure that these count as very creative works given that they’re still popular now, ~100 years later. For amateur writing, I downloaded 107 fanfiction novels listed as “extraordinary” from fanfiction.net.

I then selected the strongest open source text compression algorithm, as ranked by Matt Mahoney’s compression benchmark: paq8pxd. I ran each work through the strongest level of compression, and then compared the ratio of compressed to uncompressed size for each work.
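Here’s a rough sketch of that measurement in Python. I’m substituting the standard library’s lzma for paq8pxd (which is a standalone binary), so the absolute ratios will differ, and the corpus path is a made-up placeholder, but the procedure is the same:

```python
import lzma
from pathlib import Path

def compression_ratio(path: Path) -> float:
    """Compressed size / uncompressed size; lower means more compressible."""
    raw = path.read_bytes()
    compressed = lzma.compress(raw, preset=9)  # strongest lzma preset
    return len(compressed) / len(raw)

# Hypothetical directory layout; point this at your own corpus.
for work in sorted(Path("corpus/gutenberg").glob("*.txt")):
    print(work.name, round(compression_ratio(work), 3))
```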

Analysis and Results

I plotted the data and examined the outliers, which turned out to be compressed files that my script incorrectly grabbed from Project Gutenberg. I removed these from the analysis, and produced this:

[Figure: fanfic-graph1]

Here the red dots are fanfiction novels, while the blue-ish ones are classic works of literature. If the hypothesis were true, we’d expect them to fall into distinct clusters. They don’t.

Comparing compressibility alone produces this:

[Figure: fanfic-graph2]

Again, no clear grouping.

Finally, I applied a Student’s t test to the data, which should tell us if the two data sets are distinguishable mathematically. Based on the graphs, intuition says it won’t, and indeed it doesn’t:

The p-value here is 0.1755, which is not statistically significant. The code and data necessary to reproduce this are available on GitHub.
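For reference, the test itself is a one-liner with scipy. The ratios below are placeholders; the real values are in the repo:

```python
from scipy import stats

# Placeholder compression ratios; the real data lives in the GitHub repo.
literature_ratios = [0.31, 0.29, 0.33, 0.30, 0.32]
fanfiction_ratios = [0.32, 0.30, 0.34, 0.31, 0.33]

t_stat, p_value = stats.ttest_ind(literature_ratios, fanfiction_ratios)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # p > 0.05: indistinguishable
```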

Discussion

I must admit a certain amount of disappointment that we weren’t able to distinguish between literature and fanfiction by compressibility. That would have been pretty neat.

So, what does this failure mean? There are at least six hypotheses that get a boost based on this evidence:

  • Creativity and compression are unrelated.
  • A view of humans as compressors is wrong.
  • Human compression algorithms (the mind) and machine compression algorithms are distinct to the point where one cannot act as a proxy for the other.
  • Compression algorithms are still too crude to detect subtle differences.
  • Fanfiction is as creative as literature.

And so on. And, of course, it’s possible that I messed up the analysis somewhere.

Of all of these, my preferred explanation is that compression technology (and hardware) are not yet good enough. Consider, again, the difference between a creative and a not-creative sentence:

  • Honesty is the best policy.
  • I want to die peacefully in my sleep, like my grandfather… not screaming and yelling like the passengers in his car.

The first is boring, right? Why? Because we’ve heard it before. It’s compressible — but how’s a compression algorithm supposed to know that? Well, maybe if we trained it on a corpus of the English language, gave it the sort of experience that we have, then it might be able to identify a cliche.

But that’s not how compression works right now. I mean, sure, some have certain models of language, but nothing approaching the memory that a human has, which is where “human as computer compression algorithm” breaks down. Even with the right algorithm — maybe we already know it — the hardware isn’t there.

Scientific American estimates that the brain has a storage capacity of about 2.5 petabytes, which is sort of hand-wavy and, I’d bet, on the high side, but every estimate I’ve seen puts the brain at least a couple orders of magnitude beyond 4 gigabytes. I don’t know of any compressors that use memory anywhere near that, and certainly none that use anything like 2.5 petabytes. At the very least, we’re limited by hardware here.

But don’t just listen to me. Make up your own mind.

Further Reading

  • The idea that kicked off this whole line of inquiry is Jürgen Schmidhuber’s theory of creativity, which I’ve written up. If you prefer, here’s the man himself giving a talk on the subject.
  • To reproduce what I’ve done here, everything is on GitHub. That repository is also a good place to download the top 100 Gutenberg novels in text form, as rate-limiting makes scraping them a multi-day affair.
  • I similarly compared technical writing and creative writing in this post and did find that technical writing was more compressible.
  • For an introduction to data compression algorithms, try this video.
  • Check out the Hutter Prize, which emphasizes the connection between progress in compression and artificial intelligence.
  • For a ranking of compressors, try Matt Mahoney’s large text compression benchmark. He’s also written a data compression tutorial.

Mike Tyson and Steroids

This is a picture of Mike Tyson at age 13. Or at least that’s what the Daily Mail says. I’m skeptical because I sure as hell have never seen a 13-year-old that looks like that.

Here’s Mike at 14. Still big, but maybe a little more reasonable:
By that point, he’d already been boxing for two years. At the age of 12, he was able to bench more than 200 pounds:

At 12, Tyson was arrested for purse snatching and sent to the Tryon School for Boys. He soon met Bobby Stewart, a counselor and former boxer who saw in Tyson a pugnacious kid who had grown to 200 pounds and could bench press more than his weight.

The most popular “explanation” is that Mike Tyson is some sort of genetic freak. He just had a lot of natural potential and that’s why he looks like that. In general, though, the phrase “because genetics” sounds a lot like “because magic.”

What’s more likely: Tyson was a kid with a one-in-a-million natural muscular physique, or that he was on the juice? I would give someone 10 to 1 odds that Mike used steroids at least some point in his professional career, and maybe as young as the age of 13.

This is even less surprising if we consider the rates at which teenagers are abusing sex hormones. From the Palo Alto Medical Foundation:

Five to 12 percent of male high school students and 1 percent of female students have used anabolic steroids by the time they are seniors.

But, you know, still. Steroids at 13? Ah, but you forget Mike’s background:

Tyson, now 47 and retired, described his ferocious appetite for drink and drugs that dated back to trying cocaine at the age of 11 and first being given alcohol as a baby in New York.

So, even at age 11, Mike wasn’t a stranger to hard drugs. I’m willing to concede, though, that Mike might not have abused steroids in his early teens. Perhaps the two pictures above are taken at a flattering angle, dated incorrectly, or something else.

That said, I’m still confident that Mike was on the juice at some point in his professional career. From Wikipedia:

By 1990, Tyson seemed to have lost direction, and his personal life was in disarray amidst reports of less vigorous training prior to the Douglas match… Contrary to reports that Tyson was out of shape, sources noted his pronounced muscles, absence of body fat and weight of 220 and 1/2 pounds, only two pounds more than he had weighed when he beat Michael Spinks 20 months earlier.

So, I’m supposed to believe that he was 220 pounds of lean muscle at a mere 5’10”? Give me a break. These stats are not attainable without performance enhancing drugs. (Although, certainly, one might argue that Iron Mike was not that lean.)

Oh, and don’t get me started on drug testing:

Confessing he had taken “blow” and “pot” before the bout, he said: “I had to use my whizzer, which was a fake penis where you put in someone’s clean urine to pass your drug test.”

Or get this. Here’s what Mike had to say when asked, “What would you do differently if you could start training all over again?”

Growth hormones. I would’ve used the growth hormones like the rest of the athletes.

Here he is in another interview:

No, no. All the fighters are on it, the ones that can afford it are on it. That’s my opinion only, I haven’t seen nobody do it but it’s common knowledge.

Steroid usage in the large

Now, I want to step back for a moment. My goal here is not to pick on Mike Tyson, who possesses a certain je ne sais quoi, but:

  • To pick on those (most of the public) who are too ready to believe that most muscleheads are genetic miracles rather than walking meat billboards for steroid use.
  • To point out the prevalence of steroid use at the elite level.

In pursuit of my second point, consider that the livelihoods of star athletes depend on their ability not only to perform, but to perform better than everyone else. Albert Pujols’s 10-year contract, for instance, is worth $240 million. Alex Rodriguez was the highest paid player in the MLB last season, earning $28 million. The median salary for an MLB player, in contrast, is around a million. We’re talking a $27 million incentive to find some sort of undetectable super drug that transforms a median player into the best player.

And that’s just baseball. Forbes’ list of the top paid athletes has about 25 players in golf, tennis, football, even cricket, earning more.

How powerful are performance enhancing drugs?

Of course, it’s not at all obvious that steroids can transform someone from a pretty good baseball player into one of the best. Perhaps, you might think, steroids are not all that effective. The New York Department of Health would have us believe that “[S]teroids cannot improve an athlete’s agility or skill.”

Here’s how an HIV positive man describes testosterone replacement:

At that point I weighed around 165 pounds. I now weigh 185 pounds. My collar size went from a 15 to a 17 1/2 in a few months; my chest went from 40 to 44. My appetite in every sense of that word expanded beyond measure. Going from napping two hours a day, I now rarely sleep in the daytime and have enough energy for daily workouts and a hefty work schedule. I can squat more than 400 pounds. Depression, once a regular feature of my life, is now a distant memory.

An HIV patient like the essayist above would probably inject between 150 and 200mg every two weeks. The higher end of that range would bring someone to the top of the typical male level. A first steroid cycle for an athlete might be around 1000mg every two weeks — more than four times as much.

But what does that translate to, you know, physically? A common myth spread by gearheads who really ought to know better says:

Gear [steroids] is not a magical pill. It makes hard work more rewarding, it doesn’t give results for doing nothing.

But how about some evidence? Okay!

One study placed men into four groups: exercise with testosterone, exercise with placebo, no exercise with testosterone, and no exercise with placebo. The findings? Men who injected testosterone gained strength and lean muscle mass even without exercise:

Among the men in the no-exercise groups, those given testosterone had greater increases than those given placebo in muscle size in their arms (mean [±SE] change in triceps area, 424±104 vs. -81±109 mm2) and legs (change in quadriceps area, 607±123 vs. -131±111 mm2) and greater increases in strength in the bench-press (9±4 vs. -1±1 kg) and squatting exercises (16±4 vs. 3±1 kg).

Dudes not exercising added 20 pounds to their bench press, while those that exercised and juiced added 50 pounds.

Here’s another study:

Increase in one-repetition maximum leg press strength averaged 17.2% with testosterone alone, 17.4% with resistance training alone, and 26.8% with testosterone + resistance training.

To put it another way, sitting around and doing nothing while on testosterone will make you as strong as people who actually train with weights. (These subjects were dosed with 1000mg weekly.)

And that’s just vanilla testosterone. We aren’t even talking about the fun steroids, like trenbolone, which is used to fatten up livestock but much loved by bodybuilders everywhere. It’s literally a steroid intended for bulls — you know, giant muscly cows with horns and shit.

Lest you think I’m citing too much from non-athlete populations, here’s a line from a review of steroid use in athletes:

Strength gains of about 5–20% of the initial strength and increments of 2–5kg bodyweight, that may be attributed to an increase of the lean body mass, have been observed.

In one such study, Rademacher et al. reported that in male canoeists, 6 weeks of Oral-Turinabol administration improved strength and performance, as measured by canoe ergometry, by 6% and 9%, respectively. At the 2012 men’s 1000m kayak single, a 6% difference in performance more than separates last place from first, and kayaking is not even a strength sport.

I should point out, too, that one additional benefit of steroid use is reduced recovery time, which means more time spent training. Ignoring the performance benefits, an Olympian would still take steroids, as they would allow him to train maybe twice as often as an opponent not on them.

The prevalence of performance enhancing drugs among elite athletes

So, we’ve established that:

  • Athletes face millions of dollars worth of incentives to juice.
  • Performance enhancing drugs are very effective at, well, enhancing performance, even among trained athletes.

But while all this is suggestive, maybe elite athletes do play by the rules — either out of moral goodness or fear that they’ll get caught. Maybe the testing infrastructure is good enough.

So what does the actual rate of steroid use among the athletic population look like?

From one anonymous survey:

From the athletes questioned, a number of 64 (85.33%) accepted that they did take doping pharmacological substances.

From an evaluation of doping among Italian athletes:

Over 10% of athletes indicated a frequent use of amphetamines or anabolic steroids at national or international level, fewer athletes mentioning blood doping (7%) and beta-blockers (2%) or other classes of drugs.

Another anonymous survey found that 7% of athletes admitted to doping, in contrast to the 0.81% caught by testing:

Official doping tests only reveal 0.81% (n = 25,437; 95% CI: 0.70–0.92%) of positive test results, while according to RRT 6.8% (n = 480; 95% CI: 2.7–10.9%) of our athletes confessed to having practiced doping (z = 2.91, p = 0.004).

From a confidential survey of former NFL players:

The high-water mark for steroid use occurred in the 1980s, when about one in every five players, 20.3 percent, said they had tried the drugs. Use declined in the 1990s and beyond to 12.7 percent of players, the researchers reported.

I’m a bit skeptical that the 10% figure is useful as anything other than a lower bound. If you just ask people at the gym about their steroid use, for instance, you get much higher rates:

160 responses were received, a 53.3% response rate. Of the 160, 62 admitted having taken steroids (38.8%).

The Tour de France, for instance, has abuse rates much higher than 10%:

Scientists estimated at least 80 percent of riders in the grand tours of France, Spain and Italy were manipulating their blood. It became as routine as “saying we have to have air in our tires or water in our bottles,” Armstrong told interviewer Oprah Winfrey this January, when he finally confessed, after years of lawyer-backed denials, that he doped for all seven of his Tour wins from 1999-2005.

The NY Times reports that more than a third of top finishers have been caught. The actual abuse rates must be higher.

Since 1998, more than a third of the top finishers of the Tour de France have admitted to using performance-enhancing drugs in their careers or have been officially linked to doping.

What’s a man to believe?

So, what are the actual abuse rates among elite athletes? My subjective feeling is more than 10 percent and probably less than 70. I suspect that most athletes have tried them at least once, but chronic use is probably less — maybe around 30 percent, but I’m uncertain. Given that those at the top experience both more pressure and enhanced performance, I suspect that the best players make up a disproportionate portion of abusers.

If you enjoyed this, check out the movie Pumping Iron!

Hindsight Bias In The Media: Talking While Driving

What’s more dangerous: texting and driving or talking on a headset and driving?

If I told you that texting and driving was more dangerous, I predict you’d say, “Well, duh. That’s obvious. Everyone knows that.” But what if I told you the opposite? Would you say the same thing?

Well, you don’t need to, because some joker at Scientific American has done it for you:

After describing a recent study that found that texting by hand and hands-free by voice were equally bad for driving in “Crash Text Dummies” [TechnoFiles], David Pogue writes that “the results surprised me.” It would, in fact, be very surprising if they had showed any difference: the reason that driving performance is impaired when people are making phone calls and texting, hands-free or not, is that such tasks require attention. That’s why a sensible driver would, say, stop talking when navigating a curvy ramp.

This kinda thing gets my blood pressure up. It’s a prime example of hindsight bias in the media. Researchers report something unexpected and then people say, “Well, duh. Everyone knows that!” Except they would produce the exact same response if the researchers had found the opposite.

I can invent explanations for anything, but I don’t mistake my brain’s fairy tales for reality.


Why Psychology Is Not A Science

Doubt is not a pleasant condition, but certainty is absurd.
—Voltaire

I was on Reddit earlier, and an exchange went like this (perfectly illustrating why psychology is not a science):

Bob: Psychology isn’t a science. (downvoted)

Alice: I’m a neuroscientist and while a lot of psychology isn’t very good, you’re just not looking at the right sort of psychology. The media doesn’t report on the right sort of psychology because it’s hard to understand. (upvotes)

Jack: What sort of studies are good studies in psychology?

Larry: One of my favorites is Baumeister’s work on ego depletion — willpower as a fixed resource. It’s a model of what psychology ought to look like. It’s applicable and replicable. (upvotes)

Except, you know, the much venerated model that is Baumeister’s work on ego depletion — nearing 2000 citations — has failed to replicate a bunch of times, makes little sense from a computational model of mind, and is probably false.

At least most published research isn’t false, right? I have some bad news.

Developing Good Research Skills: Compressing Knowledge

I wrote a couple of days ago about how we can think of humans as agents who take in information from the environment, compress that information, and then store it in long term memory. I argued that interesting knowledge is knowledge which improves our ability to compress other knowledge — interestingness signals something is an upgrade to our compressor module.

With that in mind, consider what John Baez recently had to say about developing good research skills.

Keep synthesizing what you learn into terser, clearer formulations. The goal of learning is not to memorize vast amounts of data. You need to do serious data compression, and filter out the noise.

John follows this up with an example out of algebraic topology, a field (pun intended) which I know nearly nothing about and could not follow. The gist seems to be, though, that a whole lot of knowledge is a special case of other, more general knowledge (at least in mathematics), and by climbing Mount Abstraction we can compress old knowledge.

Tools for Compressing Knowledge

I have a head full of junk — disconnected facts, half-baked social theories, psychological studies, programming trivia. It would be very nice indeed if this were all organized around some core guiding principles — if it were a beautiful graph of knowledge, spreading out in every direction, tended like a self-organizing garden, refactoring and elaborating itself. How could I go from the current mess to something more like that?

Well, we can imagine a body of knowledge as a literal body, a corpse. We want to figure out the bones of that knowledge, the deep structure, and hang the rest of it — the facts or “flesh” — on it. The trick to getting at such a skeleton, or building your own, is to seek models. Mental processes, for instance, can often be understood in terms of computation — like when I speak of habits as cache lookups.

Then, when new information comes in, one can hang it on an already built skeleton — connect it to what you already know. John suggests comparing and contrasting different phenomena as a means of compressing his own knowledge.

The effectiveness of both of these techniques — connecting and contrasting — may be a side-effect of the benefits of translating knowledge into new, novel forms.

Pennington (1987) compared highly and poorly performing professional programmers. When trying to understand a program, high performers showed a “cross-referencing strategy” characterized by systematic alternations between studying the computer program, translating it to domain terms, and subsequently verifying domain terms back in program terms. In contrast, poorer performers exclusively focused on program terms or on domain terms without building connections between the two “worlds.” Thus, it seems that connecting various domains and relating them to each other is crucial for arriving at a comprehensive understanding of the program and the underlying problem.
—from The Cambridge Handbook of Expertise

If translation is a component of compressing knowledge, a cheap way to implement this would be to transform ideas into words, writing. Consider how much of the scientific process centers around writing — authoring books, papers, taking notes. Is there evidence to suggest that writing facilitates knowledge compression? K. Anders Ericsson’s landmark paper, “The Role of Deliberate Practice in the Acquisition of Expert Performance” supports such a view:

The writing of expert authors on new topics is deliberate and constitutes an extended knowledge-transforming process, quite unlike the less effortful knowledge-telling approach used by novice writers (Scardamalia & Bereiter, 1991). In support of the importance of writing as an activity, Simonton (1988) found that eminent scientists produce a much larger number of publications than other scientists. It is clear from biographies of famous scientists that the time the individual spends thinking, mostly in the context of writing papers and books, appears to be the most relevant as well as demanding activity. Biographies report that famous scientists such as C. Darwin (F. Darwin, 1888), Pavlov (Babkin, 1949), Hans Selye (Selye, 1964), and Skinner (Skinner, 1983) adhered to a rigid daily schedule where the first major activity of each morning involved writing for a couple of hours.

We might suspect that writing is so effective because it forces knowledge to be retrieved and then restructured, sort of like taking iron ore, heating it, and then reworking it. Sounds a lot like compression, huh?

We can understand the benefits of creating and contrasting analogies through this notion of translation. After all, what is an analogy except a mapping — a translation — between two separate phenomena?

What I’m suggesting then, all together, is that knowledge compression can be understood as a process through which one takes dormant knowledge and transforms it. Among eminent scientists, this transformation has typically taken the form of writing with the intention of revealing the bones of some phenomenon — discovering its skeleton. We need not limit this process of transformation to the written word, though. Transformation happens when translating an idea into mathematics, a computer program, drawing, music, or when attempting to teach it to another. (I know Dan Dennett writes about the benefits of explaining his ideas to bright undergraduates.)

In terms of subjective experience — what it’s like inside our mental workspace — we can think of it as the recall and then reinterpretation of a piece of knowledge. This reinterpretation need not be radical — it could be as simple as connecting two heretofore separate ideas, like compressibility and beauty, problem solving and graph search, or penalties as costs for implementing certain strategies.

The general, compressed principle, then, is: to compress knowledge, recall the information (drag it into consciousness) and think about it in some novel way.

The Science of Problem Solving

feynman-chalkboard

Mathematics is like the One Ring in The Lord of the Rings. Once you’ve loaded a problem into your head, you find yourself mesmerized, unable to turn away. It’s an obsession, a drug. You dig deeper and deeper into the problem, the whole time unaware that the problem is digging back into you. Gauss was wrong. Mathematics isn’t a queen. She’s a python, wrapping and squeezing your mind until you find yourself thinking about the integer cuboid problem while dreaming, on waking, while brushing your teeth, even during sex.

Lest you think I exaggerate, Feynman’s second wife wrote in the divorce complaint, “He begins working calculus problems in his head as soon as he awakens. He did calculus while driving in his car, while sitting in the living room, and while lying in bed at night.” Indeed, the above is a picture of Feynman’s blackboard at the time of his death. It says on it, “Know how to solve every problem that has been solved.” I like this sentiment, this idea of man as problem solver. If I were running things, I think I would have sent Moses down the mountain with that as one of the ten commandments instead of two versions of “thou shalt not covet.”

That’s what this post is about: How do humans solve problems, and what, if anything, can we do to become more effective problem solvers? I don’t think this needs any motivating. I spend too much time confused and frustrated, struggling against some piece of mathematics or attempting to understand my fellow man, not to be interested in leveling up my general problem-solving ability. I find it difficult to imagine anyone feeling otherwise. After all, life is in some sense a series of problems, of obstacles to be overcome. If we can upgrade from a hammer to dynamite to blast through those, well, what are we waiting for? Let’s go nuclear.

A Computational Model of Problem Solving

Problem solving can be understood as a search problem. You start in some state, there’s a set of neighbor states you can move to, and a final state that you would like to end up in. Say you’re Ted Bundy. It’s midnight and you’re prowling around. You’re struck by a sudden urge to kill a woman. You have a set of moves you could take. You could pretend to be injured, lead some poor college girl to your car, and then bludgeon her to death. Or you could break into a sorority house and attack her there, along with six of her closest friends. These are possible paths to the final state, which in this macabre example is murder.

Similarly, for those who rolled lawful good instead of chaotic evil, we can imagine being the detective hunting Ted Bundy. You start in some initial state — the Lieutenant puts you on the case (at least, that’s how it works on television). Your first move might be to review the case files. Then you might speak to the head detective about the most promising leads. You might ask other cops about similar cases. In this way, you’d keep choosing moves until reaching your goal.

Both of these scenarios form a graph. (Not to be confused with the graph of a function, which you learned about in algebra. This sort of graph — pictured below — is a set of objects with links between them.) The nodes of the graph are states of the world, while the links between the nodes are possible actions.

bundy-graph

Problem solving, then, can be thought of as, “Starting at the initial state, how do I reach the goal state?”

highlight-graph

On this simple graph, the answer is trivial:

simple-graph-shortest-path

On the sort of graph you’d encounter in the real world, though, it wouldn’t be so easy. The number of possible chess games — chess itself being a simplification when compared to, you know, actual war — is around \( 10^{120} \), while the number of atoms in the observable universe is a mere \( 10^{81} \). It’s a near certainty, then, that the human mind doesn’t consider an entire graph when solving a problem, but somehow approximates a graph search. Still, it’s sorta fun to imagine what a real-world problem might look like.

giant-graph
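To make the search metaphor concrete, here's a minimal sketch in Python (the states and actions are invented, a toy miniature of the detective example, not a claim about how any real investigation works):

```python
from collections import deque

# A toy state graph: nodes are states of the world, links are actions.
GRAPH = {
    "case assigned": ["case files reviewed"],
    "case files reviewed": ["leads discussed", "similar cases found"],
    "leads discussed": ["suspect identified"],
    "similar cases found": ["suspect identified"],
    "suspect identified": [],
}

def solve(start, goal):
    """Breadth-first search: returns a shortest path of states, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in GRAPH[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

print(solve("case assigned", "suspect identified"))
# ['case assigned', 'case files reviewed', 'leads discussed', 'suspect identified']
```

On a graph this small, exhaustive search is trivial. The interesting question is what the mind does when, as with chess, the graph has more states than the universe has atoms.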

Insight

A change in perspective is worth 80 IQ points.
—Alan Kay

Insight. In the shower, thinking about nothing much, it springs on us, unbidden and sudden. No wonder the Greeks thought creativity came from an outside source, one of the Muses. It’s like the heavens open up and a lightning bolt implants the notion into our heads. Like we took an extension cord, plugged it into the back of our necks, and hooked ourselves into the Way, the Tao, charging ourselves off the zeitgeist and, boom, you have mail.

It’s an intellectual earthquake. Our assumptions shift beneath us and we find ourselves reoriented. The problem is turned upside down — a break in the trees and a new path is revealed.

That’s what insight feels like. But how does it work within the mind? There are a number of different theories and no clear consensus in the literature. With that said, I have a favorite: insight is best thought of as a change in problem representation.

Consider how often insight is accompanied by the realization, “Ohmygod, I’ve been thinking about everything wrong.” This new way of thinking about the problem is a new representation of the problem, which suggests different possible approaches.

Consider one of the problems that psychologists use to study insight:

You enter a room in which two strings are hanging from the ceiling and a pair of pliers is lying on a table. Your task is to tie the two strings together. Unfortunately, though, the strings are positioned far enough apart so that you can’t grab one string and hold on to it while reaching for the other. How can you tie them together?

(The answer is below the following picture if you want to take a second and try to figure it out.)

pliers-problem

The trick to this problem is to stop thinking about the pliers as pliers and instead think of them as a weight. (This is sometimes called overcoming functional fixedness.) With that realization in hand, just tie the pliers to one string and set it swinging. If you stand by the other string, the swinging pliers should eventually come back within reach, and then you can tie the two strings together.

In this case, the insight is changing the representation from pliers-as-tool-for-gripping to pliers-as-weight. More support for this view comes from another famous insight problem.

You are given the objects shown: a candle, a book of matches, and a box of tacks. Your task is to find a way to attach the candle to the wall of the room, at eye level, so that it will burn properly and illuminate the room.

candle-problem

The key insight in this problem is that the box the tacks come in is not just for holding tacks, but can be used as a mount, too — again, a change in representation.

solved-candle-problem

In fact, the rate at which people solve this problem depends on how it’s presented. If you put people in a room with the tacks in the box, they’re less likely to solve it than if the tacks and box are separate.

The way we frame problems makes them more or less difficult. Insight is the spontaneous reframing of a problem. This suggests that we can increase our general problem-solving ability by actively generating new ways to represent and think about a problem — different points of view. There are a few ways to accomplish this. Translating a problem into another medium is a cheap way of producing insight. Creating a diagram for a math problem, for example, can be enough to make the solution obvious, but we need not limit ourselves to things we can draw. We can ask ourselves, “How does this feel in the body?” or imagine the problem in terms of a fable.

Further, we can actively retrieve and create analogies. George Pólya, in his How to Solve It, writes (paraphrased), “You know something like this. What is it?” The history of science, too, is filled with instances of reasoning by analogy. Visualize an atom. What does it look like? If you received an education anything like mine, you picture a miniature solar system, with electrons orbiting a nucleus. That’s not really what an atom looks like, but the image has stuck with us by way of Rutherford.

Indeed, we can often gain cheap insights into something by borrowing the machinery from another discipline and thinking about it in those terms. Social interaction, for instance, can be thought of as a market, or as the behavior of electrons that think. We can think of the actions of people in terms of evolutionary drives, as those of a rational agent, and so on.

This perhaps explains the ability of some scientists to contribute original insights across disciplines. I’m reminded of Feynman’s work on the Connection Machine, where he analyzed the computer’s behavior with a set of partial differential equations — something natural for a physicist, but strange for a computer scientist, who thinks in discrete rather than continuous terms.

Incubation

We can think of problem solving like a walnut, a metaphor that comes to me by way of Grothendieck. There are two approaches to cracking a walnut. We can, with hammer and chisel, force it open, or we can soak the walnut in water, rubbing it from time to time, but otherwise leaving it alone to soften. With time, the shell becomes flexible and soft and hand pressure alone is enough to open it.

The soaking approach is called incubation: the act of letting a problem simmer in your subconscious while you do something else. I find difficult problems easier to tackle after I’ve left them alone for a while.

The science validates this phenomenon. A 2009 meta-analysis found significant positive effects of incubation on problem-solving performance, with creative problems receiving more of a boost. It also found that the more time spent struggling with a problem beforehand, the more effective incubation was.

Sleep

Keep your subconscious starved so it has to work on your problem, so you can sleep peacefully and get the answer in the morning, free.
—Richard Hamming, You and Your Research

sleep-doubles-insight

A 2004 study published in Nature examined the role of sleep in generating insight. The researchers found that sleep, regardless of time of day, doubled the number of subjects who came up with the insight solution to a task. (Presented graphically above.) The effect was only evident in those who had struggled with the problem beforehand, so it was the combination of struggling followed by sleep, and not sleep alone, that boosted insight.

The authors write, “We conclude that sleep, by restructuring new memory representations, facilitates extraction of explicit knowledge and insightful behaviour.”

The Benefits of Mind Wandering

Individuals with ADHD tend to score higher than neurotypical controls on laboratory measures of creativity. This jibes with my experience. I have a cousin with ADHD. He’s a nice guy. He likes to draw. Now, I’ve never broken out a psychological creativity inventory at a family reunion and tested him, but I’d wager he’s more creative than normal controls, too.

There’s a good reason for this: mind-wandering fosters creativity. A 2012 study (results pictured below) found that almost any sort of mind-wandering will do, but the kind elicited during a low-effort task was more effective than even doing nothing at all.

benefits-of-mind-wandering

This, too, is congruent with my experience. How much insight has been produced while taking a shower or mowing the lawn? Paul Dirac, the Nobel Prize-winning physicist, would take long hikes in the woods. I’d bet money that this was prime mind-wandering time. I know that walking without a goal is often a productive intellectual strategy for me. Rich Hickey, inventor of the Clojure programming language, has taken the best of both worlds — sleep and mind-wandering — and combined them into what he calls hammock-driven development.

But how does it work?

As is often the case in the social sciences, there is little consensus on why incubation works. One possible explanation, as illustrated by the Hamming quote, is that the subconscious keeps attacking the problem even when we’re not aware of it. I’ve long operated under this model and I’m somewhat partial to it.

Within cognitive science, a fashionable explanation is that during breaks we abandon ineffective approaches. The next time we view the problem, then, we are prone to try something else. There is something to this, I feel, but some sources go too far when they propose that this is all incubation consists of. I have noticed significant qualitative changes to the structure of my own beliefs that occur outside of conscious awareness. Something happens to knowledge when it ripens in the brain, and forgetting is not all of that something.

In terms of our initial graph, I have a couple of ideas. We still don’t have a great grasp on why animals evolved the need to sleep, but it seems to be related to memory consolidation. Note, too, the dramatic change that thought processes undergo on the edge of sleep and while dreaming. This suggests that there are certain operations, certain nodes in our search graph, that can only be processed and accessed during sleep or rest. Graphically, it might look like:

graph-change-during-sleep

This could be combined with a search algorithm like tabu search. During search, the mind makes a note of where it gets stuck. It then starts over, but uses this information to inform future search attempts. In this manner, it avoids getting stuck in the same way that it was stuck in the past.
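Here is a minimal sketch of the tabu idea in Python; the objective function and the one-dimensional neighborhood are invented for illustration, not meant as a model of the brain:

```python
def score(x):
    # Hypothetical objective with a local peak at 10 and the real peak at 30.
    return max(20 - 2 * abs(x - 10), 40 - 2 * abs(x - 30))

def neighbors(x):
    return [x - 1, x + 1]

def tabu_search(start, iterations=60, tabu_size=10):
    """Hill-climbing with a short memory of recently visited states (the
    tabu list). Because revisiting is forbidden, the search must sometimes
    accept worse moves, which lets it march out of the local peak at 10."""
    current = best = start
    tabu = [start]
    for _ in range(iterations):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = max(candidates, key=score)  # best non-tabu move, even if worse
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)  # forget the oldest entry
        if score(current) > score(best):
            best = current
    return best

print(tabu_search(0))  # 30 -- plain hill-climbing would have stopped at 10
```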

Problem Solving Strategies

It is really a lot of fun to solve problems, isn’t it? Isn’t that what’s interesting in life?
—Frank Offner

There may be no royal road to solving every problem with ease, but that doesn’t mean that we are powerless in the face of life’s challenges. There are things you can do to improve your problem solving ability.

Practice

The most powerful, though somewhat prosaic, method is practice: figuring out the methods that other people use to solve problems and mastering them, adding them to your toolkit. For mathematics, this means mastering broad swathes of the stuff: linear algebra, calculus, topology, and so on. For those in different disciplines, it means mastering different sorts of machinery. Dan Dennett writes about intuition pumps in philosophy, for instance, while a computer scientist might work through complexity theory or algorithmic analysis.

It is, after all, much easier to solve a problem if you know the general way in which such problems are solved. If you can retrieve the method from memory instead of inventing it from scratch, well, that’s a big win. Consider how impossible modern life would be if you had to reinvent everything, all of modern science, electricity, and more. The discovery of calculus took thousands of years. Now, it’s routinely taught to kids in high school. In terms of imagery, we can think of solving a problem from scratch as a complicated graph search, while retrieving a method from memory as a look-up in a hash table. The difference looks something like this:

solve-versus-retrieve
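To put the same contrast in code, here is a toy sketch in which unscrambling an anagram stands in for a problem and a three-word "dictionary" stands in for accumulated knowledge:

```python
from itertools import permutations

WORDS = {"listen", "silent", "enlist"}  # toy stand-in for a lifetime of knowledge

def solve_from_scratch(letters):
    """Effortful search: try every arrangement until one is a word."""
    for p in permutations(letters):
        candidate = "".join(p)
        if candidate in WORDS:
            return candidate

solutions = {}  # the hash table of already-solved problems

def solve_by_retrieval(letters):
    """Cheap lookup: search the first time, then simply remember."""
    key = "".join(sorted(letters))
    if key not in solutions:
        solutions[key] = solve_from_scratch(letters)  # pay the search cost once
    return solutions[key]

print(solve_by_retrieval("tsilen"))  # searches the permutations...
print(solve_by_retrieval("nlsite"))  # ...retrieved instantly from memory
```

The search cost grows factorially with the problem size; the lookup stays constant. That asymmetry is the whole argument for mastering other people's methods.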

All of this is to say that it’s very important that you familiarize yourself with the work of others on different problems. It’s cheaper to learn something that someone else already knows than to figure it out on your own. Our brains are just not powerful enough. This is, I think, one of the most powerful arguments for the benefits of broad reading and learning.

Mood

Moods can be thought of as mental lenses, colored sunglasses, that encourage different sorts of processing. A “down” mood encourages focus on detail, while an “up” mood encourages focusing on the greater whole.

Indeed, multiple meta-analyses suggest that those in happier moods are more creative. If you’ve ever met someone who is bipolar, you’ll have noticed that their manic episodes tend to look a lot like the processing of highly creative individuals. As someone once told me of his manic episodes, “There’s no drug that can get you as high as believing you’re Jesus Christ.”

This suggests that one ought to think about a problem while in different moods. To become happy, try dancing. To be sad, listen to sad music or watch a sad film. Think about the problem while laughing at stand-up comedy. Discuss it over coffee with a friend. Think about it while fighting, while angry at the world. The more varied states that you are in while considering your problem, the higher the odds you will stumble on a new insight.

Rubber Ducking

Rubber ducking is a debugging technique famous in the programming community. The idea is that simply explaining your problem to another person is often enough to trigger the eureka moment. In fact, the theory goes, you don’t even need another person. It’s enough to explain the problem to a rubber duck.

I have noticed this a number of times. I’ll go to write up some problem I don’t understand on StackOverflow and then, bam, the answer will punch me in the face. There is something about describing a problem to someone else that solidifies understanding. Why do you think I’m going through the trouble of writing all of this up, after all?

The actual science is a bit mixed. In one study, describing current efforts on a problem reduced the likelihood that one would solve the problem. The theory goes that this forces one to focus on easy-to-verbalize parts of the problem, which may be irrelevant, and thus entrenches the bad approach.

In a different study, though, forcing students to learn something well enough to explain it to another person increased their future performance on similar problems. A number of people have remarked that they never really understood something until they had to teach it, and this may explain some of the success of the researchers-as-teachers paradigm we see in the university system.

Even with the mixed research, I’m confident that the technique works, based on my own experience. If you’re stuck, try describing the problem to someone else in terms they can understand. Blogging works well for this.

Putting it All Together

In short, then:

  • Problem solving can be thought of as search on a graph. You start in some state and try to find your way to the solution state.
  • Insight is distinguished by a change in problem representation.
  • Insight can be facilitated by active seeking of new problem representations, for example via drawing or creating analogies.
  • Taking breaks while working on a problem is called incubation. Incubation enhances problem-solving ability.
  • A night’s sleep improves problem solving ability to a considerable degree. This may be related to memory consolidation during sleep.
  • Mind-wandering facilitates creativity. Low effort tasks are a potent means of encouraging mind-wandering.
  • To improve problem solving, one should study solved problems, attack the problem while in different moods, and try explaining the problem to others.

The Science of Habit

The truth is that everyone is bored, and devotes himself to cultivating habits.
—Albert Camus, The Plague

To my perpetual dismay, I’m not a rational agent with limitless willpower. I’m not every moment brimming with novel insight and original computation. No. I’m a habit machine, a behavior-executor, on autopilot — a creature of habit. (Or “habbit,” for illiterate googlers.) I do the things that I do because that’s how I’ve done them in the past.

How horrible — but no, habits are adaptive. They are a good thing. Don’t believe the popular wisdom. We’re habit machines. Embrace it. Without habit, you would have to think through all the small things — will I have coffee with breakfast? Should I brush my teeth before or after showering? How do I tie my shoes? Ad infinitum.

Something like this does happen with Parkinson’s patients. The disease damages regions key to habit formation — the basal ganglia and company. This interference results in sufferers performing poorly on a number of laboratory tasks. Less habity-ness than a healthy brain: not positive, not a good thing, not beneficial.

Too much habity-ness is a problem, too. The drugs used to treat Parkinson’s can lead sufferers to develop gambling or sex addictions. Some of the symptoms of OCD look an awful lot like problems with habit — repetitive thoughts, urges to engage in certain rituals, grooming behaviors (hand washing), and more. (Wikipedia lists hair-pulling as a symptom of OCD. I dated a chick with OCD once and she would pull out her hair, so I can very scientifically confirm the truth of this.) The compulsions of OCD are the result of taking the “force” in “force of habit” and amplifying it.

There is a habit spectrum, with those who have trouble establishing habits — Parkinson’s disease patients — on one end and those who form habits too easily — OCD — on the other. In fact, Lally et al. found significant individual variation in habity-ness: for a habit to reach peak automaticity took subjects anywhere from 18 to 254 days, with a median of 66 days.

We can visualize this as a probability distribution, in which it takes most people around 66 days to establish a new habit, but with significant variation. The tails of the distribution are characterized by pathology, e.g. OCD and Parkinson’s.

habit-curve

Why Should I Care?

Human behavior is like a natural disaster, an avalanche or a forest fire. You can nudge the Titanic, schedule a controlled burn, and build avalanche barriers, but that’s about the extent of it. These are the equivalent of establishing the right habits during periods of high motivation and control. With consistent nudging, you can set yourself onto a new path.

Consider the man gracing the one-hundred-dollar bill, Benjamin Franklin. He was interested in cultivating virtue — contrast that with our modern obsession with personality — and developed a system for doing so, writing in his autobiography, “the contrary habits must be broken, and good ones acquired and established.”

He created a weekly chart, marking it “by a little black spot” whenever he failed to live up to one of his 13 virtues. On any given week, he would focus on just one of the virtues. Of his system, he writes, “I was surprised to find myself so much fuller of faults than I had imagined; but I had the satisfaction of seeing them diminish.”

This is all to say that the essence of a man is, in some sense, what he does out of habit. A mathematician is defined by his habit of doing math, a programmer by his habit of programming, and a writer by his habit of writing. To be a kind person, be kind out of habit. To the extent that enduring personality can be shaped and modified, habit is the way.

Consider competence. Excellence in anything is the result of practice. How does one chew through a mountain of practice? Out of habit. If you develop a habit of setting aside a few hours each day to push through your boundaries, this habit will propel you to excellence. This is what the development of expertise looks like. It looks like a habit of waking up at 5 in the morning to do laps in the pool.

What is a Habit?

My friends were wise men of the first rank, and we found the problem soon enough: coffee wanted its victim.
—Honoré de Balzac, The Pleasures and Pains of Coffee

A habit is an automatic behavior, repeated often. There is often a cue that prompts it. I have a coffee habit, triggered by sleepiness, waiters asking if I would like coffee, the smell of coffee, reading about coffee and, as I’m just now discovering, also writing about coffee. The caffeine barricades my adenosine receptors and releases a flood of dopamine, reinforcing the behavior. A habit is born.

Nervous habits, such as stroking the neck, work the same way. These self-soothing gestures are cued by internal feelings of distress, which launch the behavior (neck rubbing). The reinforcement is the resulting decrease in distress.

Cigarette smoking works in much the same way. Many people smoke when they wish to relax, so the habit can be cued by internal feelings of tension. The cue triggers getting out a cigarette and smoking it, which provides a hit of nicotine. The nicotine acts on the brain’s reward system, which reinforces the behavior. (Nicotine’s role in encouraging habit formation is part of why smoking can be so difficult to quit.)

However, this process is not carved in stone. There are some habits which don’t have clear cues or rewards. As part of writing this, I set my phone to buzz at random intervals during the day, at which point I’ve been reciting the poem “Invictus.” There’s no clear reward, but habit formation has been chugging along nonetheless.

As my Invictus example implies, there are mental habits, too, and they function in the same way. Mention Southern Illinois University and my mother — without fail — will say, “Go Salukis!” (her alma mater’s mascot). When I hear someone say “Turn it up,” a Filip Nikolic remix of “Bring the Noise” hijacks the helm of my consciousness and steers it to the melody of that beat.

Somewhat troubling is the realization that most of our thought is not internally generated, but scripts that run as a result of external cues. Patients presenting with transient global amnesia are a dramatic example of this. Unable to commit anything to long term memory, they continue to execute the same loop of behavior, repeating the same conversations over and over. Radiolab has great coverage of one case in their “Loops” episode.

Habit Formation

When a habit is first being formed, it consists of deliberate, effortful, goal-directed activity. This is supported by brain scans, which show activity in the prefrontal cortex — the front of the brain, sometimes called the seat of reason. For those familiar with dual-process theory, this is system 2 behavior.

In the beginning stages, behavior is flexible. Each act of the behavior in question can be thought of as an original (and thus effortful) computation.

As the behavior is repeated, it becomes less effortful, and brain activity begins to shift. Activity in the prefrontal cortex dies down and activity moves into lower, more central brain regions — mainly the basal ganglia. The behavior itself becomes less flexible. The process is much like the life-cycle of a clay bowl: first, wet clay is malleable, but — once fired in a kiln — it hardens and is ready for use.

Habit formation as a progression from effortful search to retrieving one path.

For those comfortable with computational metaphors, we can imagine habit formation as beginning with a sort of graph search — trying to find the sequence of actions that leads to some reward, like alpha-beta search in a chess engine. With time, the brain notices when one path is retrieved over and over. It “saves” that path and executes it in the future, avoiding a whole lot of computation, but at the cost of flexibility. The graph search corresponds to activity in the prefrontal cortex, while the saved path is executed by the basal ganglia.
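Here is that story as a sketch in code. The "city" and its routes are invented, and the cache is a cartoon of the basal ganglia, not a model of it:

```python
def search(graph, start, goal, path=None):
    """Deliberate, effortful depth-first search -- the prefrontal cortex's job."""
    path = (path or []) + [start]
    if start == goal:
        return path
    for nxt in graph.get(start, []):
        if nxt not in path:
            found = search(graph, nxt, goal, path)
            if found:
                return found
    return None

route_cache = {}  # the "basal ganglia": saved action sequences

def commute(graph, home, work):
    if (home, work) not in route_cache:  # novel problem: slow, flexible search
        route_cache[(home, work)] = search(graph, home, work)
    return route_cache[(home, work)]     # habit: fast, rigid replay

city = {"home": ["main st", "side st"], "main st": ["work"],
        "side st": ["work"], "work": []}
print(commute(city, "home", "work"))  # searches, then saves the route
city["main st"] = []                   # road closed!
print(commute(city, "home", "work"))  # the habit replays the stale route anyway
```

The stale second answer is the point: the saved path is cheap precisely because it no longer consults the world.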

The Progress of Habit

The interplay between automaticity and repeated behavior gives us a visual of habit formation over time.

The picture above is what an activity looks like over time as it solidifies into a habit. It starts hard and effortful. With each execution of the behavior, it becomes more automatic and natural, until it reaches an asymptote. At this point, it levels off and has become a bona fide habit.
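The shape of that curve is easy to write down. Here is a minimal sketch assuming a simple exponential approach to an asymptote; the parameters are illustrative, chosen so that the habit is about 95% formed around Lally et al.'s median of 66 days:

```python
import math

def automaticity(day, asymptote=100.0, rate=0.045):
    """Automaticity climbs quickly at first, then levels off.
    (Illustrative parameters: roughly 95% of peak by day 66.)"""
    return asymptote * (1 - math.exp(-rate * day))

for day in (1, 7, 30, 66, 254):
    print(f"day {day:3d}: {automaticity(day):5.1f}% of peak automaticity")
```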

Cultivating Good Habits

Thus far, the discussion has been theoretical. We are interested in habits, though, in what they can do for us. We would like to cultivate the right sort of habits in order to become the person that we would like to be.

The science suggests a few guidelines.

  • First, everything becomes easier with practice. This alone is motivating.
  • To cultivate a habit, do that thing as often as possible. The time to establish new habits is when you’re in a high-motivation state.
  • Create some cue to prompt the habit. A cell phone alarm is good for this. You can use the TagTime Android application to set up random pinging throughout the day (see the sketch after this list).
  • After the good habit has been executed, reward it in some way. M&M’s are a popular reinforcer, but even positive self-talk can be effective. Some people even use nicotine.
  • There are a number of productivity tools that make habit formation easier, like HabitRPG, chains.cc, Pomodoro timers, and BeeMinder.
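For the random-pinging cue mentioned above, here is the spirit of the thing in a few lines; this is a sketch, not TagTime's actual algorithm. Exponentially distributed gaps make the pings memoryless, so you can never anticipate the next one, and the 45-minute mean is just an illustrative default:

```python
import random
import time

MEAN_GAP_MINUTES = 45  # illustrative default; tune to taste

def ping_forever():
    """Fire cues at random, memoryless intervals (a Poisson process)."""
    while True:
        gap_minutes = random.expovariate(1 / MEAN_GAP_MINUTES)
        time.sleep(gap_minutes * 60)
        print("PING! Execute the habit (recite 'Invictus', say) now.")

if __name__ == "__main__":
    ping_forever()
```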

Breaking a Bad Habit

The way that people often go about stopping a bad habit is by attempting to just “use their free will” to quit doing it. This does not often work, as evidenced by all of the people who have such difficulty with their fitness goals or quitting smoking.

If you have a specific habit you would like to stop, I would first suggest looking for resources specific to that habit. There are plenty of guides for those trying to quit smoking, for instance, though I’ll admit that when I looked around, most of them were not that compelling.

There are a few ways to tame a bad habit. The first is to understand the context of the habit. What’s the cue? What’s the reward? Once you’re able to notice this, it becomes possible to gain some measure of control over it. You can try to figure out a path to removing the cue or the reward, or even replacing it with a disincentive, like when nail-biters coat their nails in something bitter.

Alternatively, and I think this is the best option, you can establish a new habit in place of the old one: fight fire with fire. Eating junk food is a habit that many would like to stop, but that’s the wrong way of looking at things. Instead, try to eat more healthy food; the junk food will fall by the wayside. For those who wish to stop eating meat, frame it not as “stop eating meat” but as “eat more plant-based meals.”

This is the difference between approach and avoidance goals. Approach goals are framed as something you want to do, while avoidance goals are framed as something you want to avoid. Approach goals (such as “eat more vegetables”) stay energizing over time and are more likely to be achieved. Reason #45820 that the human brain is hackable: just reframing your goals as approach instead of avoidance can improve your odds of completing them.

Putting it All Together

To recap:

  • A habit is a behavior that becomes automatic and effortless with repetition.
  • Habits are important because so much of our behavior happens outside of conscious control. Developing the right habits allows us to modify who we are.
  • A habit consists of a cue which triggers a behavior which is then reinforced.
  • Habits start effortful and goal-directed, but become effortless and automatic with repetition.
  • Habitual behavior becomes less flexible over time and can be conceptualized as the migration from graph search to a fixed sequence of behavior. This is computationally cheaper.
  • There are several technologies available that can aid in habit formation.
  • To conquer a bad habit, notice what cues it, then try replacing it with a new, better habit or removing those cues.

Further Reading