Does race exist?

No one in her left brain could reject reductionism.
—Douglas Hofstadter

Dear friend,

I read your recent response on edge.org, arguing that the concept of race ought to be retired. Race, you argued, has no place in science, being a messy concept with no clear genetic basis. You said things like — I’m paraphrasing here — “the apparent homogeneity of races is a product of environmental factors, not genetic determinism and DNA.”

Friend, when I read your ideas, I let out a hoot of delight. I have long felt the same way. Only I didn’t realize it until now. You see, friend, not only does race not exist, I’m certain that humans don’t exist, either.

I know, I know. No humans? But hear me out. It sounds absurd, but surely not so much more absurd than your own ideas at first appear. I feel like some people are White and some are Asian, but I now understand that race — invented by immoral racists — is an imprecise, messy, non-scientific notion and ought to be abandoned.

You see, friend, the concept of human is messy, too. You can’t say, “all two-legged, two-armed things are human.” There are some humans with one arm or no arms and there are chimpanzees with two arms and two legs. Messy!

But maybe this is not enough. You might demand to compare our DNA and say, “Look, humans have different DNA than chimpanzees. What a scientific comparison!” After all, that’s why you rejected race. DNA didn’t support its existence.

Ah, but friend, you have not gone far enough. DNA is not a scientific concept, either. It’s messy. You can tell it apart by its higher-level characteristics, its structure, sure, but race is the same — you tell races apart by characteristics like skin color or whether they own a Faith Hill CD. You see, with DNA, when you go down a level, down to subatomic particles, all DNA is the same. You can’t tell it apart, not scientifically.

The notion of human, then, along with the notion of DNA ought to be abandoned. The means of telling them apart — relying on subjective judgment regarding high level structure — are vague, messy, and not science. Just like you reduced race to DNA, you need to reduce humans to subatomic particles, and those are all the same. Humans, like race, can’t exist.

But that’s not all, friend. You see, I’ve been a little dishonest with you. Not only do humans not exist, neither do chairs, squash, love, or happiness and, well, anything that is made out of other things. All of these rely on unscientific categories to distinguish them, invented by confused humans, no doubt most of them racists. They’re all made out of subatomic particles and, as you know, subatomic particles are all the same.

In fact, friend, only subatomic particles and fundamental forces exist. The rest, well, as you said, it’s a “social construct.” Just as “racial skeptics see no racial patterns,” I see no patterns at all, only subatomic particles. I hope you see what violent agreement we are in. Just as “race today is best considered a belief system that ‘produces consistencies in perception and practice at a particular social and historical moment’,” chairs, squash, humans, and concepts generally are best considered a belief system that produces consistencies in perception and practice at a particular social and historical moment (invented by — like race and racists — immoral categorizers, no less!).

Polya Urn Model Dissolves The Gender War

Now listen, you queer, you stop calling me a crypto-Nazi or I’ll sock you in the goddamn face and you’ll stay plastered.
—William Buckley1

I’m not sure whether the two sexes are about to stage World War III over gender issues or if I’m in some sort of gender bubble but, for whatever reason, I’ve been hearing about gender issues daily for the past six or so weeks and, as part of a gender, I have some thoughts on it. These thoughts revolve around the notion of boys clubs and general irritation at the idea that I’m somehow in the wrong for being a manly guy who likes to do manly things, and those manly things happen to be math and computers.

You see, friend, I’m part of a number of male-dominated communities, of boys clubs, and — to my perpetual dismay — I find myself surrounded by discussions where some naive progressive yells something like, “Hey, look at all these men here. It’s all men. Where are the women? Therefore, sexism.” And I’m thinking something like, “Hey, fuck you. I’m not a fucking sexist. Take your agenda elsewhere and let me read about category theory in peace.”

But then I had an idea. What if boys clubs are like Polya’s urn? And, if you don’t know what Polya’s urn is, don’t worry, because I’m about to tell you. There’s this urn, right, a big fucking urn, and it’s got two balls in it, like one of those lottery contraptions, a blue one and a red one. Here’s the thing about this urn, though. When you draw a blue ball from it, you have to put two blue balls back in. When you draw a red ball, you put two red balls back in.

Now, the notable thing about this urn is that small changes in initial conditions lead to big changes in the long run. If your first draw is blue, it’s pretty likely that all the subsequent draws will be blue, too.
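Polya’s urn is easy to simulate. Here’s a minimal sketch (the function name and parameters are mine):

```python
import random

def polya_urn(draws, blue=1, red=1):
    """Draw from the urn `draws` times; each draw adds an extra ball
    of the drawn color. Returns the final fraction of blue balls."""
    for _ in range(draws):
        if random.random() < blue / (blue + red):
            blue += 1
        else:
            red += 1
    return blue / (blue + red)

# Run many independent urns: the long-run blue fraction varies wildly
# from urn to urn, because early draws lock in the trajectory.
fractions = [polya_urn(5_000) for _ in range(1_000)]
print(min(fractions), max(fractions))
```

For the one-blue/one-red start, the limiting fraction is in fact uniformly distributed on [0, 1], which is about as sensitive to early luck as a process can get.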

What if male-dominated sites like, let’s say Reddit, are like this? The Reddit founders were two men, Alexis Ohanian and Steve Huffman. Given that men are more likely to have male friends, they’re like the balls in Polya’s urn. They’re in the urn, convincing a friend to sign up is like picking a ball out of the urn and throwing two more back in, and Reddit’s current male-dominance is the long-run outcome of this sort of process. Men invite men, therefore men and not sexism. QED.

Hey, look Ma! My model predicts boys clubs without positing sexism anywhere.

Or, you know, there’s the whole compelling narrative that men and women have different preferences, with men more likely to be interested in things, like computers, and women more interested in people. Oh, and hey, before you go screaming “But that’s sexist. Men and women are the same in every way,” here’s some empirical validation with massive effect sizes.

1. Google searches for “quotes about men” and “quotes about women” return these pages. Both are filled with quotes about how great women are and what brutes men are.

Expert Memory: What Can Memory Experts Teach Us?

That which we persist in doing becomes easier, not that the task itself has become easier, but that our ability to perform it has improved.
—Ralph Waldo Emerson

Malcolm Gladwell dragged the notion of deliberate practice into the public lexicon with the publication of his book Outliers. In short, world class performance depends not on talent, but on thousands of hours of a special sort of practice, deliberate practice.

It’s straightforward that practice is the route to improvement of some skill. Take typing. I can type without effort. I’m not thinking about the keys or the movement right now, but instead operating at the level of sentence construction. (Sometimes I wonder if there are yet higher peaks to reach, where one only thinks in images or not at all.) My performance wasn’t always this way, though. Typing used to be a horrible, frustrating affair, and I know this because I’ll experience that frustration again if I switch to an alternate keyboard layout like Dvorak.

What makes practice deliberate?

There are a few characteristics of deliberate practice:

  • It’s effortful. If it wasn’t, everyone would do it and it would no longer separate world class performers from everyone else.
  • It’s designed to improve performance. Deliberate practice is about leaving your comfort zone and pushing your limits. It consists of taking something you don’t understand how to do, sitting down and repeating it until mastery has been achieved. It makes you feel dumb.
  • There’s feedback. You can tell whether or not you’re doing it right and correct your performance.

Daniel Coyle, who wrote The Talent Code, put it this way:

  1. Pick a target
  2. Reach for it
  3. Evaluate the gap between the target and the reach
  4. Return to step one

Automatic Plateaus

One might wonder: why do we need a form of practice different from normal practice? The answer is that performance plateaus. A man might drive his entire life, but never become as skilled as a race car driver. His performance plateaued after he learned how to drive and has not improved much since. The same is true of typing. I learned how to type long ago, but my speed capped out at about 90 words per minute and hasn’t budged since.

Generally, learning a skill seems to at first require our full attention and to be effortful and, after time, gives way to automaticity. At this point, performance plateaus and further improvement must be targeted.

Breaking Down Skills

To do the impossible, break it down into small bits of possible.

To practice deliberately, then, one ought to break a skill down into small components, each of which can be practiced, and then repeat those skills until automaticity has been achieved, at which point one can work on further refinement. This is the road to mastery.

As an example, before one can learn to program, one needs to learn a number of sub-skills, such as general computer literacy (which can further be broken down), the syntax of a programming language, familiarity with different control structures, a text editor, and so on. To write a web application, there is still more, like familiarity with how the entire stack works. You’ll probably want some knowledge of the command line, too. Before all this, one ought to be able to type, know what a computer is, read, and find information via Google.

The same is true of any skill. Improving one’s understanding of calculus, for example, at least the mechanical parts, consists of one learning to solve different forms of integrals and derivatives. Once mastery on the simpler ones has been attained, one can move on to more complex ones, multivariable calculus, and so on, leading one higher and higher on the infinite ladder that is mathematics. And, of course, there are a million other mundane skills, too, like writing and keeping work organized, noticing when you’re confused, etc.

Indeed, even all of these are at too high a level, each of which should be broken down further. You need to consider the answer to questions like: what does expertise in this field look like? How can I quantify it? What are some goals that would let me know that I’m improving? Make a checklist.

Paying Attention and Neural Reconfiguration

A man is what he thinks about all day long.
—Ralph Waldo Emerson (again)

There is an awesome post over on Less Wrong about the relationship between neural reconfiguration and attention, which ties in with the earlier discussion of automaticity. The basic idea is that your brain wires whatever it is that you pay attention to. The more often you lean on a neural structure, the more it grows.

Consider mindless practicing: sitting down with a guitar, running through a song haphazardly, missing notes like a drunk misses stop signs. In contrast, consider playing through a song with intense focus on every note and fingering. The second is going to be a whole hell of a lot more effective, and we have the science to back it up. Take a group of humans and compare brain mass based on whether or not they were paying attention during the task. This has been done.1 Attention makes the brain grow.

It’s as if there is Attention, king of the Neuronal people and, when he becomes interested in something — like mathematics — he yells to his people, “Optimize my kingdom for mathematics!” and the people build math libraries and put chalk boards everywhere.

How can one improve one’s attention?

There are a few ways I can think of to improve attention. There are stimulants, like caffeine, nicotine, modafinil, and Adderall. Beyond that, you can go meta and try to improve attention by paying attention to attention, which means — hooray! — you’ve invented Vipassana meditation, the best introduction to which is either Mindfulness in Plain English or Daniel Ingram’s Mastering the Core Teachings of the Buddha. There’s always blocking out distractions (turn off the television!), too, and setting aside time blocks when you’ll worry about only one thing, perhaps via Pomodoros.

Expert Memory, Insight and Recognition

In 2001, Anna-Maria Botsari played 1102 chess matches simultaneously, winning 1095 of the matches and drawing 7. Perhaps even more impressive, Marc Lang holds the record for simultaneous blindfold chess, having played 46 matches at once, winning 19, drawing 13, and losing 3. (Blindfold chess, for the unaware, is when one plays without a board and is forced to keep all of the positions in memory.)

I have enough trouble remembering the 7 digits of a telephone number. More than 1400 board positions? Not a chance.

Or so you might think, but it turns out that any high-ranked chess player can play blindfold chess. It’s not an innate ability, but something acquired over years of practice. These sorts of amazing feats rely on something that’s been dubbed long-term working memory.

The basic idea behind long-term working memory is that the superior memory of experts is the result of years of training, which allows one to access long-term memory in novel ways. This allows for feats like blindfold chess. (For a poignant example of this, check out the book Moonwalking with Einstein.)

The earliest evidence for this comes from de Groot’s classic study of chess recall.2 He took groups stratified by chess ability and showed them different board positions, which he later asked them to recall. The better a person was at chess, the better their recall of board positions. The more interesting result, though, is that de Groot found that this only held when the board had positions of the sort one would see in actual play. When he showed subjects randomized board positions, experts did as poorly as novices. This has been replicated a number of times in chess,3,4,5,6 bridge,7,8 go,9 music,10 field hockey, dance, and basketball,11 figure skating,12 computer programming,13 electronics,14 and physics.15

The idea behind this is chunking. An untrained individual can hold about seven (plus or minus two) numbers in short-term memory at one time. Short term memory, then, is limited, but one can get around this via chunking. Given the right structure, like a meaningful chess board position, larger chunks can be held in memory. When reading, for instance, one doesn’t hold individual letters in memory, but entire words. The letters have been chunked into words.

Imagine a machine that can only hold four concepts in memory at any one time. Thinking “Red barking dog eating” would fill all available memory, but it has a way around this — a glue operation which, while computationally expensive, allows it to glue concepts together to create a new concept. For example, it could take “barking” and “dog,” glue them together, and create a new concept, “barking dog.” Now the machine could hold “Red + barking dog + eating” in memory and still have room for one more concept.
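The four-slot machine and its glue operation can be sketched directly. Everything here (the class and method names) is my own toy model, not anything from the literature:

```python
class ChunkingMachine:
    """Toy model of a four-slot working memory with a 'glue' operation."""
    CAPACITY = 4

    def __init__(self):
        self.memory = []

    def hold(self, concept):
        """Place a concept in working memory, if there's a free slot."""
        if len(self.memory) >= self.CAPACITY:
            raise MemoryError("working memory full")
        self.memory.append(concept)

    def glue(self, a, b):
        """Fuse two held concepts into a single chunk, freeing one slot."""
        self.memory.remove(a)
        self.memory.remove(b)
        chunk = f"{a} {b}"
        self.memory.append(chunk)
        return chunk

m = ChunkingMachine()
for concept in ["Red", "barking", "dog", "eating"]:
    m.hold(concept)            # memory is now full
m.glue("barking", "dog")       # fuse into "barking dog", freeing a slot
m.hold("loudly")               # room for one more concept
print(m.memory)                # ['Red', 'eating', 'barking dog', 'loudly']
```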

I propose that this is how expert memory works, with humans having some sort of equivalent of the glue function that takes place during deliberate practice. Herbert Simon estimates that each chunk takes about 30 seconds of focused attention to create, with an expert having created somewhere between 50,000 and 1.8 million chunks — about 10 years of four hours of practice per day.16
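Simon’s numbers check out on the back of an envelope:

```python
SECONDS_PER_CHUNK = 30
practice_seconds = 10 * 365 * 4 * 3600   # 10 years, 4 hours per day
chunks = practice_seconds // SECONDS_PER_CHUNK
print(f"{chunks:,}")  # 1,752,000 -- right at Simon's upper estimate of 1.8 million
```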

From the inside, chunking feels like getting a handle on something, on having a word that compresses some larger idea, or the crystallization of some idea. At least sometimes. I suspect most instances of chunking are non-conscious.

From Whence Intuition Springeth

Experts are often distinguished by their intuition. Consider the blitz style of play in chess. Specifics vary, but in general each side has five minutes on the clock and a limit of ten seconds per turn. The conditions make it so one has to move without thought, relying on intuition.

It should be of no surprise that stronger chess players trounce weaker ones in blitz matches, but how does it work? From whence does intuition spring? The answer is long-term memory. It works sort of like this: when the brain creates a chunk, it’s saved in long-term memory. A chess master who has studied many matches has created tens or hundreds of thousands of such chunks, with each chunk being something like a board position plus which moves are strong and which aren’t. What looks like intuition is the brain pattern-matching against what it has seen before. The chess player looks at the board, similar positions and strong moves are automatically retrieved from long-term memory, and he makes one of those moves.

Insight is the fast, effortless recall of cached experience. This is memoization. Instead of computing something several times, save it in memory and look it up when you need it. I propose that the human brain works in a similar manner. When we meet with a novel experience or problem, we’re forced to use effortful computation to solve it, which is then chunked and saved in long-term memory. In the future, similar problems are solved via lookups.
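Memoization in code looks like this, using the classic Fibonacci example, where the cache plays the role of long-term memory:

```python
import functools

@functools.lru_cache(maxsize=None)
def fib(n):
    """Effortful computation the first time; instant lookup every time after."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(200))  # near-instant; the naive, uncached version would never finish
```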

The Mental Molasses Hypothesis

You have to be fast only to catch fleas.
—Israel Gelfand, Soviet mathematician

An individual neuron can fire anywhere between 1 and 200 times per second. This is sorta the equivalent of clock speed of a processor, where each neuron in the brain is a simple processor. Neurons operate at a top speed of 200 hertz, though, while a modern processor can hit speeds of nearly 4 gigahertz, or 4 billion hertz. This means that — and this is a rough comparison — a CPU is 20 million times faster than one neuron.

The difference, though, is that where a modern CPU might have between four and eight of these ultra-fast processors (and more in the future!), a brain has about a hundred billion neurons. It’s a massively parallel processor.
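The twenty-million-times figure is just division:

```python
neuron_hz = 200                  # top end of a neuron's firing rate
cpu_hz = 4_000_000_000           # a roughly 4 GHz processor
print(cpu_hz // neuron_hz)       # twenty million times faster, per core
```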

But this doesn’t do anything about serial problems, where one neuron is going to be the bottleneck. 200 serial steps — and you can’t do much in 200 steps — in the brain will take one second, and there are a whole lot of problems that can’t be parallelized. (This complexity class is called P-complete.) So what’s going on?

Jeff Hawkins answers this in his book On Intelligence:

The answer is the brain doesn’t “compute” the answers to problems; it retrieves the answers from memory.

Sound familiar? The brain is a giant cache. Sure, it computes, too, but it’s slow. Most of our thought is retrieval from long-term memory. You can even observe this during conversation, which is almost never the creation of novel thoughts, but mostly the repeating of things you’ve thought and heard before.

Putting It All Together

Rumor is that a pedestrian on Fifty-seventh Street, Manhattan, stopped Jascha Heifetz and inquired, “Could you tell me how to get to Carnegie Hall?” “Yes,” said Heifetz. “Practice!”

Putting it all together, then, humans are memory machines and expertise is a result of the amount of domain specific knowledge — chunks — that one has stored in memory. These chunks are created during deliberate practice, an effortful activity designed to improve performance, which is distinguished by requiring intense focus. This focus turns out to be a required ingredient for bringing about neural reconfiguration.

This model is nice, but how can you put it into practice? To accelerate the creation of chunks, try using Anki. Be sure to read through this great article on spaced repetition. (Roger Craig used Anki to set records on Jeopardy! Do it! This is a sign! Look at all these exclamations!) Increase the amount of deliberate practice that you engage in by taking a skill you’d like to improve, breaking down what expertise in that domain looks like, identifying your weaknesses and what you don’t know, then making a step-by-step plan for improving your skill. Ensure that you break that plan into chunks small enough that they’re no longer intimidating.

Once you have a plan worked out, set aside a couple Pomodoros each day to focus only on deliberate practice. Shut out distraction, drink some coffee or green tea, sit down and focus. (Maybe even try chewing nicotine gum.)

Once you have all that down, periodically review your training and your plan, throw out what doesn’t work, and try new things. Happy practicing!

Sources


1. Stefan, Katja, Matthias Wycislo, and Joseph Classen. “Modulation of associative human motor cortical plasticity by attention.” Journal of Neurophysiology 92.1 (2004): 66-72.

2. de Groot, Adriaan D. Thought and Choice in Chess. Vol. 4. Walter de Gruyter, 1978.

3. Frey, Peter W., and Peter Adesman. “Recall memory for visually presented chess positions.” Memory & Cognition 4.5 (1976): 541-547.

4. Chase, William G., and Herbert A. Simon. “Perception in chess.” Cognitive Psychology 4.1 (1973): 55-81.

5. Reingold, Eyal M., et al. “Visual span in expert chess players: Evidence from eye movements.” Psychological Science 12.1 (2001): 48-55.

6. Charness, Neil. “Expertise in chess: The balance between knowledge and search.” Toward a General Theory of Expertise: Prospects and Limits (1991): 39-63.

7. Charness, Neil. “Components of skill in bridge.” Canadian Journal of Psychology/Revue canadienne de psychologie 33.1 (1979): 1.

8. Engle, Randall W., and Lee Bukstel. “Memory processes among bridge players of differing expertise.” The American Journal of Psychology (1978): 673-689.

9. Reitman, Judith S. “Skilled perception in Go: Deducing memory structures from inter-response times.” Cognitive Psychology 8.3 (1976): 336-356.

10. Sloboda, John A. “Visual perception of musical notation: Registering pitch symbols in memory.” The Quarterly Journal of Experimental Psychology 28.1 (1976): 1-16.

11. Allard, Fran, and Janet L. Starkes. “Motor-skill experts in sports, dance, and other domains.” Toward a General Theory of Expertise: Prospects and Limits (1991): 126-152.

12. Deakin, Janice M., and Fran Allard. “Skilled memory in expert figure skaters.” Memory & Cognition 19.1 (1991): 79-86.

13. McKeithen, Katherine B., et al. “Knowledge organization and skill differences in computer programmers.” Cognitive Psychology 13.3 (1981): 307-325.

14. Egan, Dennis E., and Barry J. Schwartz. “Chunking in recall of symbolic drawings.” Memory & Cognition 7.2 (1979): 149-158.

15. Larkin, Jill, et al. “Expert and novice performance in solving physics problems.” Science 208.4450 (1980): 1335-1342.

16. Simon, Herbert A. The Sciences of the Artificial. MIT Press, 1996.

Future Generations Are Your Legacy, All Of Them

I have heard tell of a time in a man’s life where he begins to worry less about his own dreams and invests more in living through his children. His children will be his legacy.

This is a very real and common thing that people value: leaving behind something of some permanence, and children are one means to achieving this. I would like to suggest, though, a broader view of things, of thinking about humanity as a whole as a legacy.

First, realize that there are a whole lot of people that are sorta like you out there in the world. For one, you have a lot more in common with every single human than with any chimp; things like speech, thumbs, religion, awe, humor, and no doubt many others. Zooming in further, even if you’re one in a million, there are eight of you in London.

Stepping back a moment, do we care so much about leaving children behind or about people like us existing? I find when I try to think of compelling arguments as to why I ought to prefer myself (and people related to me) over others, I come up with nothing. Sure, I do prefer my own welfare. Evolution has guaranteed that. But as far as compelling justifications go, I’ve nothing great.

Consider: you could get hit by a bus tomorrow and it would be a tragedy, but the world would go on. People sorta like you would continue to exist. They would have thoughts not so different from your own. Hopes like yours, values like yours, feelings like yours. No doubt a fraction of them (maybe very small!) would be smarter, funnier, prettier, and kinder than you.

It seems silly to me to prefer myself — at least strongly — over people sorta like me. As long as they continue to exist, it doesn’t matter much whether or not they share half my genetic material.

When Is It OK To Break The Rules?

I propose a new way of thinking about rules. Not as something that distinguishes between what one is allowed and not allowed to do, but rather as a penalty that certain actions carry. Not moral law sent down from on high, but costs for implementing certain strategies.

Imagine the virtual city of Neebar, ruled by a horde of half-ox, half-man with a penchant for all things camel. Neebar is notable because it has a strange penal code: running water is illegal.

But it’s not that illegal. The punishment for using running water is a yearly fine of 200 Neeblorinos, roughly equivalent to American dollars, so lots of people decide to have running water anyways and pay the fine.

These citizens of Neebar have decided to implement running water despite the penalty for doing so.

Or consider sports: penalty kicks in soccer and free throws in basketball. These penalties exist not to say that some things are off limits; rather, the penalties are part of the mechanics of the game itself. The rules constrain action only insofar as the penalties constrain what one is willing to do. They act as a disincentive.

Consider a rational agent playing basketball. He realizes that he can win the game, but he will have to make an illegal move to do so. If he weighs the costs (a penalty) and the benefits (winning) and finds that the benefits outweigh the costs, he will implement that action.

More generally, the consequences of breaking a rule are costs that come along with an action, not constraints on action. The actual constraints on action are the expected outcomes of that action. If you expect that travelling in basketball will result in a net loss, you shouldn’t do it, but if it’s a net gain, you ought to do it — even though you have to pay the penalty. Just think of the citizens of Neebar.
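The agent’s calculation from the basketball example fits in a few lines. The numbers here are made up for illustration:

```python
def worth_breaking(p_win, value_of_winning, penalty):
    """Break the rule iff the expected gain exceeds the certain penalty."""
    return p_win * value_of_winning - penalty > 0

# A foul that wins the game 40% of the time, where winning is worth 100
# units of utility and the penalty costs 20: 0.4 * 100 - 20 = 20 > 0.
print(worth_breaking(0.4, 100, 20))  # True -- take the penalty
```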

Could OSX’s Spotlight Suck More? Doubt It

There was a post about a week ago about how new computer science students don’t get the Unix philosophy and the power (and great responsibility) of the command line. I don’t know these people. Most of my dev time is between Emacs, a terminal, and a web browser.

But during this discussion, or maybe in the article itself, there was an argument along the lines of: kids don’t need to learn how to use find or locate when they have Spotlight, and I nodded along, swallowing this like so much bad medicine.

Until, just moments ago, I went to open up Google Chrome and was like, hey, I’ll just use Spotlight to do it. So I tried. And I waited, and then waited a bit more, and remembered: oh, right, Spotlight is a piece of shit that never actually manages to find anything. Need to search for something? Oh, too bad, out of commission, I’m busy indexing your entire machine for the one hundredth time today and, surprise, this breaks the entire utility.

(Googling around seems to reveal this as an OS X bug fixed in a more recent update. No doubt that introduces more bugs and the Sisyphean cycle continues.)

Thoughts On The Police Body Cameras Privacy Debate

There’s a link on Reddit to the sort of story that the internet can’t get enough of: police abuse. This time, the NYPD sodomizing a black man with a plunger.

People love to hear about the abuse of power. Something along the lines of it keying into our gossip reward centers with a side of moralizing. To indulge in creating my own evo-psych just-so story: gossiping about the misdeeds of your hated rival, head-chimp Heephop, might be a way of polling public sentiment as to your chances at a successful coup. Disrupt the existing peace, topple the power structure and, hey, maybe now you’re on top. Or at least higher up.

But I digress. I want to bring your attention to the top comment, which is:

That is the reason I like the idea of all cops having to wear GoPros strapped to their chest.

To which another user responded (also highly voted):

Honest cops should be in favor of this too, because citizen complaints also drop to near zero (cops often say they get a lot of bullshit complaints from people who want to get back at them). Neither side has much of an argument when there’s video.

This brings me to the police body cameras privacy debate.

The argument, then, is something like, “Police should be recorded because of all the abuse it will prevent. If you need to know what really happened, you can look at the recording.” Or, to lead you to the point of this post, surveillance of the police is an a-okay subset of surveillance more generally.

But these same arguments apply to recording everything! Want to stop people from murdering other people? (Yes.) Record everything, everyone, everywhere. If you need to know who murdered whom, or verify an alibi, go look it up in the archive.

At this point, the discussion we ought to be having is how surveillance can be implemented effectively. People worry, rightly, that the recent NSA revelations and that sort of thing are symptomatic of expanding executive power, which enables abuse. But the question should not be how we can reverse surveillance; one might as profitably ask how to reverse the spinning of the earth on its axis. Instead, we should be thinking about questions like, “How ought surveillance be implemented? What sort of power structures are best?”

An Interesting Academic Field

I’m troubled not only by how much I don’t know, which is legion, but by how much I don’t know that I don’t know. There’s so much out there that I’m not even aware of most of my ignorance; it’s like dark matter, lurking unseen and unknown.

Today, I stumbled on an entire academic field that I wasn’t even aware existed: intellectual history, which is “the study of intellectuals, ideas, and intellectual patterns over time.” I must admit that I’m not much for history, but this seems very useful, and I’d love to get my hands on a decent textbook covering the “greatest hits” of ideas. Alas, I’ve as yet been unable to track down quite what I’m after, but this looks close.

Bill Thurston on Reading Hard Things

I was really amazed by my first encounters with serious mathematics textbooks. I was very interested and impressed by the quality of the reasoning, but it was quite hard to stay alert and focused. After a few experiences of reading a few pages only to discover that I really had no idea what I’d just read, I learned to drink lots of coffee, slow way down, and accept that I needed to read these books at 1/10th or 1/50th standard reading speed, pay attention to every single word and backtrack to look up all the obscure numbers of equations and theorems in order to follow the arguments.
—Bill Thurston, from the foreword to Teichmüller Theory and Applications to Geometry, Topology, and Dynamics

He goes on to talk about the importance of using one’s whole mind in understanding mathematics. You might want to check it out.

What Is Wisdom?

There’s an art to knowing when;
Never try to guess.
Toast until it smokes & then
20 seconds less.
—Piet Hein, “Timing Toast”

When one first learns a theory, one tends to take it a bit too seriously. I’ve heard that people who later convert to Christianity tend to be much more fervent believers than those who are raised with it, for example, or note the brain damage that first exposure to libertarianism and Ayn Rand seems to do to young people, the same with economics, or the phenomenon where people who have just taken a psychology course tend to see disorder everywhere.

These are each characterized by a lack of sophistication. It’s taking a theory, like the efficient markets hypothesis or utilitarianism, and attempting to interpret everything through that lens until you realize that something has gone very wrong, and then modifying your understanding so that it becomes more nuanced. One might abandon utilitarianism for preference utilitarianism, or realize that no, markets are not magic.

This is characteristic of what it means to be wise: not only to understand a theory, but also to understand its limitations and when it ought to be applied. A psychology student who has just learned that happier people tend to engage in positive reframing will have a bad time if they try to point out the bright side at a funeral. One who has meditated on, lived with, and been burned by a theory will not make such mistakes.

But it’s not right to say that more sophistication is better, as it can be a symptom of salvage, of belief-bandaging. Take Christian apologetics, for example, the act of trying to reconcile Christianity with all of the evidence, scientific and otherwise. This field piles excuse upon excuse, explanation upon explanation, each less believable than the last. It’s a constant stream of apologies and rationalizations for all the mistakes in the Bible.

This isn’t wisdom. This isn’t a case of a useful theory being saved by an understanding of its limits, but one where a dying belief is kept alive by scheduled transfusions of excuses. Or we can trot out the proverbial dragon-in-the-garage. Let’s say someone claims that there is a dragon in their garage but, when you ask to see it, they say that it’s invisible, and then you propose throwing flour on it to reveal the shape of the dragon, but then they say that the dragon just happens to be permeable to flour. This person is not becoming more wise thanks to the increased sophistication, but rather propping up a falsehood.

I haven’t any real solution, though, other than to warn you to watch out for excess sophistication and to quote Feynman:

The first principle is that you must not fool yourself — and you are the easiest person to fool.