An Argument Against An Argument Against Nihilism

Skepticism, while logically impeccable, is psychologically impossible, and there is an element of frivolous insincerity in any philosophy which pretends to accept it.
—Bertrand Russell

The standard argument against nihilism – the notion that life is an affair devoid of meaning – goes something like this: Nihilists don’t actually believe anything they’re saying because they still act as though the world is ordered. They still value something, like freedom from pain, and this is revealed through their actions.

The implication, then, is that nihilism is bullshit. If people believed it, they would act differently, so you need to pay no attention to nihilism, because no one really believes it anyways.

This is not a satisfying refutation. The argument relies on the notion that one’s beliefs and one’s actions need to be aligned. If Andy says that he believes your pet Burmese python, Handsome, is harmless, but is hesitant to hold him, then Andy is a fucking liar.

But that’s brain damaged. You can believe something on one level while not accepting it on another. You can believe that the odds of being attacked by a shark are nigh non-existent, but still be afraid to swim in the ocean. You can believe that you really ought to stop eating unhealthy food and keep eating it anyways. I know that the Earth is hurtling around the sun at 67,000 miles per hour, but it sure doesn’t feel like it.

The behavior of those who hold a belief doesn’t speak to the accuracy of that belief. There are a lot of stupid atheists, but that’s not evidence either way as to whether or not there is a God. There are a lot of utilitarians failing to live up to the moral standards they set for themselves, but this doesn’t mean they don’t really believe it, and whether or not someone really believes something doesn’t speak to the truth of that belief.

If humans are incapable of being perfect nihilists, this is a fact about human capabilities, not about the truth of nihilism.

How To Get Started With Anything

A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.
—Gall’s law

All life is an experiment. The more experiments you make the better.
—Ralph Waldo Emerson

The point of the post is this:
1. Try the dumbest thing that could work.
2. Start experimenting as soon as possible.

That’s it. Now I’m just going to go through examples to hammer the point home.

Get started with anything: Concrete examples

How do you train a dolphin to perform a backflip? You reward it for the right behaviors, reinforcing them, until you can chain it all together and get a backflip.

Thanks to Darwin, we know that humans are animals, too, and we know that a lot of the infrastructure our minds run on is shared with other animals. This means that a significant part of what makes you you is also what makes a chimpanzee a chimpanzee.

The takeaway, then, is that humans can be trained in a similar way to every other animal, with rewards for behavior. That’s positive reinforcement.

Okay, so here’s the scenario. You want to learn more math and intend to do this through solving math problems. You enjoy this once you get started, but you’re lazy. Your brain protests when you pull out the textbook. It just wants to watch television. So, you decide to use positive reinforcement to help reinforce studying behavior.

How do you do it? What reinforcer are you going to use? What are you going to reinforce? What if you reinforce the wrong behavior? Who’s going to dole out the rewards? Start thinking like this and you will become overwhelmed and implement nothing.

Try the dumbest thing that could work. Buy a bag of M&Ms and eat one whenever you solve a problem. If that doesn’t work, iterate and try something different.
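The M&M plan is simple enough to sketch in code. This is a toy illustration of "reward immediately after the target behavior," not a real behavioral protocol; the problem list and reward callback are hypothetical stand-ins.

```python
def study_session(problems, reward):
    """Dumbest reinforcement scheme that could work: deliver the reward
    immediately after each solved problem, not at the end of the session.

    `problems` is a list of callables returning True on success, a
    hypothetical stand-in for working through a textbook. `reward` is
    whatever you chose (eating an M&M; here, just a callback).
    """
    solved = 0
    for attempt in problems:
        if attempt():
            solved += 1
            reward()  # reinforce right away, while the behavior is fresh
    return solved
```

If M&Ms don't work, swap in a different reward and run it again. The loop is the experiment.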

Waking up in the morning

Getting out of bed in the morning is the bane of humans everywhere. What’s a guy to do? Informed by this post, you know the answer. What’s the dumbest thing that could work?

Download one of the dozens of Android alarm clock apps and try that. If that doesn’t work, iterate. Reduce caffeine in the evening or increase it in the morning (via caffeine pills). Install bright lights. Fast after dinner. Try melatonin.

Building a chess bot

Want to write a program that plays chess? It’s only overwhelming when you’re thinking: how can I write a program that wins at chess? Wrong goal! First write a program that loses at chess every time. It could pick a move at random, or always move a pawn. Then, iterate from there.
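A losing-on-purpose engine fits in a few lines. This sketch sidesteps move generation entirely: it assumes you are handed the list of legal moves (in practice a library such as python-chess produces it), and the UCI-style move strings are my assumption.

```python
import random

def random_mover(legal_moves, rng=random):
    """The program that loses at chess: pick any legal move at random.
    `legal_moves` is assumed to be a non-empty list of moves; producing
    that list is a chess library's job, not this sketch's."""
    return rng.choice(legal_moves)

def pawn_pusher(legal_moves):
    """The other dumb strategy from the text: move a pawn if you can.
    Assumes UCI-style strings like "e2e4", where white's pawns start
    on rank 2. Falls back to a random legal move."""
    pawn_moves = [m for m in legal_moves if m[1] == "2"]
    return random.choice(pawn_moves or legal_moves)
```

Either one plays a full, terrible game of chess. Now you have something to iterate on: replace random choice with "prefer captures," then with a one-ply evaluation, and so on.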

Learning math

Maybe you want to learn more math, but you don’t know where to start. Doesn’t matter. Go find a book about math and start reading, or start working through Khan Academy, or watch some video lectures. Don’t like it? Find another book or something else. Keep experimenting.

Memorizing stuff

It seems like a lot of people have trouble getting started with Anki. They wonder: what should I memorize? What should I use this for? It’s a hard question, so they get stuck. Waste of time. Just add anything that you want to remember or learn. Keep adding, keep experimenting. You’ll figure out what works as you go along.

By the way, don’t miss the writeup of my experience memorizing more than 10,000 flashcards with Anki.

Exercise

In general, a little bit of data is going to be more enlightening than just thinking about it. Maybe you want to start exercising more, but you’re not sure whether you want to run or lift weights. Go out and start running. Don’t like it? Okay, try something else.

The alternative is that you spend a bunch of time googling and trying to figure out which is better for you or which you think you’ll enjoy more. Don’t worry about it. Just go try something. See what sticks.

Being a Good Person Does Not Depend On Perfection

The difficulty lies, not in the new ideas, but in escaping from the old ones, which ramify, for those brought up as most of us have been, into every corner of our minds.
—John Maynard Keynes

Ah, being a good person. Consider the following.

A man who only refrains from murdering people most of the time will not be considered a good man. He’s a murderer, even though he doesn’t always murder the people he meets. Slip up one time and bam, you’re a murderer. In contrast, saving one life isn’t enough to make a man a saint.

The point is this: in general, to be a bad person, you only have to be bad some of the time, but to be a good person, you have to be good all of the time. Consider: you can be regarded as a thief even if you do not usually steal, but to be regarded as an honest man you can never steal. To be faithful to your wife means that you are faithful all of the time, while you only have to be unfaithful some of the time to be regarded as unfaithful.

There is an asymmetry here, then. To be good requires perfect goodness, while being bad does not require perfect badness.

This is absurd. Abandon the notion that you need to be perfectly good all of the time. It’s impossible. You need a healthier relationship with the good or you’ll never be able to think straight.

What does this have to do with thinking straight? Most people believe themselves to be good people. This is part of their identity. As I’ve pointed out above, this entails — usually implicitly — that they are perfectly good, or pretty close. If they are confronted by a new idea about what it means to be good, then, and they do not conform to that idea, they will be motivated to reject that idea because it threatens their self-image.

Speaking to someone about renunciation is like hitting a pig on the nose with a stick. He doesn’t like it at all.

—Tibetan proverb

Let’s make it concrete. When I talk with people and point out that a harm of omission is still a harm, they don’t like this at all, even though it’s pretty straightforward. Here are a few scenarios:

  • A man is going to die unless you press a button. Is it good to press the button?
  • A man is drowning. You can save the man. Is it good to save the man?
  • A man is starving. You can afford to feed the man. Should you feed the man?
  • A man will die of malaria in Africa because he cannot afford an insecticide-treated mosquito net. You could, instead of spending $20 at Starbucks each week, donate to the Against Malaria Foundation and save the man. Should you save the man?

The answer to all of these is yes. If it’s not mind-numbingly obvious to you, you are confused. Seriously. There’s nothing to explain. It’s better to save people than to not save people, even if you have to go without your latte.

The trouble with allowing for harms of omission is that it doesn’t allow you to preserve the notion that to be good means you are perfectly good. If you define being good as not actively harming others, being perfectly good is manageable. If failing to help someone counts as harming, it’s no longer possible to be perfectly good.

Most people respond by arguing against harms of omission. Not because this is the weak link in the chain, but because it’s right there in consciousness, while intuitive beliefs about goodness requiring perfection are lurking in the background.

If you abandon the notion that being a good person requires perfect goodness, accepting that perfect goodness is too exacting a standard for any of us, most of the motivation to reject harms of omission disappears.

Let’s go even further. Let’s say that to be a good person, you have to be perfectly good. We then come to a choice: either you can define good such that it is possible for people to be perfectly good, or you can accept that none of us can be called good. But this is missing the point!

Why do we care about what is good? What’s the point of being good? It’s action. It’s to go out there in the world and improve it. It’s not about labels. It’s not about who’s good and who’s bad. It’s about helping.

Further Reading

  • The notion that there is an asymmetry between good and bad events is the main theme of the paper “Bad Is Stronger Than Good.” I’ve found it a useful concept when thinking about many different things, from blog comments to dog training.
  • One of the criticisms of utilitarianism is that it’s too demanding, that no one can live up to its standards. This argument appeals to the intuition that to be a good person requires perfect goodness and, as such, perfect goodness must be attainable. See here for an overview.
  • It also seems a strange criticism to argue that a normative theory is too demanding. The rules of multiplication don’t change for large numbers, even though humans have a hard time with them.
  • Paul Graham has an essay on the difficulties of thinking straight about things that are part of your identity.

Is belief a choice?

‘Snow is white’ is true if and only if snow is white.
—Alfred Tarski

Is belief a choice? Let me ruin the surprise: Yes, you get to choose what to believe. If you want to believe that you can fly, you are free to believe that. Reality, however, is a hostile place. It does not care about what you believe.

Jump from a cliff and you will not fly, no matter how much you wish it to be so. Beliefs do not change what is. They do not change what is true. You can choose to believe true things or false things, but your belief does not change what is real, what is actual.

If you choose to believe only nice things, you will end up believing many false things. Reality is not nice. There are not only nice things out there in the world. While the idea of an eternal afterlife is nice, the niceness of the idea says nothing about whether or not it is true. It would be nice if serial killers were just pretending and their supposed victims actually went to live in the tropics, but this doesn’t make it so.

I don’t mean only to attack nice beliefs as false. The opposite holds as well. Something that is painful to think is not true just by virtue of being painful to think. Negative, pessimistic beliefs are not true because of their negativity. They are true if and only if they correspond to reality. Garry Kasparov might think to himself, “I’m no good at chess,” and he might feel bad after thinking it, but that doesn’t make it true.

What is real, what is actual, what is true, all of these things are already so. You can believe whatever you like, but this doesn’t change what is already so. When people believed that the sun rotated around the earth, this didn’t make it so. Belief is a choice, but truth is not. You don’t choose what is true. It has already been decided. Belief feels like a choice because it is a choice, but do not confuse belief with truth. Believing something does not make it so. Truth already is.

Why Some Weird Beliefs Aren’t

People hold a lot of weird beliefs, but these beliefs seem a whole lot less weird once you understand the reasoning behind them. In this post, I’m going to sketch out the gist of a couple of “weird” beliefs.

The hope is that once you understand why people believe weird things, you’ll stop thinking of them as crazy and realize that they, too, are human beings just like you and me. I don’t necessarily endorse the beliefs here; they used to baffle me, but now I feel like, “I get where you’re coming from.”

Veganism

Vegans consume only non-animal products. Some vegans will still eat certain animal products, such as honey, while others abstain entirely, going so far as to boycott leather and even leather lookalikes.

People go vegan for different reasons. These are not mutually exclusive.

An argument from animal suffering

Factory farming is institutionalized cruelty on a scale that is hard to comprehend.1 Vegans who go vegan for reasons of animal suffering usually do so based on the belief that buying factory farmed meat is wrong. Specifically, they value not participating in animal torture more than the inconvenience of not eating animal products.

A lot of people will retort with, “Who cares? They’re animals,” which strikes me as a rationalization. If I came to your house and started kicking your dog, you would not like that, even though your dog is “just” an animal.

So, then, one might say: yes, but I love my dog, and I don’t love the cow I’m eating, which is fair enough. If I invited you to my house and said, “Give me a nickel or I’m going to torture this cow”, you would probably give me the nickel (or call the police), which suggests that you, yes, you place some value on preventing animal suffering.

At this point, you might argue, well, yes, I’ll pay not to see an animal tortured, but as long as I’m not aware of it, who cares? You could reason this way, but it strikes me as not all that plausible. Why should torturing a cow only be wrong when you witness it?

An argument from human suffering

Let us say that you do not value animal suffering, or that you do not value it enough such that you’re willing to change your eating habits. There are other reasons why one might choose to be vegan.

Veganism is more sustainable than factory farming.2 Meat is not an efficient source of energy. Only about a fourth of the energy from the grain that we feed cows makes it into the meat itself.3 We could maintain a higher population of happy, non-starving humans if the world was populated by vegans. That is: you might go vegan because you value other human beings not suffering, not because you care about animal welfare.

Further, livestock have a huge impact on the environment. Livestock farming is responsible for more greenhouse gas emissions than transportation.4 Given that we value all the global warming doomsday scenarios not occurring, veganism should be appealing.

An argument from personal health

Finally, one might become a vegan because they value their own personal health more than they value eating animal products.

Some people argue that veganism is not that healthy for you, that vegans are missing some of the vitamins that are mostly obtained through animal products. This is missing the point. The question should be: is the average vegan diet healthier than the average non-vegan diet? The answer is almost certainly yes.5,6 Comparing the ideal non-vegan diet to the average vegan diet is not a relevant or fair comparison.

The best individual comparison might be: will a vegan diet be healthier for me than my current diet? If so, given that you value your own health, you should be willing to consider veganism. It is, of course, possible that the costs of switching to a vegan diet outweigh how much you value the health benefits, which would imply that you should not switch.

The status quo

Another interesting question that can be posed regarding veganism is, “If you were born into a vegan society and grew up eating a vegan diet, do you really think you would choose to eat animal products?” Or even, “How much would you have to pay vegans to convince them to go back to eating meat?”

Cryonics

People who sign up for cryonics do not believe in an afterlife. If you are going to live eternally in heaven, there is no reason to freeze yourself in the hope of being resurrected in the future. (Although, if you suspect you are going to suffer eternal damnation, resurrection starts to look mighty appealing.)

Those who sign up for cryonics don’t necessarily believe that cryonics works or will work, but they do believe that there is a higher probability that they will be brought back to life by signing up for cryonics than if they don’t sign up for cryonics.

Essentially, the choice boils down to a comparison between:

  • The probability of resurrection given that you sign up for cryonics.
  • The probability of resurrection given that you do not sign up for cryonics.

Thus, signing up for cryonics seems reasonable given that:

  • You value living.
  • There is no afterlife.
  • It is more likely that you will be resurrected if you sign up for cryonics than if you don’t.
  • The expected value of signing up for cryonics outweighs the hassle of going through the process.

Existential risk

People who would never dream of hurting a child hear of an existential risk, and say, “Well, maybe the human species doesn’t really deserve to survive.”
—Eliezer Yudkowsky

With the advent of nuclear weapons, the human race now has the ability to cause destruction on a scale that was not possible in the past, including self-destruction. Unchecked global warming could destroy our biosphere, or an especially virulent bioweapon could kill everyone.

If we extrapolate from the past trend of technology toward more and more control over the environment, the future seems to hold even more dangerous technological advances (e.g. nanotechnology and grey goo). Nick Bostrom offers the analogy of the scientific process as pulling random technologies from an urn of possible technologies.7 You never know when you will stumble on something terrible.

The study of existential risks is the study of events that could cause human extinction. Those in the field estimate that there is a significant chance that humanity will not survive this century. One survey of experts placed the probability at 19%.8

So, given that you value your own life and those that you love, you should value reducing the threat of human extinction.

Future generations

Wiping out the human race not only kills all of those who are living now, but it would also mean that future humans will never be born. In essence, human extinction means that they never get the chance to live.

Most of the people interested in existential risk believe that future humans have some moral significance. Even if we assume that only one billion humans can live on the earth sustainably, and that the earth will remain habitable another billion years, then \( 10^{16} \) future lives (one billion people at a time, times ten million hundred-year lifetimes) would be lost if the human race destroys itself.7

If you multiply a small reduction in existential risk by the number of future human lives (an expected value calculation), you get staggering numbers. Reducing the risk of extinction by a tenth of a percent is worth \( \frac{1}{1000} \times 10^{16} = 10^{13} \), or ten trillion, future lives.
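The arithmetic behind these staggering numbers is a few lines of Python:

```python
# Expected-value calculation for existential risk reduction,
# using the figures from the text.
people_sustained = 1e9   # one billion people on earth at a time
habitable_years = 1e9    # earth habitable for another billion years
lifespan = 100           # years per human life

future_lives = people_sustained * (habitable_years / lifespan)  # 1e16

risk_reduction = 0.001   # a tenth of a percent
expected_lives_saved = risk_reduction * future_lives            # 1e13
print(f"{expected_lives_saved:.0e} future lives")  # prints "1e+13 future lives"
```

Ten trillion expected lives from a tenth of a percent: the scale of the future dominates the smallness of the probability.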

Work on existential risk seems reasonable, then, given that:

  • There is a significant risk of human extinction.
  • Whatever action does the most good is best (e.g. saving the most lives).
  • Future human lives have some worth.

We’re living in a computer simulation

The simulation argument holds that at least one of the following is true:

  • We will go extinct before developing the technological capability necessary to run simulations of entire worlds.
  • Humans will choose not to run simulations of worlds once we have the technology necessary to do so.
  • We are living in a computer simulation.

There are also a few assumptions:

  • Human-level intelligence is substrate independent, meaning that it could be implemented on things other than brains, such as computer hardware.
  • Intelligence does not consist of some supernatural life force, like a soul.

If civilizations similar to ours inevitably go extinct, then there is no reason to believe that we are being computer simulated. After all, who would be simulating us?

If civilizations similar to our own don’t go extinct and do invariably reach greater levels of technological attainment, such that they can simulate worlds like this one, they either choose not to simulate people (perhaps believing it immoral) or we are living in a simulation.

Why is this so? An advanced civilization would be able to simulate many possible worlds given the amount of computational power it would control. The number of simulated worlds, then, is some large number \( N \), and the number of real, non-simulated realities is one. The probability that we just happen to be the non-simulated civilization is \( \frac{1}{N + 1} \). The more simulations a civilization would choose to run, the more likely it is that we are, right now, in a simulation.
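The \( \frac{1}{N + 1} \) claim is easy to make concrete. A minimal sketch, assuming all \( N \) simulations are subjectively indistinguishable from the one real world:

```python
def p_real(n_simulations):
    """Probability that we are the one non-simulated world, out of
    N simulated worlds plus one real one, all of which look the
    same from the inside."""
    return 1 / (n_simulations + 1)

def p_simulated(n_simulations):
    """Complement: probability that we are one of the simulations."""
    return n_simulations / (n_simulations + 1)
```

With even a thousand simulations, `p_simulated(1000)` already exceeds 99.9%; the argument's force comes entirely from how large N would plausibly be.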

If advanced civilizations do not run such simulations or do not have the capability to run the simulations, then the probability that we are currently being simulated is near zero.

So, the reasoning for people who believe that we are currently living in a computer simulation is something like this:

  • Human-like civilizations tend to attain a level of technological advancement such that simulating entire worlds is possible.
  • These civilizations choose to run many simulations.
  • We are almost certainly living in a simulation.

Singularitarianism

Singularitarianism is the idea that the future is going to look drastically different than the present, and that it’s going to happen very quickly. At the core of singularitarianism is the idea that change, technological progress, is accelerating. Things are improving more and more rapidly.

One common example is strong artificial intelligence. That is: machines that are smarter than humans. If a human can build a machine that is smarter than a human, then this machine should be able to build a machine even smarter than itself, and so on, culminating in something different than whatever we can imagine.

Most singularity-type ideas revolve around smarter-than-human intelligence, but this isn’t essential. You might believe that more technological progress enables even more progress and that this keeps compounding on itself, such that technology improves at faster and faster rates.

So, for example, a singularitarian might think of all the progress that has been made in the past 100 years, and posit that a similar amount of progress will be made in the next ten years. This would become more and more compressed, such that the next ten years after that might encompass something like a thousand years of progress or more. Envisioning such changes becomes impossible. What does another thousand years of progress look like?

The reasoning, then, is something like:

  • The rate of technological advancement is accelerating.
  • This trend will continue into the foreseeable future, compounding on itself, leading to rapid, unimaginable change.
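The compression story can be put in toy-model terms. The tenfold-per-decade factor below is my reading of the example above (a century of progress in one decade, then a millennium in the next), not an established figure:

```python
def historical_years_of_progress(decade, base=100, speedup=10):
    """Toy compounding model: the first decade delivers `base`
    historical years' worth of progress, and each subsequent decade
    delivers `speedup` times as much as the one before it."""
    return base * speedup ** (decade - 1)
```

Under this model, decade 1 packs in 100 historical years of progress, decade 2 packs in 1,000, and decade 5 packs in a million, which is exactly the "impossible to envision" regime the text describes.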

Polyamory

Most educated people are, these days, pro gay marriage, but if you suggest that people ought to be allowed to marry more than one person, this is crossing a line. Most of the reasoning for allowing gay couples to marry, however, also applies to marrying multiple partners.

Of course, you can be polyamorous without marrying multiple people, e.g. an “open relationship.”

Most of the arguments against open dating (ignoring religious concerns) center around the issue of jealousy. According to at least some people in open relationships, jealousy is less of an issue than you might at first think, with a couple of people reporting here that it is a non-issue.

Or, if jealousy is an issue, those in polyamory communities or who engage in open relationships figure that it is something that can be overcome. I suspect that there is a significant selection effect going on here, though, such that people who experience strong feelings of jealousy tend to never become polyamorous or, once they try it, decide that it is not for them.

All of this avoids the question, though: why be in an open relationship? Even if jealousy is a non-issue, it must have significant benefits over traditional relationships for it to be worthwhile.

Some reasons are:

  • Humans have not evolved for monogamy. There is an innate tendency to sleep around and fighting nature makes everyone miserable.
  • Having multiple partners provides many sources of social support.
  • Multiple partners have a diverse range of skills, strengths, and perspectives that can be drawn upon.
  • Polyamory is a relationship style in the same sense that being gay is a relationship style. It is a stable individual trait with a sizable genetic component, not a choice.

Further Reading

  • For more on the environmental impact of meat production, check out this Wikipedia page. For information about sustainability arguments for veganism, there’s this page.
  • What foods cause the most suffering? A stab at an answer to that question is here.
  • According to this informal survey of philosophers’ eating habits, philosophers are ten to twenty times more likely to be vegan than the general population.
  • For a critique of moral vegetarianism, there is this paper.
  • The importance of existential risk from an ethical standpoint is laid out in Derek Parfit’s Reasons and Persons. Nick Bostrom describes the reasoning in this TedX talk.
  • The book Global Catastrophic Risks covers the threats facing humanity’s continued existence.
  • The (debunked) doctrine that living organisms are fundamentally different from non-living things (due to a soul or life force) is called vitalism.
  • For a whimsical discussion of the philosophical issues regarding artificial intelligence (and much more!), check out Hofstadter’s Gödel, Escher, Bach.
  • The original and more thorough treatment of the simulation argument is covered here. There are a number of resources — papers, interviews, an FAQ — related to the idea here.
  • Here is a far less artificial discussion of polyamory (given that it’s written by polyamorists in the wild, something I’m not).

Sources


1. Given the conditions of factory farms (Wikipedia has the most neutral article I could find), it is hard to imagine how this could be otherwise.

2. Pimentel, David, and Marcia Pimentel. “Sustainability of meat-based and plant-based diets and the environment.” The American Journal of Clinical Nutrition 78.3 (2003): 660S-663S.

3. Singer, Peter. Practical ethics. Cambridge University Press, 1993.

4. Steinfeld, Henning, et al. Livestock’s long shadow. Rome: FAO, 2006.

5. Appleby, Paul N., et al. “The Oxford vegetarian study: an overview.” The American journal of clinical nutrition 70.3 (1999): 525s-531s.

6. Key, Timothy J., et al. “Mortality in vegetarians and nonvegetarians: detailed findings from a collaborative analysis of 5 prospective studies.” The American journal of clinical nutrition 70.3 (1999): 516s-524s.

7. Bostrom, Nick. “Existential Risk Prevention as Global Priority.” Global Policy 4.1 (2013): 15-31.

8. Sandberg, Anders, and Nick Bostrom. “Global Catastrophic Risks Survey.” Technical Report, Future of Humanity Institute, Oxford University (2008).

How To Spot Important Problems In The World Today

If you do not work on an important problem, it’s unlikely you’ll do important work.
—Richard Hamming, You and Your Research

How can you distinguish important problems from those which aren’t? A problem’s importance is determined by the amount of good that work on it produces.

What’s Good?

On all plausible theories, everyone’s well-being consists at least in part in being happy, and avoiding suffering.
—Derek Parfit, On What Matters

The essence of “what is good” is the extent to which something reduces suffering and increases happiness. This is not to claim that these are the sole factors that determine goodness, but rather that any theory of the good would be incomplete without them.

More Good is Gooder

That is wise. Were I to invoke logic, however, logic clearly dictates that the needs of the many outweigh the needs of the few.
—Spock, Star Trek II: The Wrath of Khan (1982)

It’s better to save two lives than one. The more people that solving a problem helps, the more good that work does and the more important that work is.

Hard Problems

Work on impossible problems is not important. It will not lead anywhere. The human condition will not be improved. The more likely it is that work on a problem will do a lot of good, the more important that problem is. It is better to work on something where you have a 95% chance of saving a million lives than it is to work on something where you have a 5% chance of saving a million lives.

Working on something hard is not in and of itself virtuous. People working on research in pure mathematics are working on hard problems, but given how disconnected pure mathematics is from reality, just donating money to a charity is probably more important than working on the Millennium Prize Problems.

You might think, “Yeah, but sometimes pure mathematics does have important real world consequences.” I agree. This is not a real objection, though. We are again talking about the likelihood of doing a lot of good.

One could argue, for example, that you might have some kind of powerful insight into many of the world’s greatest ills by setting the record for most olives eaten in a single sitting. The chance is negligible. You would have a higher likelihood of success by working directly on solving world hunger, etc.

If Not Me, Then Who?

It has always appalled me that really bright scientists almost all work in the most competitive fields, the ones in which they are making the least difference. In other words, if they were hit by a truck, the same discovery would be made by somebody else about 10 minutes later.
—Aubrey de Grey

It is important to consider context when deciding whether or not to work on a problem. Consider two scenarios:

  1. Curing a rare disease that will save 20 lives per year. Discovering the cure is an active area of inquiry within your discipline and there is a 90% chance that someone will discover a cure within the next year regardless of your contribution.

  2. Curing a rare disease that will save 5 lives per year. The disease is absent from the academic literature and most researchers have no idea that it exists. Those that do know of its existence are not interested in finding a cure. There is less than a 1% chance that someone will discover a cure within the next year without your contribution.

You should work on the second problem.
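The two scenarios can be compared with a crude one-year expected-value model. This formalization is mine, not the author's, and it ignores everything beyond the first year:

```python
def counterfactual_lives(lives_per_year, p_solved_without_you):
    """Expected lives attributable to *your* work in a crude one-year
    model: the cure's annual value times the chance that nobody else
    would have found it without you."""
    return lives_per_year * (1 - p_solved_without_you)

crowded = counterfactual_lives(20, 0.90)    # ~2.0 expected lives
neglected = counterfactual_lives(5, 0.01)   # ~4.95 expected lives
```

Even though the first disease kills four times as many people, the second cure is worth more at the margin, because almost no one else is working on it.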

This has an odd implication. Most people believe that Isaac Newton’s discovery of calculus and Alexander Graham Bell’s invention of the telephone were important. However, these were all discovered by other people: Leibniz (among others) discovered calculus at the same time as Newton. Elisha Gray filed a patent for the invention of the telephone on the same day as Alexander Graham Bell. The famous formula \( E=mc^2 \) was chanced upon by Henri Poincaré, Olinto De Pretto, Paul Langevin and, of course, Albert Einstein.

It’s useful here to make a distinction. I’m not claiming that the discovery of calculus or the invention of the telephone were not important. Rather, my point is that the individual work of Alexander Graham Bell and Isaac Newton was less important than it first appears. The world may have been better off if they had invested energy in some other pursuit.

Our aim should be an efficient distribution of intellectual resources among problems so that we maximize the amount of good accomplished. One aspect of reducing waste is to prevent duplicated work, such as two people inventing the telephone. By taking into account the amount of work that other people are doing on a problem before you begin working on it, you can maximize the amount of difference you can make as an individual.

Recognition and Reproduction

Doing important work doesn’t always feel important and is often not recognized as such. Important work is distinct from recognition for doing important work. When we think of doing groundbreaking research, we think of Albert Einstein and how great it would be to be like him.

This is a focusing illusion. We only hear about important work when it has been recognized. Important work without recognition is invisible. We have no memory of it because we’ve never heard of it. Who knows how many important discoveries have been ignored?

A significant amount of the appeal of doing important work is connected to the social status that we expect to gain as a result of doing that work. In this sense, then, aspirations of being a great researcher are not much different from dreams of being rich and famous. Our monkey brains want desperately to maximize their reproductive fitness.

The model I have presented here is not about maximizing reproductive fitness. If that is your goal, you would be better served by donating to a sperm bank or studying seduction than by setting out to do important work. This model is concerned with doing important work regardless of whether or not one achieves recognition for it.

Would you be content with improving the world even if someone else received credit for it? If the answer is yes, then you are interested in defining important work as I’ve presented here. If not, you are interested in something else and ought to be focusing on that goal instead.

Further Reading

  • Robin Hanson explores some similar themes in this post.
  • For more multiple discoveries, like the invention of the telephone, Wikipedia has a list.
  • The idea that one should choose whatever will do the most good is standard consequentialism, which you can read more about here.
  • The dependence of a problem’s importance on the probability of solving it is an expected value calculation, which Wikipedia covers here, and is related to the concept of comparative advantage in economics.
  • The failure to take into consideration a decision’s context is termed “system neglect.” You can read more about it in this paper.
  • The observation that unrecognized things are invisible is what Donald Rumsfeld called “unknown unknowns” in a 2002 speech.
  • Daniel Kahneman has a great paper on a relevant focusing illusion, the relationship between income and happiness.
  • The intimate relationship between goals and reproductive fitness is described in this paper.