Edit: This article made it to the front page of Hacker News and was read nearly 100k times in the first week. On the back of this demand, I started Conceptually, which emails you a free concept every week (but you can also just read them on the website).
Concepts change the way we think. Before we knew about evolution, we couldn't explain much of what was going on around us; for example, how there came to be such a diversity of species on Earth. We've collated a list of concepts that have changed the way we think about the world and that might be useful additions to your cognitive toolkit.
Not all of them will be useful to you – in fact, you'll find some downright useless. People seem to be at different stages in their patterns of thinking: perhaps at your stage #7 will be too obvious or inapplicable, while #26 will be just the idea you need to help answer some problems you've been thinking about. We wrote this list because we're sure us-of-the-past would have appreciated it. We hope you do too. We're always on the hunt for concepts and the like – if any come to mind, please leave them in the comments.
There really is so much more to be said on most of these points than we have room for here. However, we've tried to keep things brief, simplifying where necessary, while providing lots of links for those interested in reading further. Following the links will be particularly valuable for this post: many of these concepts seem obvious and scarcely worth saying when you simply read their definitions. It's only in seeing how they apply, and how that changes your thinking, that you can realise their value.
Signalling / countersignalling - the idea that an action conveys information to someone about the actor. Buying an expensive wedding ring conveys that you don't think you're about to run off immediately after the wedding (lest you lose three months of your salary). Countersignalling is when the information you're trying to signal is so obvious that you need not have signalled in the first place. For instance, Warren Buffett doesn't need to drive around in a Ferrari to signal how rich he is - everybody already knows that - and not driving around in a Ferrari differentiates him from the plebs who do need to drive one. Note: Buffett still lives in the Omaha house he bought for $31,500 back in 1958, despite being worth billions, but possibly for reasons other than countersignalling.
Nudge - the manner in which choices are presented to a target audience can have dramatic effects on their chosen action – even without changing the incentives which might typically motivate an individual to make a given choice. You can install News Feed Eradicator to decrease the likelihood of your future self spending all day scrolling through Facebook. The tax office can write letters saying that 90% of people who work the same job as you have completed their tax return already. Your electricity company can install a meter which constantly reminds you how many watts you're chewing through at a given time. Urinals are sometimes made with a small picture of a housefly in the centre as a target to reduce, err, puddles.
Marginal thinking - what are extra resources worth? If you have no bananas and you get a banana, it's probably worth more to you than if you already had one million bananas and got an extra one. This is an important concept because it leads to the realisation that not all bananas hold the same value to you. Slightly less trivially, what matters with your charitable donation is what happens with that donation, not the average of what the charity achieves. Extra donations to distribute bed nets could achieve more once distribution lines are already established. Conversely, the charity might already be distributing at capacity, so that extra donations buy bed nets at $10 instead of $5. Fortunately, the Against Malaria Foundation can still distribute bed nets with exceptional efficiency, and is a great target for your festive giving.
System 1 & System 2 thinking - when we’re making decisions, we use two different systems. System 1 is fast and subconscious, often described as our ‘gut feeling’. It has an edge in social situations or when time is a limiting factor. System 2 is slower and more methodical, better with models and numbers, and deliberate.
Comparative vs. absolute advantage - if you are better than me at both earning an income from your job and cleaning the house, you have an absolute advantage in both these activities. Does that mean that both of us are better off if you do both? Not necessarily. Suppose you earn $100/hour at work, 10 times my $10/hour, and further that you clean the house approximately twice as fast as I do. Of these two jobs, I'm 'least bad' at cleaning the house, so it might be (depending on the social costs that this arrangement incurs) that both of us would be better off if you paid me $25 an hour to clean your house. Generally, people should do the activity they're least bad at, thus working to their comparative advantage.
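If it helps to see the arithmetic, here's a minimal sketch in Python (the wages and cleaning speeds are the hypothetical ones from this example):

```python
# Comparative advantage as opportunity costs, using the figures above.
your_wage, my_wage = 100, 10            # $/hour at work
your_clean_time, my_clean_time = 1, 2   # hours to clean the house once

# Opportunity cost of cleaning = income forgone while doing it.
your_cost = your_wage * your_clean_time  # $100 of forgone earnings
my_cost = my_wage * my_clean_time        # $20 of forgone earnings

print(f"Opportunity cost of cleaning: yours ${your_cost}, mine ${my_cost}")
# Any price between $20 and $100 for the whole job leaves us both better
# off; the $25/hour in the text ($50 for my 2 hours) sits in that range.
```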
Efficient market hypothesis - there are many people approximately as clever as you who value approximately the same things you do. It's unlikely you'll be walking down the street and stumble on $100,000 sitting on the ground which no one else has picked up. In the same way, it's unlikely you'll be able to choose a company on the stock market that will do 100 times better than the average company which no one else has already found and invested in (driving up the price of its shares). This is the same reason why you might have a hard time finding a car park that is (i) free, (ii) right next to work, and (iii) somewhere you can park all day. Even though such car parks do exist, over time word gets out, and they are occupied in the short term or monetised in the long term. Everything is a market. Thus, taking the efficient market hypothesis into account, you should 1) look for the things you value in places that other people have systematically failed to look, and 2) suspect that if something looks too good to be true, it probably is.
Illusion of transparency - we tend to overestimate how much our mental state is known by others, a fact not helped by the ambiguity of the English language. “Elizabeth Newton created a simple test that she regarded as an illustration of the phenomenon. She would tap out a well-known song, such as "Happy Birthday" or the national anthem, with her finger and have the test subject guess the song. People usually estimate that the song will be guessed correctly in about 50 percent of the tests, but only 3 percent pick the correct song. The tapper can hear every note and the lyrics in his or her head; however, the observer, with no access to what the tapper is thinking, only hears a rhythmic tapping.” - Wikipedia. This phenomenon applies equally well to explanations of novel concepts and the interpretation of emotional states.
Opportunity cost - when we choose to take one option, we implicitly give up another. If you only have room for one meal and your favourite is the Pad Thai, you choose it. But this means you can't also get the Massaman curry (your second favourite). Economists call the forgone benefit of that next best alternative your opportunity cost. Opportunity costs are everywhere and form a critical part of decision making. If you're not donating to the very best charity, you're not helping others as much as you could. Likewise, if you don't spend your time in the best way possible, that too comes at a cost.
Cognitive biases - these are systematic flaws in how we think. We don't look for information that proves us wrong. We estimate that events we can easily recall (perhaps because they happened recently, or frequently appear in the media) are more likely to occur than they really are. We're overconfident, particularly relative to our level of expertise. And a whole bunch more. The plot thickens: it's only our friends and colleagues who are biased. (Author's caution: don't become fallacy (wo)man).
Heuristics - we use rules of thumb to make decisions under conditions of uncertainty (which turns out to be almost every single decision we make). This is both a blessing and a curse. They're quick, and useful when we don't have much information to deal with explicitly. But by their nature they require us to make generalisations that might not turn out to be true. It pays to notice when we are using them, because they can be wrong.
Counterfactual reasoning - what might have happened otherwise? You could push the paramedic out of the way and do the CPR yourself, but you’ll likely do a worse job. So even if you stop the patient from dying, your (counterfactual) impact is likely small, if not negative. Doctors wanting to make a big difference will do more good if they travel to a developing country where their patients might not have received quality healthcare if not for them.
Bayesian reasoning - assigning a probability before an event, receiving evidence, and updating the probability you assigned. This is beneficial insofar as it forces us to think probabilistically. Moreover, it allows us to account for competing evidence and promotes a nuanced view, thus avoiding a simplistic black-and-white application of 'good and bad' outcomes.
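For the numerically inclined, here's a worked Bayes' rule example (the test accuracies and the 1% base rate are invented for illustration):

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E).
# Illustrative numbers: a test with 90% sensitivity and a 5% false-positive
# rate, for a condition with a 1% base rate.
prior = 0.01
p_pos_if_sick = 0.90
p_pos_if_well = 0.05

p_pos = p_pos_if_sick * prior + p_pos_if_well * (1 - prior)
posterior = p_pos_if_sick * prior / p_pos
print(round(posterior, 3))  # ~0.154: a positive test moves you from 1% to ~15%
```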
Expected value - the probability of each outcome multiplied by the value of each outcome. For example, a 50% chance of winning $100 is worth $50 to you, and you should be willing to invest up to $50 for a chance to win (but no more). Applied example: If you have two mutually exclusive ways of helping people, should you take the 10% chance of helping 1 billion people, or the 90% chance at helping 1 million (an equal amount as the billion, and holding all else equal)? Without using expected value, this is a nearly impossible question to evaluate.
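The calculation is simple enough to fit in a few lines (a sketch using the numbers from the examples above):

```python
# Expected value = sum of probability * value over all outcomes.
def expected_value(outcomes):
    """outcomes: iterable of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

print(expected_value([(0.5, 100), (0.5, 0)]))  # 50.0, the coin-flip bet

# The applied example (values are people helped):
risky = expected_value([(0.10, 1_000_000_000), (0.90, 0)])  # 100,000,000
safe = expected_value([(0.90, 1_000_000), (0.10, 0)])       # 900,000
print(risky / safe)  # the risky option helps ~111x more people in expectation
```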
Time value of money - we'd prefer to have $100 today than tomorrow, or in a year's time. In a real sense, this implies that money today is worth more than money in the future. This preference comes from the opportunities we have to invest it at (say) 5% interest and gain $5 over that year. How much less you're willing to accept now rather than in a year is the discount rate (5% in this example).
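In code, discounting is one line (a sketch using the 5% rate from the example):

```python
# Present value: what a future amount is worth today at discount rate r.
def present_value(future_value, rate, years):
    return future_value / (1 + rate) ** years

print(round(present_value(100, 0.05, 1), 2))   # 95.24: $100 next year
print(round(present_value(100, 0.05, 10), 2))  # 61.39: discounting compounds
```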
Prisoner's dilemma / Tragedy of the commons - a problem in game theory that explains a lot of real-world problems. In this situation, there are two players choosing between cooperating and not cooperating. You both do well if you both cooperate. But if you don't cooperate while the other player tries to cooperate, you do even better. If neither of you cooperates, neither of you does well. And if you're the chump who tries to cooperate while the other player doesn't, you get screwed worst of all. Examples include countries individually benefitting economically from not limiting their carbon emissions, to everyone's detriment; athletes using performance-enhancing drugs to be more individually competitive, when all athletes would be worse off if everyone did it; and everyone in society benefiting from paying taxes to get roads, while each citizen would rather have the road and not pay taxes herself.
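The structure is easiest to see as a payoff table (these particular numbers are illustrative; any payoffs ranked the same way produce the same dilemma):

```python
# payoffs[(my_move, your_move)] = (my_payoff, your_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # both do well
    ("cooperate", "defect"):    (0, 5),  # the cooperating chump gets screwed
    ("defect",    "cooperate"): (5, 0),  # the defector does best of all
    ("defect",    "defect"):    (1, 1),  # neither does well
}

# Whatever you play, defecting scores me more (5 > 3 and 1 > 0), so two
# 'rational' players land on (1, 1) instead of the better (3, 3).
for your_move in ("cooperate", "defect"):
    best = max(("cooperate", "defect"),
               key=lambda my_move: payoffs[(my_move, your_move)][0])
    print(f"If you {your_move}, my best response is to {best}")
```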
Revealed preferences - talk is cheap, and our actions reveal more about our preferences than we'd like to believe. For instance, although I might say that Marmite is my favourite spread to put on toast, I actually buy Vegemite (#justaustralianthings). As such, my actions reveal what I really prefer. In another example, Tesla might survey thousands of Australians, of whom 60% say they'd pay extra for a greener car, yet when the car actually becomes available, only 10% follow through with their purchase.
Typical mind fallacy - the mistaken belief that others are having the same mental experiences that we are. As the links show, the differences go surprisingly far. Detailed questionnaires have shown that when some people 'see' a zebra in their mind's eye, they're simply recalling the idea of a zebra, whilst others can count the stripes!
The value of your time - you could use your time to earn more money, or to do something else that you value. Even if driving to the other side of town saves you $10 on groceries, if it takes you an extra hour, you're implicitly valuing your free time at $10 per hour. This might be a good deal if you only earn $6 an hour with the company you started after seeing the story of “This One Single Mom Makes $1000 Per Hour - Google HATES Her”. On the other hand, if your friend Grace can convert her free time into $50 an hour at work, then for her, driving the extra hour is a poor use of time. What's more, if both Grace and you are volunteering alongside each other in a soup kitchen where you'd happily work for $10 an hour, Grace should consider working instead, donating the money she earns, and employing 5 people like you to run the kitchen for each hour she's at work.
Fundamental attribution error - attributing others' actions to their personality, but your actions to your situation. You see someone kick the vending machine and assume it must be because they are an angry person. But when you kick the vending machine, it’s because you were hot and thirsty, your boss is an asshole, and the vending machine stole your last goddam dollar.
Aumann's agreement theorem - if two rational agents disagree when they both have exactly the same information, one of them must be wrong. The 'same information' qualifier is crucial; very often disagreement is due to differences in information, not failure of rationality. So you should take disagreement seriously if you have reason to believe the other person has significant information that you don't.
Bikeshedding - substituting an easy and inconsequential problem for a hard and important one. In Parkinson's famous example, a committee asked to approve a nuclear power plant dedicates a disproportionate amount of time to designing the bike shed - which materials it should be made of, where it should be located, etc. These are the kinds of questions that everyone is clever enough to contribute to, and where everyone likely wants their opinion heard. Unfortunately, they might be discussed at the cost of crucial details, like how to prevent the power plant from killing everyone.
Meme - the social equivalent of a gene, except instead of encoding proteins, memes represent rituals, behaviours, and ideas. They self-replicate (when they're passed from one person to another), mutate, and some are selected for in a population - these are the ones that get transmitted. As with selfish genes, the most useful memes don't necessarily get reproduced; rather, it is the ones with traits that favour their own transmission. Memes are interesting because of the predictions they allow you to make. For instance, there should be few major social movements in society that don't carry the meme “find (or create) other like-minded people to join the movement”.
Algernon's Law - IQ is nearly impossible to improve in healthy people without tradeoffs. If we could get smarter by a simple mechanism, like upregulating acetylcholine, which had no negative side effects (in the ancestral environment), then perhaps evolution would have upregulated acetylcholine already. We could equate this with the ‘efficient market hypothesis’ of improving brain function.
Social intuitionism - moral judgements are made predominantly on the basis of intuition, which is followed by rationalisation. For example, slavery was previously a social norm and thought to be acceptable. When asked a question (“is slavery good?”), proponents of slavery introspected an answer (“yes”), and confabulated a reason (“because they’re not actually human” or similar atrocious falsehoods). As this example shows, blind faith in our intuitions can be harmful and counterproductive, because they can be easily corrupted by immoral social norms. It wasn’t just that proponents of slavery got the wrong answer: they had — and we still have — the wrong process.
Apophenia - the natural tendency of humans to see patterns in random noise. E.g. hearing satanic messages when playing songs in reverse.
Goodhart's law / Campbell's law - what gets measured gets managed (and then fails to be a good measure of what it was originally intended to capture). If we use GDP as a measure of the prosperity of a nation, and there are incentives to show that prosperity is increasing, then nations could 'hack' this metric. In turn, this may raise GDP while failing to improve living conditions in the country.
Moral licensing - doing less good after you feel like you've done some good. After donating to charity in the morning, we’re less likely to hold the door open for someone in the afternoon.
Chesterton's fence - if you don't see a purpose for a fence, don't pull it down, lest you be run over by an angry bull that was behind a tree. If you can't see the purpose of a social norm, like whether there is any value in having the family unit, you shouldn't discard it without having thought hard about why it is there. Asking people who endorse the family unit why it's important and noticing that their reasoning is flawed often isn't good enough either.
Peltzman effect - taking more risks when you feel more safe. When seatbelts were first introduced, motorists actually drove faster and closer to the car in front of them.
Semmelweis reflex - not evaluating evidence on the basis of its merits and instead rejecting it because it contradicts existing norms, beliefs and paradigms. Similar to the status quo bias (which one might choose to combat with the reversal test).
Bateman's principle - the most sexually selective sex will be that which has the most costs from rearing the offspring. In humans, women have more costs associated with raising offspring (maternal mortality, breastfeeding, rearing) and are the more selective sex. In seahorses, the opposite is true: male seahorses carry the offspring in their pouch, and are more selective with those they choose to mate with.
Hawthorne effect - people react differently when they know they are being observed.
Bulverism - dismissing a claim on the basis of how the opponent got there, rather than a reasoned rebuttal. For example, "but you're just biased!" or “of course you’d believe that, you’re scared of the alternative.”
Flynn effect - IQ scores have increased by roughly 3 points per decade since around the 1930s. It is hotly debated why this is happening, and whether the trend will continue.
Schelling point - a natural point that people will converge upon independently. For instance, if you and I have to meet in Sydney on a particular day, and we don’t know when or where, we might go to where we normally meet, or fall back on meeting at the Opera House at 12PM. This is an important effect in situations where coordination is essential but explicit discussion is difficult. Furthermore, participants may not even realise it’s happening. For example, “I don’t want to live in a society where genetic enhancement of children increases the gap between the rich and poor. Unfortunately, there’s no clear place at which to say ‘wait, we’re about to reach that society now, let’s stop enhancement!’ Perhaps my comrades and I should instead object against any genetic manipulation at all, including selecting for embryos without cystic fibrosis (even if we wouldn’t mind that particular selection occurring).”
Local vs global optima – see the carefully crafted image by yours truly below. In short, you might need to make things worse in order to get to a global optimum - the best possible place to be. If you want to earn more, you might have to sacrifice the number of hours you can work in the short term in order to take a course which will allow you to increase your income in the future.
Anthropic principle - suppose you take a sleeping pill and, while you sleep, a coin is flipped: you'll be woken twice if it lands heads, and once if tails. You wake up. What is the chance that today is a day on which you wake twice? Similarly, what is the chance that we're in one of the only universes capable of supporting life?
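One way to put a number on the sleeping-pill puzzle is to count awakenings over many simulated runs. (This is the 'thirder' way of counting; philosophers disagree over whether it's the right one.)

```python
import random

# Heads: you're woken twice. Tails: once. Across many runs, what fraction
# of awakenings happen on a wake-twice (heads) run?
heads_wakeups = total_wakeups = 0
for _ in range(100_000):
    if random.random() < 0.5:   # heads
        heads_wakeups += 2
        total_wakeups += 2
    else:                       # tails
        total_wakeups += 1

print(heads_wakeups / total_wakeups)  # ~0.667, not the naive 0.5
```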
Arbitrage - taking advantage of different prices between markets for the same products.
Chaos theory - “Chaos: When the present determines the future, but the approximate present does not approximately determine the future.” - Edward Lorenz.
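A quick way to see this is the logistic map, a standard chaotic system (our example, not from the quote): two starting points differing by one part in a billion soon bear no resemblance to each other.

```python
# Logistic map x -> r * x * (1 - x) with r = 3.9, a chaotic setting.
x, y = 0.5, 0.5 + 1e-9  # the 'present' and the 'approximate present'
for _ in range(50):
    x, y = 3.9 * x * (1 - x), 3.9 * y * (1 - y)

print(x, y)  # after 50 steps the two trajectories have fully diverged
```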
Ingroup and outgroup psychology - this doesn’t just explain phenomena like xenophobia, but also the left-right political divide. Our circle of concern is probably expanding over time.
Red Queen hypothesis - organisms need to be constantly evolving to keep up with the offenses and defenses of their predators and prey, respectively. This is probably the effect we should thank for the existence of sex.
Schelling's segregation - even when groups only have a mild preference to be around others with a similar characteristic – say, a preference for playing baseball – neighbourhoods will segregate on this basis.
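Here's a minimal one-dimensional sketch of Schelling's model (the 30% threshold, neighbourhood size, and grid are our arbitrary choices, not Schelling's originals):

```python
import random

random.seed(0)
line = [random.choice("AB.") for _ in range(60)]  # two types, '.' = empty

def unhappy(i):
    """Unhappy if under 30% of nearby agents share this agent's type."""
    if line[i] == ".":
        return False
    near = [line[j] for j in range(max(0, i - 2), min(len(line), i + 3))
            if j != i and line[j] != "."]
    return bool(near) and sum(n == line[i] for n in near) / len(near) < 0.3

for _ in range(500):  # unhappy agents move to random empty cells
    movers = [i for i in range(len(line)) if unhappy(i)]
    if not movers:
        break
    empties = [i for i in range(len(line)) if line[i] == "."]
    i, j = random.choice(movers), random.choice(empties)
    line[i], line[j] = ".", line[i]

print("".join(line))  # mild preferences, yet clusters of A's and B's emerge
```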
Pareto improvement - a change that makes at least one person better off, without making anyone worse off.
Occam’s razor/ law of parsimony - among competing hypotheses, the one with the fewest assumptions should be selected.
Regression toward the mean - if you get an extreme result (in a normal distribution) once, additional results are likely to be closer to the average. If one trial suggests that health supplement x is amazingly better than all the others, you shouldn't put all your faith in that result.
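A quick simulation shows why the winning supplement usually disappoints on a retest (all numbers invented for illustration):

```python
import random

random.seed(1)
true_effects = [random.gauss(0, 1) for _ in range(50)]      # 50 supplements
trial_one = [t + random.gauss(0, 1) for t in true_effects]  # effect + noise

best = max(range(50), key=lambda i: trial_one[i])           # pick the winner
trial_two = true_effects[best] + random.gauss(0, 1)         # retest it

# The winner's first score was inflated by lucky noise that won't repeat,
# so the retest usually lands closer to the mean.
print(f"trial 1: {trial_one[best]:.2f}, retest: {trial_two:.2f}")
```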
Cognitive dissonance - holding two conflicting beliefs makes us feel uncomfortable, and leads us to reject (at least) one of them. For instance, the belief that science is a useful way to discover the truth conflicts with the belief that vaccines cause autism. If you held both of these beliefs, one would have to be discarded.
Coefficient of determination - how well a model fits or explains data (i.e. how much of the variance it explains).
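In symbols, R² = 1 - SS_res / SS_tot: one minus the share of variance the model leaves unexplained. A minimal sketch (the data points are made up):

```python
def r_squared(y, y_pred):
    """R^2 = 1 - (residual sum of squares) / (total sum of squares)."""
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, y_pred))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

print(round(r_squared([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]), 3))  # 0.98, a close fit
```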
Godwin's Law - as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1. At this point, all hope of meaningful conversation is lost, and the discussion must stop.
Commitment and consistency - if people commit to an idea or goal, they are more likely to honour the commitment if establishing that idea or goal is consistent with their self-image. Even if the original incentive or motivation is removed after they have agreed, they will continue to honour the agreement. For example, once someone has committed themselves to a viewpoint in an argument, rarely do they change ‘sides’.
Affective forecasting - predicting how happy you will be in the future contingent upon some event or change is hard. We normally guess that the magnitude of the effect will be larger than it actually is, because 1) we’ve just been asked to think about that event, which makes it seem more important than it actually is in our everyday happiness, and 2) we underestimate our ability to adapt.
Fermi calculation (aka ‘back of the envelope calculation’) - involves using a few guesses or known numbers to come to an educated guess about some value of interest. To illustrate, one could approximately calculate the number of piano tuners in Chicago by plugging a few estimates like population, percentage of houses with a piano, and piano tuner productivity into a calculation.
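Here's the piano-tuner estimate spelled out (every input is a rough guess, which is the point of the exercise):

```python
# Fermi estimate: how many piano tuners are there in Chicago?
population = 2_500_000                # rough population of Chicago
people_per_household = 2.5
share_with_piano = 1 / 20             # households owning a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4
working_days_per_year = 250

pianos = population / people_per_household * share_with_piano  # 50,000
demand = pianos * tunings_per_piano_per_year                   # tunings/year
capacity = tunings_per_tuner_per_day * working_days_per_year   # 1,000/tuner
print(round(demand / capacity))  # ~50 piano tuners
```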
Fermi's paradox - the problem we confront when we run a Fermi calculation on the number of habitable planets in our galaxy and realise how weird it is that we seem to be alone. It leads to some interesting conclusions.
I’d love to hear any suggestions for other concepts to add to our cognitive toolkits in the comments.
Co-written by Brenton Mayer and Peter McIntyre. Thanks to Daniel D’Hotman for reviewing an earlier draft. Most importantly, we've come across these ideas thanks to many clever people in the effective altruism movement trying to understand the world so that they can improve it. The post has been extensively updated based on some excellent comments - thanks to those contributors too.