52 Concepts To Add To Your Cognitive Toolkit

Edit: On the basis of the success of this article, we started Conceptually, which sends a free concept to your inbox every week (you can also just read them on the website). It actually explains and applies the concepts (unlike below), and thanks to the spacing effect, makes it more likely you’ll implement them. Basically, it’s better in every single way, so head on over to get started.

Concepts change the way we think. Before we knew about evolution, we couldn’t explain much of what was going on around us; for example, how there came to be such a diversity of species on earth. We’ve collated a list of some concepts that have changed the way we think about the world that might be useful additions to your cognitive toolkit.

Not all of them will be useful to you – in fact, you’ll find some to be downright useless. People seem to be at different stages in their patterns of thinking: perhaps at your stage #7 will be too obvious or inapplicable, while #26 will be just the idea you need to help answer some problems you’ve been thinking about. We wrote this list because we’re sure our past selves would have appreciated such a list. We hope you do too. We’re always on the hunt for concepts and the like – if any come to mind, please leave them in the comments.

There really is so much more to be said on most of these points than we have room for here. However, we’ve tried to keep things brief, simplifying where necessary, while providing lots of links for those who are interested in reading further. Following the links will be particularly valuable for this post, as many of these concepts will seem obvious and scarcely worth saying when you simply read their definition. It’s only in seeing their application, and how that changes your thinking, that you can realise their value.

  1. Signalling/ countersignalling – the idea that an action conveys information to someone about the actor. Buying an expensive wedding ring conveys that you don’t think you’re about to run off immediately after the wedding (lest you lose three months of your salary). Countersignalling is when the information you’re trying to signal is so obvious that you need not have signalled in the first place. For instance, Warren Buffett doesn’t need to drive around in a Ferrari to signal how rich he is – everybody already knows that – and not driving around in a Ferrari differentiates him from the plebs that do need to drive one. Note: Buffett still lives in his house in Omaha that cost $31,000 back in 1958, despite being worth billions, but possibly for reasons other than countersignalling.
    Buffett’s house.

  2. Nudge – the manner in which choices are presented to a target audience can have dramatic effects on their chosen action – even without changing the incentives which might typically motivate an individual to make a given choice. You can install News Feed Eradicator to decrease the likelihood of your future self spending all day scrolling through Facebook. The tax office can write letters saying that 90% of people who work the same job as you have completed their tax return already. Your electricity company can install an electricity meter which constantly reminds you how many watts you’re chewing through at a given time. Male urinals are sometimes made with a small and shoddy picture of a housefly in the centre of them as a target to reduce, err, puddles.
  3. Marginal thinking – what are extra resources worth? If you have no bananas and you get a banana, it’s probably worth more to you than if you already had one million bananas and got an extra one. This is an important concept because it leads to the realisation that not all the bananas hold the same value to you. Slightly less trivially, what matters with your charitable donation is what happens with that donation, not the average of what the charity achieves. Extra donations to distribute bed nets could achieve more once the distribution lines are already established. Conversely, the charity might already be distributing at capacity, so that extra donations buy bed nets for $10 instead of $5. Fortunately, the Against Malaria Foundation will still be able to distribute bed nets at exceptional efficiency, and is a great target for your festive giving.
  4. System 1 & System 2 thinking – when we’re making decisions, we use two different systems. System 1 is fast and subconscious, often described as our ‘gut feeling’. It has an edge in social situations or when time is a limiting factor. System 2 is slower and more methodical, better with models and numbers, and deliberate.
    System 1 and System 2

  5. Comparative vs. absolute advantage – if you are better than me at earning an income from your job and cleaning the house, you have an absolute advantage in both these activities. Does that mean that both of us are better off if you do both? Not necessarily. Suppose you earn $100/hour at work, 10 times more than my $10/hour and further that you clean the house approximately twice as fast as I do. Of these two jobs, I’m ‘least bad’ at cleaning the house, so it might be (depending on the social costs that this arrangement incurs) that both of us would be better off if you paid me $25 an hour to clean your house. Generally, people should do the action that they’re the least bad at, thus working to their comparative advantage.
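
If you like seeing the arithmetic spelled out, here’s a rough sketch of the example above in Python (the wages and cleaning speeds are just the made-up figures from the paragraph, not data):

```python
# Illustrative numbers from the example above (not real data).
your_wage = 100           # $/hour you earn at work
my_wage = 10              # $/hour I earn at work
your_cleaning_hours = 1   # it takes you one hour to clean the house
my_cleaning_hours = 2     # I clean about half as fast

# Opportunity cost of cleaning = the wages forgone while cleaning.
your_cost = your_wage * your_cleaning_hours   # $100 of forgone earnings
my_cost = my_wage * my_cleaning_hours         # $20 of forgone earnings

# You have the absolute advantage at both jobs, but I have the
# comparative advantage at cleaning: I give up less to do it.
# Any cleaning fee between $20 and $100 leaves us both better off,
# e.g. $25/hour for 2 hours = $50.
fee = 25 * my_cleaning_hours
print(your_cost - fee)  # 50: what you gain by paying me rather than cleaning
print(fee - my_cost)    # 30: what I gain over working my own job instead
```
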
  6. Efficient market hypothesis – there are many people approximately as clever as you who value approximately the same things you do. It’s unlikely you’ll be walking down the street and stumble on $100,000 sitting on the ground which no one else has picked up. In the same way, it’s unlikely you’ll be able to choose a company on the stock market that will do 100 times better than the average company which no one else has already found and invested in (driving up the price of its shares). This is the same reason why you might have a hard time finding a car park that is (i) free, (ii) right next to work, and (iii) somewhere you can park all day. Even though such car parks do exist, over time word gets out, and they are occupied in the short term or monetised in the long term. Everything is a market. Thus, when taking the efficient market hypothesis into account, you should 1) look for the things you value in places that other people have systematically failed to look, and 2) think that if something looks too good to be true, it probably is.
  7. Illusion of transparency – we tend to overestimate how much our mental state is known by others, a fact not helped by the ambiguity of the English language. “Elizabeth Newton created a simple test that she regarded as an illustration of the phenomenon. She would tap out a well-known song, such as “Happy Birthday” or the national anthem, with her finger and have the test subject guess the song. People usually estimate that the song will be guessed correctly in about 50 percent of the tests, but only 3 percent pick the correct song. The tapper can hear every note and the lyrics in his or her head; however, the observer, with no access to what the tapper is thinking, only hears a rhythmic tapping.” – Wiki. This phenomenon applies equally well to explanations of novel concepts and the interpretation of emotional states.
  8. Opportunity cost – when we choose to take one option, we are implicitly not taking another. If you only have enough room for one meal and your favourite is the Pad Thai, you choose it. But this means you can’t also get the Massaman curry (your second favourite). Economists call the forgone benefit of that next best alternative your opportunity cost. Opportunity costs are everywhere and form a critical part of decision making. If you’re not donating to the very best charity, you’re not helping others as much as you could. Likewise, if you don’t spend your time in the best way possible, that too comes at a cost.
  9. Cognitive biases – these are systematic flaws in how we think. We don’t look for information that proves us wrong. We estimate that events we can easily recall (perhaps because they happened more recently, or frequently appear in the media) are more likely to occur than they really are. We’re overconfident, particularly relative to our level of expertise. And a whole bunch more. The plot thickens: it’s only our friends and colleagues who are biased. (Author’s caution: don’t become fallacy (wo)man).
  10. Heuristics – we use rules of thumb to make decisions under conditions of uncertainty (which turns out to be almost every single decision we make). This is both a blessing and a curse. They’re quick, and useful when we don’t have much information we could deal with explicitly, but by their nature they require generalisations that might not turn out to be true. It pays to notice when we are using them, because they can be wrong.
  11. Counterfactual reasoning – what might have happened otherwise? You could push the paramedic out of the way and do the CPR yourself, but you’ll likely do a worse job. So even if you stop the patient from dying, your (counterfactual) impact is likely small, if not negative. Doctors wanting to make a big difference will do more good if they travel to a developing country where their patients might not have received quality healthcare if not for them. 
  12. Bayesian reasoning – assigning a probability before an event, receiving evidence, and updating the probability you assigned. This is beneficial insofar as it forces us to think probabilistically. Moreover, it allows us to account for competing evidence and promotes a nuanced view, thus avoiding a simplistic black and white application of ‘good and bad’ outcomes.
    Bayes’ Theorem
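
For the calculation-minded, here’s a minimal sketch of a single Bayesian update in Python. The disease and test numbers are invented purely for illustration:

```python
# A minimal Bayesian update: how likely is the disease given a positive test?
prior = 0.01            # P(disease) before seeing any evidence
p_pos_given_d = 0.90    # P(positive test | disease)
p_pos_given_not = 0.05  # P(positive test | no disease)

# Bayes' theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_pos = p_pos_given_d * prior + p_pos_given_not * (1 - prior)
posterior = p_pos_given_d * prior / p_pos

print(round(posterior, 3))  # ~0.154 – far lower than most people's gut answer
```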

  13. Expected value – the probability of each outcome multiplied by the value of that outcome, summed over all the outcomes. For example, a 50% chance of winning $100 is worth $50 to you, and you should be willing to invest up to $50 for a chance to win (but no more). Applied example: if you have two mutually exclusive ways of helping people, should you take the 10% chance of helping 1 billion people, or the 90% chance of helping 1 million (helping each person by the same amount, and holding all else equal)? Without using expected value, this is a nearly impossible question to evaluate.
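
Running the numbers on that applied example makes the point (these are just the hypothetical figures from above):

```python
# Expected value of the two hypothetical options.
ev_long_shot = 0.10 * 1_000_000_000  # 100,000,000 people helped in expectation
ev_safe_bet = 0.90 * 1_000_000       #     900,000 people helped in expectation

print(ev_long_shot, ev_safe_bet)
# The long shot is worth over 100 times more in expectation, even though it
# usually fails – a conclusion that's hard to reach without the concept.
```
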
  14. Time value of money – we’d prefer to have $100 today than tomorrow, or in a year’s time. In a real sense, this implies that money today is worth more than money in the future. This preference comes from the opportunities we have to invest it at (say) 5% interest and gain $5 over the year. The percentage by which you discount money received in a year relative to money received now is the discount rate (5% in this example).
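
A quick sketch of the discounting arithmetic, using the 5% rate from the example:

```python
# Present value of $100 received in a year, discounted at 5%.
rate = 0.05
future_value = 100
years = 1

present_value = future_value / (1 + rate) ** years
print(round(present_value, 2))  # 95.24 – $100 next year is worth about $95 today

# Equivalently, ~$95.24 invested today at 5% grows to $100 in a year.
print(round(95.24 * (1 + rate), 2))  # ~100.0
```
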
  15. Prisoner’s dilemma/ Tragedy of the commons – a problem in game theory that explains a lot of real world problems. In this situation, there are two players choosing between cooperating and not cooperating. You both do well if you both cooperate. But if you don’t cooperate while the other player tries to cooperate, you do even better. If neither of you cooperates, neither of you does well. And if you’re the chump who tries to cooperate while the other player doesn’t, you get screwed worst of all. Examples include countries individually benefitting economically from not limiting their carbon emissions, to everyone’s detriment; athletes using performance enhancing drugs to be more individually competitive, when all athletes would be worse off if everyone did it; and everyone in society benefiting from paying taxes to get roads, while each citizen would rather have the road and not pay taxes herself.
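
Here’s one standard way of writing the payoffs down (the exact numbers are illustrative; only their ordering matters):

```python
# A standard prisoner's dilemma payoff matrix (years in prison, so lower is better).
payoffs = {  # (my_move, your_move) -> (my_years, your_years)
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (5, 0),  # I'm the chump
    ("defect",    "cooperate"): (0, 5),
    ("defect",    "defect"):    (3, 3),
}

# Whatever you do, I spend less time in prison by defecting...
for your_move in ("cooperate", "defect"):
    my_years_defect = payoffs[("defect", your_move)][0]
    my_years_coop = payoffs[("cooperate", your_move)][0]
    print(your_move, my_years_defect < my_years_coop)  # True both times

# ...so two 'rational' players both defect and land on (3, 3)
# instead of the mutually better (1, 1).
```
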
  16. Revealed preferences – talk is cheap, and our actions reveal more about our preferences than we’d like to believe. For instance, although I might say that Marmite is my favourite spread to put on toast, I actually buy Vegemite (#justaustralianthings). As such, my actions reveal what I really prefer. In another example, while Tesla might survey thousands of Australians, of which 60% say they’d pay extra for a greener car, when the car actually becomes available, only 10% follow through with their purchase.
  17. Typical mind fallacy – the mistaken belief that others are having the same mental experiences that we are. As the links show, the differences go surprisingly far. Detailed questionnaires have shown that when some people ‘see’ a zebra in their mind’s eye, they’re simply recalling the idea of a zebra, whilst others can count the stripes!
  18. The value of your time – You could use your time to earn more money, or do something else that you value. Even if driving to the other side of town saves you $10 on groceries, if it takes you an extra hour, you’re implicitly valuing your free time at $10 per hour. This might be a good deal if you only earn $6 an hour with the company you started after seeing the story of “This One Single Mom Makes $1000 Per Hour – Google HATES Her”. On the other hand if your friend Grace can convert her free time into $50 an hour at work, then for her driving the extra hour is a poor use of time. What’s more, if both Grace and yourself are volunteering alongside each other in a soup kitchen when you’d be happy to work there for $10 an hour, Grace should consider working instead, donating the money she earns, and employing 5 people like you to run the kitchen per hour she is at work.
  19. Fundamental attribution error – attributing others’ actions to their personality, but your actions to your situation. You see someone kick the vending machine and assume it must be because they are an angry person. But when you kick the vending machine, it’s because you were hot and thirsty, your boss is an asshole, and the vending machine stole your last goddam dollar.
  20. Aumann’s agreement theorem – if two rational agents disagree when they both have exactly the same information, one of them must be wrong. The ‘same information’ qualifier is crucial; very often disagreement is due to differences in information, not failure of rationality. So you should take disagreement seriously if you have reason to believe the other person has significant information that you don’t.
  21. Bikeshedding – substituting an easy and inconsequential problem for a hard and important one. Parkinson observed that a committee asked to approve a nuclear power plant dedicated a disproportionate amount of time to the design of the bike shed – which materials it should be made of, where it should be located, etc. These are the kinds of questions that everyone is clever enough to contribute to, and where everyone likely wants to have their opinions heard. Unfortunately, they might be discussed at the cost of crucial details, like how to prevent the power plant from killing everyone.
  22. Meme – the social equivalent of a gene, except instead of encoding proteins, they represent rituals, behaviours, and ideas. They self-replicate (when they’re passed from one person to another), mutate, and some are selected for in a population – these are the ones that get transmitted. Like selfish genes, the most useful memes don’t necessarily get reproduced; rather, it is the memes with traits that favour their own transmission that spread. Memes are interesting because of the predictions they allow you to make. For instance, there should be few major social movements in society that don’t have the meme “find (or create) other like-minded people to join the movement”.
    Dawkins invented the meme

  23. Algernon’s Law – IQ is nearly impossible to improve in healthy people without tradeoffs. If we could get smarter by a simple mechanism, like upregulating acetylcholine, which had no negative side effects (in the ancestral environment), then perhaps evolution would have upregulated acetylcholine already. We could equate this with the ‘efficient market hypothesis’ of improving brain function.
  24. Social intuitionism – moral judgements are made predominantly on the basis of intuition, which is followed by rationalisation. For example, slavery was previously a social norm and thought to be acceptable. When asked a question (“is slavery good?”), proponents of slavery introspected an answer (“yes”), and confabulated a reason (“because they’re not actually human” or similar atrocious falsehoods). As this example shows, blind faith in our intuitions can be harmful and counterproductive, because they can be easily corrupted by immoral social norms. It wasn’t just that proponents of slavery got the wrong answer: they had — and we still have —  the wrong process.
  25. Apophenia – the natural tendency of humans to see patterns in random noise. E.g. hearing satanic messages when playing songs in reverse.
Jesus?

  26. Goodhart’s law/ Campbell’s law – what gets measured, gets managed (and then fails to be a good measure of what it was originally intended for). If we use GDP as a measure of prosperity of a nation, and there are incentives to show that prosperity is increasing, then nations could ‘hack’ this metric. In turn, this may raise GDP while failing to improve the living conditions in the country.
  27. Moral licensing – doing less good after you feel like you’ve done some good. After donating to charity in the morning, we’re less likely to hold the door open for someone in the afternoon.
  28. Chesterton’s fence – if you don’t see a purpose for a fence, don’t pull it down, lest you be run over by an angry bull which was behind a tree. If you can’t see the purpose of a social norm, like whether there is any value to having a family unit, you shouldn’t ignore it without having thought hard about why it is there. Asking people who endorse the family unit why it’s important and noticing that their reasoning is flawed often isn’t good enough either.
  29. Peltzman effect – taking more risks when you feel safer. When seatbelts were first introduced, motorists actually drove faster and closer to the car in front of them.
  30. Semmelweis reflex – not evaluating evidence on the basis of its merits and instead rejecting it because it contradicts existing norms, beliefs and paradigms. Similar to the status quo bias (which one might choose to combat with the reversal test).
  31. Bateman’s principle – the most sexually selective sex will be the one that bears the greater cost of rearing offspring. In humans, women bear more of the costs associated with raising offspring (maternal mortality, breastfeeding, rearing) and are the more selective sex. In seahorses, the opposite is true: male seahorses carry the offspring in their pouch, and are more selective about those they choose to mate with.
  32. Hawthorne effect – people react differently when they know they are being observed.
  33. Bulverism – dismissing a claim on the basis of how the opponent got there, rather than a reasoned rebuttal. For example, “but you’re just biased!” or “of course you’d believe that, you’re scared of the alternative.”
  34. Flynn effect – IQ has increased by 3 points per decade since around the 1930s. It is hotly debated why this is happening, and whether the trend will continue.
  35. Schelling point – a natural point that people will converge upon independently. For instance, if you and I have to meet in Sydney on a particular day, and we don’t know when or where, we might go to where we normally meet, or fall back on meeting at the Opera House at 12PM. This is an important effect in situations where coordination is essential but explicit discussion is difficult. Furthermore, participants may not even realise it’s happening. For example, “I don’t want to live in a society where genetic enhancement of children increases the gap between the rich and poor. Unfortunately, there’s no clear place at which to say ‘wait, we’re about to reach that society now, let’s stop enhancement!’ Perhaps my comrades and I should instead object against any genetic manipulation at all, including selecting for embryos without cystic fibrosis (even if we wouldn’t mind that particular selection occurring).”
  36. Local vs global optima – see the carefully crafted image by yours truly below. In short, you might need to make things worse in order to get to a global optimum – the best possible place to be. If you want to earn more, you might have to sacrifice the number of hours you can work in the short term in order to take a course which will allow you to increase your income in the future.
    Local vs. Global Optima. We can often get stuck in a local optimum – it’s a tradeoff between exploration and exploitation.
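
For the programmers, here’s a toy hill climber that falls into the trap – the function and step size are invented purely for illustration:

```python
# A toy hill climber that gets stuck in a local optimum.
def f(x):
    # Two peaks: a local optimum near x = 1.5 and a global one near x = 4.5.
    return -(x - 1.5) ** 2 + 2 if x < 3 else -(x - 4.5) ** 2 + 5

def hill_climb(x, step=0.1):
    # Keep taking small uphill steps until no neighbour is higher.
    while f(x + step) > f(x) or f(x - step) > f(x):
        x = x + step if f(x + step) > f(x - step) else x - step
    return x

print(hill_climb(0.0))  # ~1.5: stuck on the local peak (height ~2)
print(hill_climb(3.5))  # ~4.5: finds the global peak (height ~5) only because it started nearby
```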

  37. Anthropic principle – you take a sleeping pill and will be woken twice if a coin lands heads, once if it lands tails. You wake up. What is the chance the coin landed heads, i.e. that this is one of the awakenings on which you are woken twice? Similarly, what is the chance that we’re in one of the only universes that is capable of supporting life?
  38. Arbitrage – taking advantage of different prices between markets for the same products.
  39. Chaos theory – “Chaos: When the present determines the future, but the approximate present does not approximately determine the future.”
    A double rod pendulum animation showing chaotic behaviour. Starting the pendulum from a slightly different initial condition would result in a completely different trajectory.

  40. Ingroup and outgroup psychology – this doesn’t just explain phenomena like xenophobia, but also the left-right political divide. Our circle of concern is probably expanding over time.
  41. Red Queen hypothesis – organisms need to be constantly evolving to keep up with the offenses and defenses of their predators and prey, respectively. This is probably the effect we should thank for the existence of sex.
  42. Schelling’s segregation – even when groups only have a mild preference to be around others with a similar characteristic – say, a preference for playing baseball – neighbourhoods will segregate on this basis.
  43. Pareto improvement – a change that makes at least one person better off, without making anyone worse off.
  44. Occam’s razor/ law of parsimony – among competing hypotheses, the one with the fewest assumptions should be selected.
  45. Regression toward the mean – if you get an extreme result (in a normal distribution) once, additional results are likely to be closer to the average. If one trial suggests that health supplement x is amazingly better than all the others, you shouldn’t put all your faith in that result.
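
A quick simulation makes the effect vivid (the skill and luck numbers are invented purely for illustration):

```python
# Performance = stable skill + one-off luck. Whoever tops test 1 was
# probably both good *and* lucky, so on test 2 they score closer to the mean.
import random

random.seed(0)
skill = [random.gauss(100, 10) for _ in range(10_000)]
test1 = [s + random.gauss(0, 10) for s in skill]
test2 = [s + random.gauss(0, 10) for s in skill]

top = sorted(range(10_000), key=lambda i: test1[i], reverse=True)[:500]  # top 5% on test 1
print(sum(test1[i] for i in top) / 500)  # roughly 129
print(sum(test2[i] for i in top) / 500)  # roughly 115 – same people, pulled back toward 100
```
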
  46. Cognitive dissonance – holding two conflicting beliefs causes us to feel slightly uncomfortable, and to reject (at least) one of them. For instance, holding the belief that science is a useful way to discover the truth would conflict with another belief that vaccines cause autism. If you held both of these beliefs, one would have to be discarded.
  47. Coefficient of determination – how well a model fits or explains data (i.e. how much of the variance it explains).
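
For the formula-inclined, a minimal sketch of the calculation (the data and predictions are made up):

```python
# R²: the share of the variance in y that a model's predictions account for.
def r_squared(y, y_pred):
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, y_pred))  # unexplained variation
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)               # total variation
    return 1 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0, 5.0]
y_pred = [1.1, 1.9, 3.2, 3.8, 5.1]  # made-up predictions from some model
print(round(r_squared(y, y_pred), 3))  # 0.989 – the model explains ~99% of the variance
```
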
  48. Godwin’s Law – as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1. At this point, all hope of meaningful conversation is lost, and the discussion must stop.

Godwin's Law

  49. Commitment and consistency – if people commit to an idea or goal, they are more likely to honour the commitment, because that idea or goal becomes consistent with their self-image. Even if the original incentive or motivation is removed after they have agreed, they will continue to honour the agreement. For example, once someone has committed themselves to a viewpoint in an argument, rarely do they change ‘sides’.
  50. Affective forecasting – predicting how happy you will be in the future contingent upon some event or change is hard. We normally guess that the magnitude of the effect will be larger than it actually is, because 1) we’ve just been asked to think about that event, which makes it seem more important than it actually is in our everyday happiness, and 2) we underestimate our ability to adapt.
  51. Fermi calculation (aka ‘back of the envelope calculation’) – involves using a few guesses or known numbers to come to an educated guess about some value of interest. To illustrate, one could approximately calculate the number of piano tuners in Chicago by plugging a few estimates like population, percentage of houses with a piano, and piano tuner productivity into a calculation (see the sketch below).
  52. Fermi’s paradox – the problem we confront when we run a Fermi calculation on the number of habitable planets in our galaxy and realise how weird it is that we seem to be alone. It leads to some interesting conclusions.
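
Here’s the classic piano-tuner estimate from #51 sketched in Python – every input below is a deliberately rough guess, which is the whole point of a Fermi calculation:

```python
# Roughly how many piano tuners are there in Chicago?
population = 3_000_000                   # people in Chicago (rough guess)
people_per_household = 2.5
households_with_piano = 1 / 20           # guess: 5% of households own a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_year = 4 * 5 * 50  # 4 a day, 5 days a week, 50 weeks a year

pianos = population / people_per_household * households_with_piano
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))  # ~60 – getting the order of magnitude right is all we want
```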


I’d love to hear any suggestions for other concepts to add to our cognitive toolkits in the comments.

Co-written by Brenton Mayer and Peter McIntyre. Thanks to Daniel D’Hotman for reviewing an earlier draft. Most importantly, we’ve come across these ideas thanks to many clever people in the effective altruism movement trying to understand the world so that they can improve it. The post has been extensively updated based on some excellent comments – thanks to those contributors too. 

December 30th, 2015 | Rationality | 53 Comments

About the Author:

I’m the Director of Coaching at 80,000 Hours, a non-profit that provides advice to talented graduates on how to have an impactful career; and a Founding Director of Effective Altruism Australia, a non-profit that has raised $700,000 for evidence-based, high impact global health charities.

53 Comments

  1. Pablo Stafforini December 31, 2015 at 10:24 am - Reply

    Excellent list. For more useful concepts, see this thread.

  2. Monica Anderson December 31, 2015 at 11:03 am - Reply

    Selectionism – The only sources of useful novelty in the universe are processes of variation and selection.

    Examples are the evolution of species in nature, the cars available for us to purchase, or ideas in the minds of people.
    Neural Darwinism is the theory that in our minds, millisecond to millisecond, there is an evolution-like competition between ideas. The best ideas breed with each other, generating new variant ideas to be evaluated. Every sentence you speak is the result of a competition between thousands of candidate sentences over hundreds of generations.

    Ref: Gary Cziko: “Without Miracles”, Donald T Campbell, Gerald Edelman, and numerous books by William Calvin.

  3. Monica Anderson December 31, 2015 at 11:25 am - Reply

    Model Free Methods – An alternative to Reductionist Science to use when Reductionist Models cannot be created or used.

    Reductionism is the use of Models, such as formulas, equations, theories, hypotheses, scientific models, naïve models, or computer programs. Models are simplifications of our rich reality that allow reasoning and computation to occur.

    In so-called “Bizarre Systems” such models cannot be made or used. Examples are understanding of human language, the global economy and stock market, the function of the brain as a whole, and the totality of human physiology. Our mundane everyday world is also a bizarre system. Science can only be applied to small parts of reality at a time.

    Model Free Methods (MFM) allow us to create systems with limited powers of prediction in situations involving Bizarre Systems. Big Data and Neural Networks (especially Deep Learning) are examples of situations where Model Free Methods are often used. MFMs can be partially immune to missing, incorrect, ambiguous or otherwise dirty input data. They can be self-repairing and can provide desirable emergent effects such as abstraction, saliency, autonomous reduction, emergent robustness, disambiguation, and self-repair. They are AntiFragile systems, since most of these methods have mechanisms to learn from their mistakes.

    Over a dozen primitive MFMs have been identified so far. Generate-and-test, enumeration, remember failures, remember successes, table lookup, pattern matching, mindless copying, adaptation, evolution, narrative, consultation, and markets are such primitive methods which can be combined in many ways to generate an infinite number of Model Free Systems (at various levels) with these properties.

    The Mind is a Model Free System. It learns from its mistakes.

  4. Elliot December 31, 2015 at 11:57 am - Reply

    A few of these are tangentially referenced in some items on your list, but I thought they deserved their own item.

    -There’s an evolutionary explanation for most important things about how we think and behave.
    -Incentives matter
    -Happiness treadmill
    -Almost everyone’s behavior is dominated by status-seeking and status-signaling
    -We are often wrong (sincerely) about why we do things
    -Many things in life depend on your most successful attempt(s), not your average

  5. Romeo Stevens December 31, 2015 at 12:19 pm - Reply

    Beware what you wish for

    Biases
    Affect heuristic
    Attribution Substitution
    Apophenia
    Approach/avoid
    Ambiguity avoidance/effect (variance avoidance)
    Hard work avoidance
    Low-status avoidance
    Halo effect/affiliation
    Availability heuristic
    Focusing effect
    Fundamental attribution error (braitenberg vehicles)
    Introspection illusion
    Narrative bias
    Neurodiversity (typical mind fallacy)
    Nominal fallacy
    Representativeness heuristic
    Scope insensitivity
    Semmelweis effect
    Sunk costs
    Survivorship bias
    Status quo/absurdity
    Selection effects
    Confirmation bias
    Base rate neglect
    Proximity bias
    Temporal sensitivity
    Incoherent discount rate
    Geographic sensitivity

    Data biases
    Misweight because came first
    Misweight because familiar
    Misweight because simpler
    Misweight because of source

    Momentum vs lightness
    Bias gradients
    Show vs Is
    Be vs Do
    Us vs Them
    Now vs later
    Talk vs Bet
    Prestige vs Results
    Endorse vs Record evals
    Diversify vs Focus
    Clump vs Spread
    Participate vs Support
    Dramatic vs Marginal

    Hindsight and prehindsight
    Identity-Consistency

    Business analysis concepts
    Bottleneck (Theory of Constraints)
    Prerequisite tree
    Critical path
    Diffusion of innovation
    Gartner hype cycle
    Key Uncertainties
    Market segmentation
    Moat
    Network effects
    Proxy measure ratios
    Scenario planning (matrices)

    Communication concepts
    Affordance
    Ask-guess-tell culture
    “Because”
    Body language energy
    Open vs closed posture, taking up space
    Check-in
    Common knowledge
    Denotation and Connotation
    Dominance, communality, or reciprocity relationships
    Framing/Reframing

    Love languages
    Service
    Touch
    Words of affirmation
    Gifts
    Quality time

    Mean world syndrome
    Non-violent communication
    Pointing at the moon
    Straussian reading

    Dual process theories
    Back chaining vs forward chaining
    Effectuation
    Deduction vs induction
    Farmer vs forager
    Fox vs hedgehog
    Near vs far
    Open vs closed modes
    Exploration neglect
    System 1 vs system 2
    Systems vs goal
    Zero vs positive sum thinking

    Economic concepts
    80:20 rule, low hanging fruit
    Amortization
    Arbitrage
    Comparative advantage
    Critical thresholds in nonlinear systems
    Efficient markets
    Elasticity
    Externalities
    Frictional costs
    Information asymmetry
    Marginal Thinking
    Diminishing returns
    Increasing returns
    Moral hazard
    Net present value
    Opportunity cost
    Pareto improvement
    Path dependence
    Principal agent problem/Lost purposes
    Production-possibility frontier
    Key trade-off analysis (synthesis of form)
    Regression to the mean
    Risk tolerance
    Stated and revealed preference
    Time/money value of time/money
    Tragedy of the commons
    Variance

    Epistemic concepts
    A priori, a posteriori
    Counterfactual simulations
    Epistemological rigor->methodological rigor (expert on experts)
    Essentialism
    Evidence/update proportionality
    Fully general counterargument
    Kayfabe
    Leaky abstraction
    Levels of abstraction
    Moral uncertainty
    Moral trade
    Natural kind (cutting at the joints)
    Ontological crisis
    Positive vs normative
    Post hoc ergo propter hoc
    Rationalism/empiricism
    Reference class forecasting (Inside view/outside view)
    Regression to the mean
    Universal prior (epistemic luck)
    Upstream/downstream
    Wastebasket taxon

    Evolutionary psychology concepts
    Adaptation executor not fitness maximizer
    Dunbar’s number
    Handicap principle
    Moloch
    r/K selection
    Red queen

    Game theory concepts
    Expected value
    Free riders
    Nash equilibrium
    Precommitment
    Prisoner’s Dilemma

    Habit formation concepts
    Affordance (brain shaped, overlearning)
    CAR model
    Fragile vs antifragile (hormesis)

    Math concepts
    Derivatives
    Graphs
    Breadth vs depth first search
    Degree
    Directed graph
    Weighted edges
    Over/underdetermined
    State machine

    Optimal brainstorming
    Internal censor
    OODA loop
    Premature decisions
    Premature sharing

    Permuting concepts
    Inversion
    Locked vs unlocked dials
    Locked to each other (unblending variables)
    Redescribing to self or others (rubber ducking)
    Separation and recombination (both append and insert)
    Structured entropy injection

    Political concepts
    Arrow’s impossibility
    Median voter theorem
    Overton window
    Public choice theory
    Realpolitik

    Programming concepts
    Best/worst/average case
    Causal graph
    Functional programming (mental heuristics)
    Greedy algorithm
    Inner loop/outer loop
    Parallelization
    Type error

    Psychotherapy concepts
    Archetype
    Big five personality traits
    Cognitive behavioral therapy
    Internal family systems
    Justifiability
    Liking vs wanting
    Locus of control
    Medicalization (deprivation of agency)
    Moral Foundations theory
    Subject object distinction
    Superstimulus
    Workbooks

    Rationality concepts
    5 second skills
    Aether variables
    Attribution substitution
    Brainstorming questions/problems rather than answers
    Calibrated scaffolding (i.e build as much scaffolding as necessary and no more)
    Chesterton fence
    Cognitive dissonance avoidance
    Impression management
    Confabulation
    CoZE
    Dependency hell (goal or aversion factoring)
    Endorse on reflection
    Exploration neglect
    Schlep blindness
    Low status blindness
    High variance blindness
    Focused grit
    Holding off on solutions
    Increasing marginal utility of attention (monkey mind)
    Inner simulator
    Instrumental/terminal values
    Making beliefs load bearing
    Napkin math
    Offline training
    Proving too much (non-falsifiability)
    Schelling fence
    Self-signaling/self trust
    Signaling/Countersignaling/Meta Contrarianism
    Stealing cheat codes (opposite of not invented here)
    Straw/steelman
    System 1 can’t do math
    Taboo words
    TDT
    Trivial inconvenience
    Ugh fields
    Urge propagation
    Valley of bad X (tends towards hedgehogging)
    Value of Information (xkcd chart)

    Self Help/Motivation concepts
    Can’t bullshit yourself
    Carrot and stick
    Decision fatigue (microdecisions)
    Deliberate practice
    Feedback loop tightness
    Hamming question
    Lowering cognitive overhead (trivial inconveniences)
    Mental contrasting
    Moral licensing
    Motivation equation
    Next action
    Pomodoro distraction log
    Power-law world vs Normal world
    Self care
    Stoicism
    Control Calibration
    Daily Plan
    Daily Review
    Meditation
    Negative visualization
    Self Denial
    Trigger Action Plan
    Zeigarnik effect

    Statistics concepts
    Common distributions (bimodal, normal, log-normal, power law, etc.)
    Confidence Interval
    Confusion matrix
    Dimensionality reduction/Clusters in thing space
    Effect size
    Forest plot
    Funnel plot
    Good vs bad proxies
    Local and global maxima, hill climbing
    Power
    Precision and accuracy
    Regression
    Sample size
    Searching under the streetlight
    Signal/noise
    Tail miscalibration/model uncertainty
    Type I and II errors

    Sticky concepts
    Inception
    Memetics
    Pattern completion
    Simplicity, unexpectedness, concreteness, credibility, emotions, stories
    Virality coefficient
    Why/How/What hierarchy

    Structured analytic techniques
    Optimal brainstorming
    Cross impact analysis
    Key assumptions check
    Trend analysis
    Alternative hypothesis generation
    Scenario/evidence matrix
    Foxy elimination
    Challenge
    Red team
    What if
    Premortem

  6. Romeo Stevens December 31, 2015 at 12:20 pm - Reply

    Your comment system deletes formatting -_-

  7. Taymon A. Beal December 31, 2015 at 3:23 pm - Reply

    Two that I’m a fan of are competing access needs and memetic prevalence debates.

  8. Rainmount December 31, 2015 at 4:26 pm - Reply

    Great article, but I hope you’ll fix a few confusing mistakes.

    #14: Time value of money: “money today is worth less than money in the future”
    This is backwards. Money today is worth more.

    #15: Prisoner’s Dilemma: “both parties not cooperating leads to the worst outcome of those available”
    Second worst. The worst outcome is cooperating while the other party defects. Unless you mean socially worst outcome, but that’s not a theorem.

    #20 Aumann’s Agreement Theorem: “if two rational agents disagree, at least one of them is wrong, and it might be you”
    This can’t be Aumann’s agreement theorem because it’s equally true for irrational agents.

    That’s all I have time for now. Thanks for making this list.

  9. Tyler Hansen December 31, 2015 at 5:53 pm - Reply

    Great list! I have a few concepts that I think you’d find useful:

    1. Finite Games. People have titles because everyone competing for it agrees that they won the competition for it. If you understand the process that grants the title, you know a massively important constraint on the title-holder’s behavior. For example, someone who has the title “Police Captain” will almost never do things like criticize the police, advocate lowering policing budgets, show blatant disrespect for the law, or publicly criticize a respected superior officer. The kind of person that does that sort of thing doesn’t win that kind of title (and if they have one now, they often won’t have one later).

    2. Positive Bias. Looking for things that your model says should happen doesn’t test your model – looking for things that your model forbids does. This is something that people are not naturally good at. See: the 2-4-6 game.

    3. Systems Thinking, especially in high-intuition contexts like social systems. When thinking about something, you have two choices – either you look at the thing as a whole, or you break it down into simpler parts that interact in ways you comprehend. Systems Thinking is making this choice repeatedly on complicated things and the simpler pieces you define. The payoff is that this way of approaching problems consistently leads to being less surprised about what happens when you make changes to complex systems. For instance, NASA projects that spend more of their budget in “Systems Engineering” consistently have less severe budget overruns. Budget overruns don’t map exactly to “system behaves in an unexpected way”, but it’s close enough to convince me of the value.

    4. Personality continuums. There’s often a “best” place to be in terms of assertiveness, amount of eye contact, directness, laziness, and basically any other measure that you can put people on. People will very often make advice based on which direction is better for them, or which direction is better for most people they interact with. This is why there’s contradictory advice on how to do very many things. Personally, I often interpret advice as saying “I’m the kind of person who’d be better off taking this sort of advice”.

    5. Illegibility, as discussed in length in Seeing Like a State.

  10. Matīss Apinis December 31, 2015 at 9:35 pm - Reply

    Suggestions:
    * map and territory;
    * terminal and instrumental goals/values (http://lesswrong.com/lw/l4/terminal_values_and_instrumental_values);
    * epistemic, instrumental, computational rationality;
    * time value of discovery (http://goodreads.com/quotes/7269886-think-of-a-discovery-as-an-act-that-moves-the ; although it’s an instance of counterfactual reasoning)
    * prisoner’s dilemma.

    • Matīss Apinis December 31, 2015 at 9:46 pm - Reply

      Neglected prisoner’s dilemma in the list by thinking about it as the tragedy of the commons.

  11. Peter Garbett January 1, 2016 at 1:10 am - Reply

    Cannot find any mention of sunk costs; people attribute value to things they have put a lot of effort into, not by assessing if they represent the best way forward.

  12. DK January 1, 2016 at 2:02 am - Reply

    Nice article. Some broken links: Aumann’s agreement theorem, Parkinson’s law of triviality. Also the expected value example is in error (10% of 1 billion is 100 million; 90% of 1 million is 900,000; they are not equal).

  13. GregH January 1, 2016 at 3:01 am - Reply

    Neoliberalism: the concept that markets are the basis for pretty much everything, and that the role of government is to define and maintain “free” markets for the advantage of various institutions. To this end, the idea of markets needs to be extended into every realm of human thought.

    https://en.wikipedia.org/wiki/Neoliberalism

    I thought this list was useful, but I think you owe Daniel Kahneman a citation.

  14. Jimmy January 1, 2016 at 3:46 am - Reply

    Goodhart’s law wikipedia link is broken. Correct URL: https://en.wikipedia.org/wiki/Goodhart%27s_law

  15. AdamR January 1, 2016 at 7:35 am - Reply

    Don’t forget the Dunning-Kruger effect:

    https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect

  16. Josh Rehman January 1, 2016 at 7:57 am - Reply

    I particularly like Aumann’s agreement theorem (that disagreement should be taken seriously because one of you might be making a mistake), and (sadly) Social Intuitionism.

    Two of my own:

    The Loyalty vs Principle Tradeoff. You can value your people, or value your principles, but eventually you have to choose. Loyalty seems to win in times of stress (e.g. war) and principle in times of peace. Arguments for and against surveillance fall clearly along these lines, for example.

    The Injustice Threshold. People will accept injustice below a certain threshold, especially if they believe they were somehow at fault. This effect is successfully exploited at scale especially by large commercial financial institutions. If you steal $10 from a million people you’re a genius, but if you try to take $10M from one person you’ll go to jail.

  17. rodrigo January 1, 2016 at 9:06 am - Reply

    Nazis. Have all a happy new year.

  18. Ed January 1, 2016 at 11:13 am - Reply

    Fantastic! Thank you.
    Negative capabilities and
    Whiggish or teleological historical errors are concepts I adore.

  19. Francis Kim January 1, 2016 at 12:26 pm - Reply

    An area I do not know much about – great write up though, thank you.

  20. Peter Donis January 1, 2016 at 1:56 pm - Reply

    The bike shed example is somewhat misstated. In the original (fictional) story from the book Parkinson’s Law, the issue is not that people looking at the design of a nuclear plant spend too much time looking at the bike shed design and not enough looking at things like nuclear safety. The issue is that the committee trying to decide whether various projects should be funded at all spends only about two and a half minutes in approving an expenditure of $10 million on a nuclear reactor, but spends about forty-five minutes arguing about the design of a bike shed, with the possible result of saving some $300. (They then spend an hour and a quarter arguing about whether to provide coffee for monthly meetings of the administrative staff, which amounts to a total annual expenditure of $57, and refuse to make a decision at all, directing the secretary to obtain further information so they can decide at the next meeting.)

    The point being that “bikeshedding” is not (just) about what parts of a project to pay attention to, but which projects to pay attention to. Spend more time and effort paying attention to projects where there is more value at stake.

  21. Peter Donis January 1, 2016 at 2:02 pm - Reply

    The discussion of Aumann’s Agreement Theorem might be a bit too brief. The theorem actually says that if two rational agents disagree *when they both have exactly the same information*, one of them must be wrong. The qualifier is important; it means that you should take disagreement seriously *if* you have reason to believe the other person has significant information that you don’t. That isn’t always the case.

    • Elliot January 1, 2016 at 10:59 pm - Reply

      Peter, I think we can get around the requirement of common priors in Aumann’s Agreement theorem if we think about things properly. See Scott Aaronson’s description:

      “if you’re really a thoroughgoing Bayesian rationalist, then your prior ought to allow for the possibility that you are the other person. Or to put it another way: “you being born as you,” rather than as someone else, should be treated as just one more contingent fact that you observe and then conditionalize on! And likewise, the other person should condition on the observation that they’re them and not you. In this way, absolutely everything that makes you different from someone else can be understood as “differing information,” so we’re right back to the situation covered by Aumann’s Theorem.”

      ….more at http://www.scottaaronson.com/blog/?p=2410

      • Peter Donis January 2, 2016 at 11:34 am - Reply

        I’m not talking about the requirement of common priors; I’m talking about “differing information”. More precisely, “differing information” plus non-monotonicity of reasoning (in other words, a proposition that is established based on priors plus some set of information, can still end up being falsified by further information). I believe that monotonicity is a (possibly implicit) assumption of the theorem.

        It’s also probably worth noting that the assumption of “rationality” in the theorem, while it is not quite the idealized Spock-like caricature that is often presumed, is a very strong assumption. Most, if not all, humans do not meet the assumption; we cannot be relied upon to do correct Bayesian updates in all cases, or even in most cases. That considerably weakens the case for “taking disagreement seriously”. If two people really are idealized Bayesian reasoners, and each one knows the other is, then any disagreement is serious (meaning, should cause a Bayesian update) because both of you are (supposed to be) using the same reasoning process, and the same reasoning process starting from the same premises should reach the same conclusions. But for real humans, there is no reason to think we are all using the same reasoning process, or even closely related reasoning processes. That’s why it’s not enough just to know that someone disagrees with you; you have to know why, so you can estimate how close their reasoning process is to yours, and therefore how much of a reason you have to update based on disagreement.

        • Elliot January 2, 2016 at 10:25 pm - Reply

          Hi Peter. I don’t get your objection about differing information. The theorem explicitly deals with differing information. This differing information is what is condensed into the probability estimates that the two people send back and forth. The reason that treating differing priors as differing information was so mindblowing to Scott Aaronson is that the theorem is designed to handle differing information.

          I don’t think monotonicity of reasoning as you describe it is an assumption of the theorem. It seems like the non-monotonicity of reasoning is a pretty standard view.

          Re: your concern about how you can figure out how rational the person you just met is — I think this is more interesting. It seems how close their reasoning is to yours isn’t the exact criteria — it should be how close both of your processes is to an ideal Bayesian process. If your reasoning processes differ equally from the ideal but in different ways, then you do want to weight their probability estimates the same as your own. Also interesting is how you establish how honest the other person is..

          • Peter Donis January 4, 2016 at 5:34 pm

            Monotonicity is not an explicit assumption, as far as I can see; but it is implicit in a key step in the proof. At least, I think “monotonicity” is the correct term for what is being assumed. The key step is that, once two people, A and B, exchange information, the new set of states of the world that is compatible with the information they have must be the intersection of A’s set before the exchange, and B’s set before the exchange. Since this intersection is unique, A and B must have the same set of states of the world compatible with their information after the exchange–which is equivalent to saying they must have the same probability estimates. But this argument depends crucially on the assumption that new information can only “shrink” the set of states of the world that are compatible with a person’s current information–it can never “grow” it, by adding states to the “compatible” set that were previously considered incompatible. That is the implicit assumption that I am referring to as “monotonicity”, and which I do not think holds in the real world.

            > “It seems how close their reasoning is to yours isn’t the exact criteria — it should be how close both of your processes is to an ideal Bayesian process.”

            In principle, yes. I’m not sure what the impact of this would be in practice.

  22. Lizz January 2, 2016 at 2:04 am - Reply

    It’s funny that you’d include both local vs global optima and Algernon’s law in the same post, since one is evidence of the other’s falsehood – If you understand local vs global optima, you should be able to understand that Algernon’s law doesn’t hold up to scrutiny.

    Evolution, involving incremental changes to the genetic code, has a very very very difficult time moving from a local optimum to the global optimum, because oftentimes the intermediate iterations necessary to transition from point A (a stable, functional, reasonably fit organism) to point Z (an even more stable, more functional, more fit organism) are themselves LESS fit than point A, and are thus out-competed.

  23. Jim Dignity January 2, 2016 at 3:34 am - Reply

    Spontaneous order
    Value of information
    Positive vs negative rights
    Affordances
    Requisite variety

  24. David Orban January 2, 2016 at 4:00 am - Reply

    Daniel Dennett’s “Intuition Pumps and Other Tools for Thinking” is a wonderful compendium of how to think better.
    https://www.goodreads.com/book/show/18378002-intuition-pumps-and-other-tools-for-thinking

  25. Couts Moseley January 2, 2016 at 4:20 am - Reply

    Subjective Value — the fact that nothing has worth in and of itself but reflects the relevance to the goals of the chooser. A misunderstanding or ignorance of the nature of economic value is behind seemingly compassionate but ultimately destructive ideologies.

  26. Trakums January 2, 2016 at 4:25 am - Reply

    Please remove Godwin’s Law — it is entirely unhelpful.

  27. Charley January 2, 2016 at 6:25 am - Reply

    48. Godwins law seems like a poor choice, and not in keeping with the serious and interesting tone of the rest of the list.

  28. Andy McKenzie January 2, 2016 at 6:48 am - Reply

    Great list! If you’re interested, please check out my list of which cognitive biases we should trust: http://lesswrong.com/lw/csf/which_cognitive_biases_should_we_trust_in/

  29. Nebu January 2, 2016 at 8:54 am - Reply

    I suspect all of your links which contain % are broken, due to a double escaping. For example, your link to Chesterson’s Fence is https://en.wikipedia.org/wiki/Wikipedia:Chesterton%2527s_fence when it should be https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence and your link to Algernon’s Law is http://www.gwern.net/Drug%2520heuristics when it should be http://www.gwern.net/Drug%20heuristics

  30. 27chaos January 2, 2016 at 8:58 pm - Reply

    Berkson’s paradox
    Simpson’s paradox
    Duhem Quine thesis
    Granularity/levels of analysis
    Renormalization

    • Romeo Stevens January 10, 2016 at 9:19 am - Reply

      Awesome, I had never heard of Berkson’s paradox. Thanks!

  31. Karthik January 2, 2016 at 11:24 pm - Reply

    I would add “Transaction Cost”. For example, if a gym is on the way home, a person would probably be more likely to/frequently workout. A reverse example may be doing taxes, especially for the first time. A requisite amount of knowledge may be required to even start taxes.

  32. Beautiful Freak January 3, 2016 at 2:27 am - Reply

    I’ve noticed a behavior I call the Breadcrumb Effect. Sometimes it happens that if you toss a handful of breadcrumbs to a group of birds, they will all choose a single morsel to fight over and ignore the rest. Humans are like that too. Some people say that women don’t notice a guy until he’s seen with another woman, and then rate him according to the level of jealousy she provokes. That’s probably sexist, but art collecting works that way. In business, products are differentiated to compete in established categories, but the differences are superficial, so potential demand for the truly new goes untested, a sea of bread. (Think of phones with buttons competing before the iPhone came along.) The psychology of it, I think, relies on a perception of scarcity, almost a preference for the conditions of scarcity. It’s easier to have a strategy in life if there are a limited number of things to pursue, fewer choices. It’s easier if we can see what everybody else wants, and then try to get it for ourselves instead. That kind of tunnel vision comes automatically, so we ought to remind ourselves frequently that we’re not seeing all the possibilities, and that the urge to fight over breadcrumbs might signify there’s bread everywhere.

  33. Anna January 3, 2016 at 9:07 am - Reply

    I disagree with what seems to be a key premise of the article you posted about the Algernon Argument: the idea that, if evolution hasn’t produced higher IQ’s, there must be an unfavorable trade-off in producing higher IQ’s. We already know this isn’t true, just from that fact that IQ has a natural distribution with a small number of high-IQ people occurring in our population with no physiological problems. Why aren’t there more of them? Because we lack a strong enough selective pressure to favor higher IQ. People with average intelligence do just fine. The exceptionally high-IQ are rare and their procreation depends on many other factors not related to IQ. Evolution doesn’t make us optimal, it only makes us good enough to get by. We are as smart as our ancestors needed to be in order to produce offspring who produce offspring.

  34. Chris Leong January 4, 2016 at 10:48 pm - Reply

    Ask culture vs guess culture – http://www.thewire.com/…/2010/05/askers-vs-guessers/19730/

    Growth vs fixed mindset (despite the fact Scott Alexander disagrees with it https://www.brainpickings.org/…/01/29/carol-dweck-mindset/)

    Superweapons – http://slatestarcodex.com/…/12/weak-men-are-superweapons/

    Phatic and anti-inductive – http://slatestarcodex.com/…/the-phatic-and-the-anti…/

    Least convenient possible world – http://lesswrong.com/…/the_least_convenient_possible…/

    Epistemic learned helplessness – http://squid314.livejournal.com/350090.html

    Chinese robber fallacy – http://slatestarcodex.com/…/cardiologists-and-chinese…/

    PETA effect – http://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/

    Bingo cards – http://squid314.livejournal.com/329561.html

    Falsifiability – https://en.wikipedia.org/wiki/Falsifiability

    Hume’s Is-Ought distinction – http://plato.stanford.edu/entries/hume-moral/#io

    Socratic method – https://en.wikipedia.org/wiki/Socratic_method

  35. Poker Chen January 5, 2016 at 9:23 am - Reply

    If it’s only one thing, I propose that the following phenomena be added:

    There exist mistakes of disbelief along the lines of “Why did I get cancer when it’s only 1 in 100,000?”, or “Why did I lose my house to a flash flood?” The question should have been “What are the chances of me getting any disease of low probability, or losing my house to any cause?”, because the person would have said the exact same thing if any single one of the low-probability events had occurred. The “correct” underlying quality being examined is health and well-being, but it was disguised by the particular event X. This is related to Peltzman effects and pattern-seeking behaviour (in life), and how people consistently underestimate the ‘clumpiness’ of random data. (A quick sketch follows below.)

    Is that a known effect, or should I call it “Blame the tree you drove into” (誚樹忘林)?
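
    A rough back-of-the-envelope sketch in Python, using made-up illustrative rates and assuming independence, shows why “some rare bad thing happens to me” is far more likely than any single rare thing:

    ```python
    # Illustrative, made-up annual probabilities for independent rare events.
    rare_event_probs = {
        "specific cancer": 1 / 100_000,
        "flash flood": 1 / 50_000,
        "house fire": 1 / 20_000,
    }

    def prob_any(probs):
        """P(at least one event) = 1 - P(none), assuming independence."""
        p_none = 1.0
        for p in probs:
            p_none *= (1 - p)
        return 1 - p_none

    single = rare_event_probs["specific cancer"]
    # With ~1,000 independent 1-in-100,000 risks, the chance that *some* one
    # of them hits is roughly 1%, a hundred times the single-event risk.
    many = prob_any([1 / 100_000] * 1_000)

    print(f"P(this specific rare event): {single:.6f}")
    print(f"P(any of 1,000 such events): {many:.4f}")
    ```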

  36. Greg January 5, 2016 at 10:28 pm - Reply

    Typo: Dawkin -> Dawkins

  37. pargo January 7, 2016 at 1:12 pm - Reply

    Loved this list, would be interested in seeing another like it!

  38. Stephen Warren January 8, 2016 at 2:53 am - Reply

    I’m curious about the value of time. My pay rate is only relevant to the question of the $10-saving trip across town if I in fact have the option of working an extra hour. What if I don’t? If I make $20 an hour but can’t work extra, then what’s that got to do with whether or not I ought to spend an hour saving $10? How do I compare an hour saving $10, or $20, or whatever, to, say, reading about 52 concepts?

  39. Tim January 8, 2016 at 12:25 pm - Reply

    I think I would have added some concepts from optimization: energy minimization, constraints, attractor states. Great list, thanks.

  40. Timo January 21, 2016 at 5:41 am - Reply

    Reductionism – the idea that there are many levels of description but only one reality. It may be obvious to some, but it had to be learned.

  41. RS Love March 3, 2016 at 7:25 am - Reply

    Hi,

    Very stimulating and thought-provoking list. Naturally, I was biased to check whether Moore’s Law was listed, given its profound impact on just about everything we’ve seen in technology advancement in the 21st century. For further consideration or amusement: Asimov’s Three Laws of Robotics, Feynman diagrams, Arthur C. Clarke’s “Any sufficiently advanced technology is indistinguishable from magic”, and, naturally again, atomic theory, which has since morphed into the Standard Model with its Higgs boson and other unseen subatomic particles. Not sure which discipline generates the more interesting models and concepts: theoretical physics or behavioral economics. In that case, the outlier or Black Swan concept stands alone.

    RS Love
    Palo Alto, CA

  42. E. Oxenford August 17, 2016 at 9:44 am - Reply

    Excellent list. One observation: The example used in “Social intuitionism” commits exactly the error it purports to illustrate.

    • Peter McIntyre August 20, 2016 at 8:49 am - Reply

      Wait, so you’re saying that slavery wasn’t morally wrong? Otherwise I’m not sure what you mean.

  43. Armando September 25, 2016 at 4:42 pm - Reply

    A series of unfortunate events led me to become stronger, both by studying and by training myself in rationality.

    This is the list of rules I wrote for myself, as reminders against my own stupidity.

    Here are the lists:

    Against Akrasia

    • Deciding in Advance

    It is tempting to delay the act of choosing as much as possible. Some plans might call for delayed action, but never for delayed choosing. That is mere hesitation. If you need extra information to make a decision and you know which information you need, you can decide in advance what you would do once you had that extra information. Never delay a choice.

    • Hesitation is always easy, rarely useful

    When confronted with the time to act, hesitation might manifest in one form or another. Sometimes it manifests as mere laziness, other times as an overwhelming fear and more. Notice whenever you are hesitating and act immediately. If you had already decided, within a calmer state of mind, that this was the right course of action, defer to your past self, ignore your mental doubts and just act.

    • Notice Procrastination

    Procrastination is an unwinnable mental battle. You have to be able to notice whenever you are procrastinating. When you do notice you are procrastinating, just go do whatever it is you are delaying immediately and don’t think any more. No, seriously, GO!

    • Trivial Inconveniences

    Humans let go of many decisions and even benefits when an inconvenience, however trivial, stands in our way. Our mind is averse to effort and finds ways to convince us to procrastinate. This is how some European countries raised their organ donation rate to almost 100%: a law was passed that made everyone a donor by default unless they filled out a simple form at a government office. Beware of trivial inconveniences and stop delaying.

    • Trivial Distractions

    When engaged in a task that requires effort, we are prone to becoming easily distracted by our immediate environment as a way to procrastinate. Something as simple as answering an instant message or quickly checking a website can trigger a large episode of procrastination. Notice these impulses and ignore them. Turn off mobile devices if necessary and, if you find yourself already within a procrastination episode, disengage from it as soon as you are aware of it.

    Basics

    • Agency

    Roles, emotions, impulses and biases can hijack your train of thought, making you believe you are thinking and acting by your own volition when in fact you are not. You must always keep in mind to question whether you are acting with agency or whether something else is motivating your cognition.

    • Patience

    Our own subjective experience makes us believe our problems and issues are bigger than they really are, and we are programmed to seek immediate resolutions. Our brains are hardwired for instant gratification, and long-term gains are not intuitively understood. You must abide by hard statistical reasoning and always think long-term. For this you must embrace patience and, if you realize you have no agency at the moment, wait until you have a clear head, overcoming the need for an immediate resolution.

    • The Sunk-Cost Fallacy

    This economics fallacy runs deeper into human nature than I previously suspected. Quirrell’s insight of “learning to lose” and even the Buddhist teaching of “letting go” are both instances of overcoming the fallacy. Learn to accept reality as it is, for throwing a tantrum won’t change anything, and keep in mind that costs already spent are sunk. Never let a sunk cost dictate your behaviour.

    • Make your beliefs pay rent

    You must look past the words of a belief and notice which experiences it predicts or, better yet, prohibits. Never ask what to believe but what to anticipate. You must ask what experiences constrain the belief, and what event would definitely falsify it. If the answer is null, it is a floating belief. If you are equally good at explaining any outcome, you might have powerful rhetorical skills but zero knowledge.

    • Learning from History

    The history of the world is filled with mistakes that get repeated over and over. Subtle flaws in human reasoning are responsible for this cyclical phenomenon. Every mistake contains a hidden lesson to be learned, as most of them get repeated, albeit in a different fashion depending on the time period and cultural background. The élan vital seemed plausible to people as smart as you. Short-term history, like the personal life experiences of people around you, and biographies also contain lessons to be learned. It is very important to grasp the stupidity of ordinary people, to see which of their follies you have imitated and which others you could avoid. Be genre savvy and avoid the clichés of history.

    • Shut up and do the Impossible!

    Always play to win. If your mind believes something is impossible, you might try only for the sake of trying. You can succeed in the endeavor of making an extraordinary effort without winning. You must try your best under the assumption that the problem is solvable. You must also keep in mind that whenever you feel like you have maxed out, in all probability you still have a lot more to give if only you focus and muster the required mental power. It is recommended never to give up until, at the very least, you have actually tried for five minutes, by the clock.

    • Heuristic Decisions

    Whenever you are hesitant about a decision, force yourself to decide in advance. If you can guess your answer, then, in all probability, your System-1 has already decided. Once an idea enters your mind it stays there, for we rarely change our minds. Don’t accept intuitions as definitive answers; make an actual effort to think for yourself.

    • Fallacious Wisdom

    Every human culture is permeated with wisdom and advice. The elderly and the experienced are seen as valid sources of lessons. However, since the human mind is biased to generate fallacious causal explanations to connect random events into a coherent story, a lot of this wisdom is false and biased. Lessons imparted by wise old men and successful icons might be the product of their minds cleverly rationalizing a story that ignores the factor of luck, giving more weight to merit and hard work. There are powerful lessons hidden amid the sea of fallacies, but remember: “A powerful wizard must learn to distinguish truth from among a thousand plausible lies.”

    • The premortem

    When planning, imagine a worst-case scenario as something that has already happened and then force yourself to come up with an explanation as to why it happened. This was originally envisioned as a technique for minimizing bias when making collective decisions, but it is also quite useful for an individual. The idea is to put our uncanny capacity for rationalization and causal storytelling to rational use and realize all the perils that our own subjective experience ignores.

    • Uncertainty as a Virtue

    System-1 constantly scans for signs of certainty in the body language of others, and when it is itself uncertain it is biased to believe whoever appears certain. This has translated into a culture that penalizes uncertainty and incentivizes an appearance of certainty. Even in their private mental life, people reject uncertainty and look for certainty even where there might be none. Inadequate appreciation of the uncertainty of the environment inevitably leads economic agents to take risks they should avoid. However, optimism is highly valued, socially and in the market; people and firms reward the providers of dangerously misleading information more than they reward truth tellers. This might be good for social relations but terrible for gaining accurate beliefs. Given that we live in an uncertain world, an unbiased appreciation of uncertainty is a cornerstone of rationality. Be wary of certainty and realize that uncertainty is a virtue, not a vice.

    Emotional Bias

    • Notice emotionally biased thoughts and impulses

    You must keep diligent watch over your own mental processes to notice whenever agency is gone and emotion or intuition is biasing your perception. Whenever you realize you are emotionally biased, you must say out loud “I notice I am emotionally biased” and restrain all courses of action until you have a clear head. Sneaky emotions like anger, jealousy and helplessness can make you think you are thinking and acting with agency, which is why you must notice them and restrain yourself.

    • Noticing Motivated Cognition

    Beyond regular emotional bias, there are other instances in which you have no agency but believe you do. A flaw in your self-concept or a desire to confirm a belief can also constrain your mind and motivate it to think and act within a biased reasoning space. The only defense against motivated cognition is meta-cognition. Always observe your mental processes and ponder whether there are any hidden motivators behind them.

    • Removing Emotional Complexity

    Our own subjective experience makes us see our life problems as bigger than they really are. Most of these issues look like nothing more than passing moments in hindsight. This is because we add layers of emotional complexity and lose sight of what is really there. To be able to see the situation for what it is, you must adopt a reductionist perspective, trying to see it with external eyes. See it as simply as you can, as if you were assessing the situation of a stranger.

    • Not one unusual thing has ever happened

    Feelings of surreality and bizarreness are products of having poor models of reality. Reality is not inherently weird. Weirdness is a state of mind, not a quality of reality. If at any point some truth should seem shocking or bizarre, it’s your brain that labels it as such. If the real world seems utterly strange or surreal to your intuitions — then it’s your intuitions that need to change. The map is not the territory.

    Reasoning

    • Base-rate neglect

    Human minds can’t intuitively grasp statistical reasoning, and because of that we live in constant self-deception. System-1 ignores statistical facts and instead focuses on its own causal and emotional reasoning, which gives people’s System-2 the impression of accurate judgment. For more accurate Bayesian reasoning, you must do your best to anchor your judgments and predictions to the base rate (the mean) and from there adjust with other information, always being wary of the quality of the evidence.
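
    A minimal worked example, with hypothetical numbers, of anchoring on the base rate before adjusting on the evidence:

    ```python
    # Hypothetical numbers: a condition with a 1% base rate and a test that is
    # 90% sensitive and 90% specific. Intuition focuses on the "90% accurate"
    # test; Bayesian reasoning starts from the 1% base rate.
    base_rate = 0.01             # P(condition)
    p_pos_given_condition = 0.90   # sensitivity
    p_pos_given_healthy = 0.10     # false-positive rate (1 - specificity)

    p_positive = (base_rate * p_pos_given_condition
                  + (1 - base_rate) * p_pos_given_healthy)
    p_condition_given_positive = base_rate * p_pos_given_condition / p_positive

    print(f"P(condition | positive test) = {p_condition_given_positive:.2%}")
    # ~8.3%: far below the intuitive "around 90%", because the base rate dominates.
    ```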

    • Regression to the mean

    This is a basic law of nature that the human mind is inherently designed to overlook and deny. Extraordinary events in any given scenario are mostly statistical anomalies. Statistics show that extraordinary events regress to the mean. The mind, designed for causal rather than statistical reasoning, creates fallacious narratives to explain the causes behind extraordinary events, creating biased beliefs that generate false predictions. When these predictions inevitably turn out to be false, a new causal explanation is generated to maintain the false belief. Always remember that regression to the mean is a basic law of nature when making judgments. Regression to the mean has an explanation but not a cause, so avoid the temptation to accept causal narratives as facts.
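
    A small simulation, purely illustrative and with made-up skill and luck parameters, of why extreme results tend to be followed by more ordinary ones even when nothing causal changes:

    ```python
    import random

    random.seed(0)

    # Toy model: each "performance" = stable skill + pure luck.
    def performance(skill):
        return skill + random.gauss(0, 10)

    skills = [random.gauss(50, 5) for _ in range(10_000)]
    first = [(s, performance(s)) for s in skills]

    # Take the top 1% performers on the first try and measure their second try.
    top = sorted(first, key=lambda x: x[1], reverse=True)[:100]
    first_avg = sum(p for _, p in top) / len(top)
    second_avg = sum(performance(s) for s, _ in top) / len(top)

    print(f"Top performers, first attempt: {first_avg:.1f}")
    print(f"Same people, second attempt:   {second_avg:.1f}")
    # The second attempt is noticeably lower: the luck component doesn't repeat,
    # so scores regress toward the mean without any causal story needed.
    ```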

    • Occam’s Razor

    Occam’s Razor indicates that the simplest adequate explanation is the most probable one. Adding complexity to a model can lead to biases like the conjunction fallacy; however, this is not intuitive. Since humans have trouble seeing that the particular can be no more probable than the general, we are prone to believe more in a more detailed model, even though the added detail makes it less probable.

    Always remember that in science complexity is costly and must be justified by a sufficiently rich set of new and (preferably) interesting predictions of facts that a simpler model or existing theory cannot explain.

    Remove layers of complexity, and remember that the length of a verbal statement is not in itself a measure of complexity. Ponder what must be true in order for the theory to be true, to uncover hidden complexity.
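
    The conjunction point reduces to a one-line probability check. Borrowing the classic “Linda the bank teller” framing with made-up numbers: each added detail can only multiply in another factor of at most one.

    ```python
    # Made-up illustrative probabilities.
    p_bank_teller = 0.05               # P(A): plain, general statement
    p_feminist_given_teller = 0.30     # P(B | A): the extra detail

    p_teller_and_feminist = p_bank_teller * p_feminist_given_teller  # P(A and B)

    # The detailed story can never be more probable than the plain one.
    assert p_teller_and_feminist <= p_bank_teller
    print(p_bank_teller, p_teller_and_feminist)   # 0.05 vs 0.015
    ```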

    • Overconfidence

    People are prone to be inherently overconfident in their predictions and their model of the world. The origin of this biased perception is mostly that System-1 gets its subjective feeling of confidence from the cognitive ease associated with the coherence of an explanation. We are designed for causal explanations, not statistical reasoning, which makes System-1 suppress feelings of uncertainty. To avoid overconfidence, be wary of the coherence of a story and focus instead on the quality of the evidence. Remember to make beliefs pay rent.

    • Mysterious Answers

    When thinking about a mystery and trying to devise a solution, the probable solution must make the mystery less confusing. It is not enough that the proposed hypothesis is falsifiable; it must also destroy the mystery itself. Because if it doesn’t, then why should your hypothesis have priority over any other?

    • Notice Confusion

    Notice whenever an explanation doesn’t feel right. Pay close attention to this tiny note of confusion and bring it to conscious attention. Never try to force the explanation for that would be a rationalization. Say out loud “I notice that I am confused” and understand that either the explanation is false or your model is wrong. Explanations have to eliminate confusion.

    • Defer to more rational selves

    A technique for making rational decisions or detaching from your own opinion is to model what someone else would do in that situation. It is recommended that the model you use be of someone you believe to be more rational than you. This forces your mind to make a decision from a fresher perspective.

    • Rationality vs. Rationalization

    Rationality gathers evidence before proposing a possible solution. Rationalization flows in the opposite direction: it gets fixated on a conclusion and then looks for arguments to justify it. This is why the power behind hypotheses lies in what they predict and prohibit. It is important to avoid rationalizations and make beliefs, models, explanations and hypotheses pay rent.

    • Positive Bias and Confirmation Bias

    Our intelligence is designed to find clever arguments in favor of existing views, which makes us inherently biased towards confirmation rather than falsification. It takes conscious effort to look for evidence against our thoughts and beliefs. Even people who are looking to test whether a belief is true fall for this bias in the shape of positive bias, where they design experiments to confirm rather than falsify their theory.

    You must make a conscious effort to overcome the aversion against conflicting evidence and learn to look towards the darkness. You must make your beliefs pay rent.

    • Optimistic Bias

    Humans are optimistic to the point of it being a cognitive bias. System-1 makes people feel at less risk of experiencing a negative event compared to other people. This is born from people being too trusting of their own subjective experience and neglecting base rates. People rationalize away risks and defeats in order to keep a coherent narrative within their own minds. You have to lean pessimistic when your intuitions conflict with hard statistics, and remember you are not exempt from risks. Statistics are always more accurate than your own subjective experience.

    • Update incrementally

    Even when accepting conflicting evidence as true, our intuitions tend to remain the same as they were. This means people generally fail to update their beliefs based on new incoming evidence. Conflicting evidence is then rationalized away in order to maintain the mental status quo and retain the belief.

    Avoid this mistake by making an effort to update based on new evidence. Every piece of evidence should shift your belief upwards or downwards in probability. Stop and reflect when receiving new evidence, to avoid acting or thinking the same way as before.
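
    A tiny sketch, with hypothetical likelihood ratios, of what “shifting a belief up or down with each piece of evidence” can look like in odds form:

    ```python
    def update_odds(prior_prob, likelihood_ratios):
        """Multiply prior odds by each piece of evidence's likelihood ratio."""
        odds = prior_prob / (1 - prior_prob)
        for lr in likelihood_ratios:
            odds *= lr          # >1 supports the belief, <1 counts against it
        return odds / (1 + odds)

    # Start at 50/50; two pieces of supporting evidence, one piece against.
    posterior = update_odds(0.5, [3.0, 2.0, 0.5])
    print(f"Posterior probability: {posterior:.2f}")   # 0.75 -- not 0.5, not 1.0
    ```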

    • Do not avoid a belief’s weakest points

    People who start doubting a cherished belief tend to do so from a strong position, one where they are already proficient at counter-arguing in order to reaffirm the belief. When pressed to question the belief at one of its weak points, people flinch away in pain and stop thinking about it.

    In order to successfully doubt, you must try to find the weakest points in a belief. Deliberately think about whatever hurts the most. Don’t rehearse standard objections whose standard counters would make you feel better. Ask yourself what smart people who disagree would say to your first reply, and your second reply. Whenever you catch yourself flinching away from an objection you fleetingly thought of, drag it out into the forefront of your mind and confront it.

    • The Outside View (Adjusting predictions from the baseline)

    There is a prevalent tendency to underweight or outright ignore prior statistics when making predictions, which leads to many mistakes in forecasting. This is especially true when people have specific information about an individual case, as they don’t feel that statistical information can add any more value to what they already know. The truth is that statistical information carries more weight than specific details. A specific instance of this is the planning fallacy. You must adopt the outside view diligently: check prior statistics and adjust from that baseline with any further evidence.
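
    One simple way to sketch this, with entirely hypothetical numbers and an assumed weighting, is a forecast that starts from the base rate and gives the inside view only partial credit:

    ```python
    # Hypothetical numbers for a project forecast: similar past projects (the
    # base rate) took 12 months on average, the inside-view estimate says 6
    # months, and we assume the inside view deserves only ~30% of the weight.
    base_rate_months = 12.0
    inside_estimate = 6.0
    inside_weight = 0.3

    forecast = inside_weight * inside_estimate + (1 - inside_weight) * base_rate_months
    print(f"Outside-view forecast: {forecast:.1f} months")   # 10.2 -- anchored on the base rate
    ```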

    • The illusion of validity

    A cognitive bias that makes people overestimate their ability to interpret a set of data and accurately predict the outcome. System-1’s pattern-seeking tendency looks for consistent patterns in data sets, seeking a causal explanation where there might be none. Then System-2 uses these impressions to form beliefs and make false predictions. Repeated exposure to this phenomenon in an uncertain environment can make a person fall into a self-deception of expertise, as with political forecasters and stock traders. To avoid this bias, make beliefs pay rent, pay close attention to base rates, and assess the uncertainty present in the environment you are trying to make predictions about.

    Assessing Risk

    • Loss Aversion and Risk-seeking behavior

    For the human mind, losses are more prominent than gains. This has an evolutionary origin, as organisms that treat threats as more urgent than opportunities have a better chance at survival and reproduction. The “loss aversion ratio” has been estimated in several experiments and is, on average, usually in the range of 1.5 to 2.5. Loss aversion is the source of a wide array of cognitive biases that lead people to make irrational choices when assessing risk, so one must diligently use System-2 to fight it. Even though loss aversion is a prominent fact of human cognition, there are certain instances where humans are biased to be risk-seeking. In general, humans are risk-averse when assessing possible gains but jump into risk-seeking behavior when assessing losses. Economic logic dictates the opposite: we should be risk-seeking when facing possible gains and loss-averse when faced with losses. A failure to abide by this rule results in long-term costs.
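
    A minimal sketch of how a loss-aversion coefficient around 2 makes a fair coin flip feel like a bad deal; the numbers are illustrative, not a definitive model:

    ```python
    LOSS_AVERSION = 2.0   # illustrative value in the 1.5-2.5 range quoted above

    def felt_value(outcome):
        """Prospect-theory-style value: losses loom larger than gains."""
        return outcome if outcome >= 0 else LOSS_AVERSION * outcome

    # Fair 50/50 gamble: win $100 or lose $100. Expected dollars = 0.
    gamble = [(0.5, +100), (0.5, -100)]
    expected_dollars = sum(p * x for p, x in gamble)
    expected_feeling = sum(p * felt_value(x) for p, x in gamble)

    print(expected_dollars)   #   0.0 -> economically neutral
    print(expected_feeling)   # -50.0 -> psychologically it feels like a loss
    ```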

    • Biased to the reference point

    Intuitive evaluations and judgments are always relative to a reference point. This is why room-temperature water feels cold after you have been in warm water, and vice versa. The reference point can be the status quo or an expectation of things System-1 feels entitled to. Anything above the reference point is perceived as a gain and everything below as a loss. This can bias perception, creating risk-averse or risk-seeking behaviors depending on the situation. Be wary of this, as other people set reference points cleverly, anticipating this human bias. A very tangible example of this phenomenon is the endowment effect, where System-1 makes an object feel more valuable merely because you own it. The reference point here is ownership itself: when trying to get rid of the object, the pain of losing it is evaluated instead of its market value. This bias must be fought when dealing with economic transactions.

    • Adjusting to the real probability

    System-1’s emotional intuitions get biased when presented with probabilities. We fail to anchor our emotions to the actual number. There are two specific effects one must be wary of:

    The Possibility Effect: highly improbable outcomes are overweighted by System-1. This is due to loss aversion. When faced with a pair of “losing choices”, System-1 is biased to prefer gambling for huge gains, even at the risk of terrible losses, over accepting a certain loss. This causes a 5% chance to be weighted disproportionately more than a 0% chance.

    The Certainty Effect: highly probable outcomes are underweighted by System-1. This is due to risk aversion. When people are faced with a certain but inferior choice as opposed to a highly probable but superior one, their intuitions are biased to pick the certain option. For instance, the 5% difference between a 95% choice and a 100% one feels like too much of a risk. Whatever the probability presented, we must adjust our decision to that probability according to the expected utilities. This is completely counterintuitive and must be enforced by System-2.
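
    A quick expected-value check, with illustrative stakes, of the certainty effect: the sure thing often wins the gut vote even when the gamble is worth more on paper.

    ```python
    # Choice A: $9,000 for certain.  Choice B: 95% chance of $10,000, 5% chance of $0.
    ev_certain = 1.00 * 9_000
    ev_gamble = 0.95 * 10_000 + 0.05 * 0

    print(ev_certain, ev_gamble)   # 9000 vs 9500
    # The gamble has the higher expected value, yet the certainty effect biases
    # intuition toward the sure $9,000; the decision should track the expected
    # utilities, not the comfort of "100%".
    ```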

    • Denominator Neglect

    When dealing with probabilities, the human mind can neglect the denominator of the probabilities in question, leading to fallacious assessments. This is caused by the mind paying attention only to the numerators and ignoring the denominators when comparing probabilities. How a probability is framed can lead to vivid imagery biasing judgment. An example of this is an experiment where people had to choose which of two urns to draw a red marble from at random (A: 10 marbles, 1 red | B: 100 marbles, 8 red). People chose B most of the time because they focused on the numerator, 8, ignoring that A had a higher chance of success. Denominator neglect leads to the overweighting of rare events. System-1 generates this bias because of the confirmatory bias of memory: thinking about an event, you try to make it true in your mind. A rare event will be overweighted if it specifically attracts attention. Separate attention is effectively guaranteed when prospects are described explicitly (“99% chance to win $1,000, and 1% chance to win nothing”). Obsessive concerns (the bus in Jerusalem), vivid images (the roses), concrete representations (1 of 1,000), and explicit reminders (as in choice from description) all contribute to overweighting.
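
    The urn example reduces to a two-line comparison once the denominators are put back in:

    ```python
    urn_a = 1 / 10     # 1 red marble out of 10   -> 10%
    urn_b = 8 / 100    # 8 red marbles out of 100 ->  8%

    print(f"Urn A: {urn_a:.0%}, Urn B: {urn_b:.0%}")
    assert urn_a > urn_b   # A is the better bet, despite B's more vivid "8 winners"
    ```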

    • Broad Framing as opposed to Narrow framing

    For risk assessment, humans tend to have a form of mental accounting labeled narrow framing, where they separate complex issues and decisions into simpler parts that are easier to deal with. The opposite frame is labeled broad framing, where issues and decisions are viewed as a single comprehensive case with many options. As an example, imagine a list of 5 simple (binary) decisions to be considered simultaneously. The broad (comprehensive) frame consists of a single choice with 32 options. Narrow framing will yield a sequence of 5 simple choices. The sequence of 5 choices will correspond to one of the 32 options of the broad frame, which means the probability of it being the optimal one is low. Issues and problems have to be judged from a broad-framing perspective to avoid biased judgments.
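
    A small sketch, with toy payoffs and an assumed interaction between choices, of why five locally sensible yes/no decisions are rarely the best of the 32 joint options:

    ```python
    from itertools import product

    # Toy problem: five binary decisions with made-up individual payoffs, plus an
    # interaction bonus that only shows up when two "bad-looking" options combine.
    individual_payoff = [3, -1, 2, -2, 1]   # payoff of choosing "yes" on each

    def total_payoff(choices):
        base = sum(p for p, c in zip(individual_payoff, choices) if c)
        bonus = 4 if choices[1] and choices[3] else 0
        return base + bonus

    # Narrow framing: decide each question on its own merits.
    narrow = tuple(1 if p > 0 else 0 for p in individual_payoff)

    # Broad framing: consider all 2**5 = 32 combinations at once.
    broad = max(product([0, 1], repeat=5), key=total_payoff)

    print("Narrow framing:", narrow, "payoff", total_payoff(narrow))
    print("Broad framing: ", broad, "payoff", total_payoff(broad))
    ```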

    • Narrow framing: Short term goals

    People often adopt short-term goals but, to assess how they are doing at those goals, they usually use an immediate goal, which leads to errors in judgment. This is an instance of narrow framing that is easier to grasp with an example. Cabdrivers may have a target income for the month or year, but the goal that controls their effort is typically a daily target of earnings. This is easier to achieve on some days than on others, like rainy days, where taxis aren’t empty for long, as opposed to pleasant weather. The daily earnings goal makes them stop working early when there is a lot of work and makes them work endlessly when fares are low. This is a violation of economic logic: leisure time, when measured in fares per hour, is much more “expensive” on rainy days than on pleasant days. Economic logic would dictate that they maximize their profit on rainy days and treat themselves to leisure on slow days, where it is “less expensive”. Short-term goals must never be measured by immediate goals; the broad-framing view must be adopted in order to maximize utility.
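
    With made-up hourly rates, the cabdriver arithmetic looks roughly like this: the daily-target rule and the broad-framing rule spend the same total hours, but in very different places.

    ```python
    # Illustrative numbers: 10 rainy days ($40/hour of driving) and 10 slow days ($20/hour).
    rainy_rate, slow_rate = 40, 20
    daily_target = 240

    # Narrow framing: stop each day once the $240 daily target is hit.
    rainy_hours = daily_target / rainy_rate    # 6 hours on busy rainy days
    slow_hours = daily_target / slow_rate      # 12 hours of grinding on slow days
    narrow_income = 20 * daily_target                      # $4,800
    hours_worked = 10 * rainy_hours + 10 * slow_hours      # 180 hours in total

    # Broad framing: the same 180 hours, concentrated where they earn the most.
    broad_income = 10 * 12 * rainy_rate + 10 * 6 * slow_rate   # $6,000

    print(narrow_income, broad_income, hours_worked)
    ```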

    • Narrow Framing: Moral Issues

    When faced with possible moral issues that require a financial decision, such as altruistic donations or budget allocation, System-1 substitutes the main economic question with a simpler, emotional one: how strongly do I feel about this issue? Once the mind is armed with the answer to this question, the monetary figure is generated by matching the intensity of the feeling to an actual figure. This is mostly caused by judging the moral issue in isolation. In order to avoid this narrow-framing bias, a broader frame has to be adopted and the issue has to be compared with other possible decisions, even if that feels counter-intuitive.

    • Narrow Framing: Risk Policy in Financial Decisions

    The most effective way to avoid narrow framing when making financial decisions is to adopt a risk policy and abide by it despite emotional incentives to do otherwise. Several financial biases come from narrow framing, one of the most relevant being the disposition effect, the tendency of investors to sell assets whose value is rising while keeping the ones that are dropping. This comes from the framing of the decision, as it is perceived as a choice between the pain of realizing the loss on a losing asset and the joy of selling a winning one.

    • Fear of feeling Regret as a source of risk-aversion

    A major bias in decision making comes from people anticipating the possibility of feeling regret if a negative outcome were to occur, and avoiding that decision in favor of a safer one where no regret would be evoked. The effect of this bias is stronger when the possible negative outcome is the product of action rather than inaction. This happens because, when people model themselves, they expect to have a stronger emotional reaction (including regret) to an outcome that is produced by action than to the same outcome produced by inaction. This translates into a general aversion to trading increased risk for some other advantage, which can affect institutional risk-taking policies. To counter this bias, regret must not be factored in when making a decision about risk.

    • Emotional Framing

    Losses and gains are processed differently by System-1 depending on how they are framed, which evokes different emotional reactions, resulting in biased choices. Options that are economically equivalent are processed as being different due to emotional bias. Losses evoke stronger negative feelings than costs. This bias can be exemplified by credit surcharges being labeled as cash discounts. Places with “permanent” 2-for-1 promotions do the same thing, framing it as a discount when in reality you are forced to order two. This heavily affects forecasts: a surgery described as having a 90% chance of recovery gets much more acceptance than one framed as having a 10% fatality rate, even though they represent the same probability. Reframing is effortful and System-2 is normally lazy. Unless there is an obvious reason to do otherwise, most of us passively accept decision problems as they are framed, and therefore rarely have an opportunity to discover the extent to which our preferences are frame-bound rather than reality-bound. To avoid this bias, you must always consider reframing the problem to eliminate emotional bias.

    • Mental Accounting

    To better grasp and evaluate economic outcomes, people categorize their assets into separate accounts even though they might be the same kind of resource. This generates behaviors and decisions that drift away from economic logic. Mental accounting biases judgment toward a reference point, in this case the particular mental account, and behavior is dictated by it. Psychological pain and loss aversion are then evoked from the mental account, depending on its size, and this biases the final decision. This is why people are more willing to pay larger sums on their credit cards than in cash. In order to avoid mental accounting, assess economic decisions using economic logic. This means paying attention to emotional incentives (like loss aversion and pain) in order to ignore them, acquiring a broad-framing perspective to compare the decision to others, and analyzing whether System-1 has different mental accounts for the economic resource in question.
