'Antifragile: Things That Gain from Disorder' by Nassim Nicholas Taleb

I always struggle with Taleb’s books. On the one hand, they are full of insights and interesting ideas; on the other, they are poorly structured and full of tangents and mean, spiteful, and largely unnecessary attacks against various groups of people (e.g., economists, academics, etc.). Taleb, at least from his writing, strikes me as the classic brilliant asshole. I’m not sure if I’d ever want to work with him, but kept at arm’s length, his books do offer lots of interesting learning.

This book in particular is tough to get into. The first ~40% is just (a) defining and repeating the definition of antifragile over and over again and (b) vicious insults hurled towards people he dislikes. The last ~60% of the book covers a variety of interesting topics, though they all feel a bit disconnected. This book could’ve delivered all the same information in a much smoother and more accessible package if he just worked with his editors, but he prefers to hurl insults at them too, assuming he knows better than them—and the book suffers for it.

If you can get past his abrasive personality, then you’ll find a number of interesting ideas here. It’s worth mentioning that I don’t fully agree with many of those ideas, but all of them made me stop and think, which to me is a huge win for a book.

I took a lot of notes while reading. Here are a few of the highlights:

Asymmetries and antifragility

Asymmetries

Taleb talks a lot about asymmetries where you have one of the following:

  • The potential for a small upside but an almost unlimited downside
  • The potential for an almost unlimited upside but only a small downside

Obviously, we should prefer the latter. However, many systems end up using the former. For example, many financial instruments are built around:

  • The potential to make small percentage gains over time
  • But there is also a small risk of being completely wiped out by a Black Swan event

Even if the upside were moderate or large, if there is a nontrivial risk of having all of your gains wiped out by a catastrophic loss, it’s usually not worth it. Instead, we should seek systems where:

  • The maximal loss is limited to something small
  • Rare Black Swan events can lead to massive wins

In such systems, you can lose 1,000 times, but if you win once, it more than makes up for it.
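
To make that arithmetic concrete, here’s a minimal sketch with made-up numbers (mine, not Taleb’s):

```python
# Hypothetical numbers, purely to illustrate the asymmetry described above:
# each attempt risks a small, known loss; very rarely, one attempt pays off enormously.

cost_per_attempt = 100       # the most you can lose on any single try
win_probability = 1 / 1000   # the rare, favorable Black Swan
win_payout = 1_000_000       # what a single success pays

expected_value = win_probability * win_payout - (1 - win_probability) * cost_per_attempt
print(expected_value)  # ~900.1 per attempt: positive, even though you lose 999 times out of 1,000
```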

Asymmetries with options

One of the main places where you have such asymmetries is with “options,” where you have the option of doing something (e.g., buying stock), but not the obligation.

  • If things look good, you use (“exercise”) the option
  • If things don’t look good, you do nothing

The key point is that with such options, it isn’t necessary to try to guess the future (e.g., predict how stocks will rise or fall). You just need to look at what has happened and make a decision.

As long as the options are free or cheap, you can do this over and over again, with very limited and small losses, and you only need one of those options to work out well to get a potentially huge win.

Options are antifragile

When you have options of this sort, the more volatility there is, the better you do.

  • More frequent downward swings do very little damage
  • More frequent upward swings make it more likely that you’ll win big

In a sense, you are “antifragile,” getting stronger the more random variation / damage you take.
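
A rough way to see this numerically (my own sketch, not an example from the book): simulate an option-like payoff, max(price - strike, 0), under low and high volatility. Bigger downswings cost nothing extra, while bigger upswings pay more, so the average payoff rises with volatility.

```python
import random

def average_option_payoff(volatility, strike=100.0, start=100.0, trials=100_000):
    """Average payoff of a call-style option, max(price - strike, 0),
    where the final price is the starting price plus normally distributed noise."""
    total = 0.0
    for _ in range(trials):
        price = random.gauss(start, volatility)
        total += max(price - strike, 0.0)
    return total / trials

random.seed(0)
print(average_option_payoff(volatility=5))   # roughly 2
print(average_option_payoff(volatility=20))  # roughly 8: more volatility, higher average payoff
```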

Taleb also discusses how most natural systems—those that have survived for a long time—are inherently antifragile. For example, the human body:

  • When you do damage to your body through exercise, you get stronger.
  • Of course, the damage must be within some limit (if I drop a 10,000 lb barbell on you, you probably won’t get stronger).
  • We should strive to build systems that mimic this property.

What doesn’t kill me…

We often hear the expression “what doesn’t kill me makes me stronger,” but Taleb offers a possible alternative interpretation:

  • Perhaps the reality is that you’re already stronger, and it just kills all the weaker people around you.
  • So it seems like you came out stronger, and on average, the overall population does end up stronger, but no individual has actually improved.
  • If anything, it may have left you with scars and weakened you!

Variation and stability

Variation is not the same as risk

The example Taleb uses is that of a taxi driver vs a typical office worker:

  • Most people would say the taxi driver has less “stability” (has more risk) than the office worker due to the randomness of taxi fares (some days you do well, some days you get no rides at all).
  • But on a long-enough timeline, the taxi driver is likely to get a fairly steady stream of business with few major interruptions, whereas the office worker may be steady for a while, but then face a huge interruption: getting laid off.
  • There’s very little randomness in the office worker’s life, but when something does happen, it’s a massive, catastrophic event (a “Black Swan” event).
  • Whereas the taxi driver has lots of randomness, but, supposedly, less risk of Black Swan events.

Variation can make systems more stable

Taleb argues that, somewhat counterintuitively, a small amount of constant variation is often necessary to stabilize and smooth out a system.

  • Example: forest fires.

    • With no fires, the amount of dead leaves grows over time, and when a fire does break out, it’s a massive and catastrophic one (again, a “Black Swan” event).
    • With small, periodic fires, the leaves never build up, and the forest is more stable overall.
  • Reducing variation destabilizes a system.

    • If you clamp down too hard on variation, as humans tend to do, you actually end up destabilizing that system.
    • Clamping down on the banking or currency system to minimize small fluctuations makes everyone hyper-sensitive to small fluctuations, so when one eventually happens, everyone overreacts, and the result is catastrophic. Whereas if there had always been small fluctuations, everyone would’ve been used to it, and another small fluctuation would’ve done no damage.
    • In other words, noise can be used to stabilize a system.
    • Side note: you see this in distributed systems programming too, where “jitter” is used to avoid the “thundering herd” problem (e.g., a small outage leads to all the other systems retrying; if they all retry simultaneously, they make the outage worse, so you add random noise to the retry intervals so the retries happen at different times). A minimal sketch of this follows below.
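
Here is roughly what that looks like in code (my sketch, not something from the book); the function name and parameters are made up for illustration:

```python
import random
import time

def retry_with_jitter(operation, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Retry a failing operation with exponential backoff plus random jitter,
    so that many clients recovering from the same outage don't all retry at
    the same instant and knock the service over again."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # "Full jitter": wait a random amount between 0 and an exponentially
            # growing cap, which spreads the retries out in time.
            cap = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, cap))
```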

Iatrogenics

Our desire to intervene too much is especially problematic in some cases, such as medicine. This leads to the idea of iatrogenics, which is when an attempt to treat something (e.g., medically) ends up doing damage (perhaps more damage than the original issue). Many modern disciplines, such as government and medicine, do not take iatrogenics into account as much as they should.

Example:

  • Taleb describes an experiment where 200 people went to the doctor and about half of them got a recommendation for a tonsillectomy.
  • The remaining half then went to another doctor and again, about half of them got a recommendation for a tonsillectomy.
  • Repeat this again and again, and there can be no doubt that a large percentage of those tonsillectomies are totally unnecessary (the quick arithmetic after this list makes that concrete).
  • And, of course, an operation or procedure always carries a risk of harm.
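
My own back-of-the-envelope version of that arithmetic (the numbers mirror the example above, not Taleb’s exact figures):

```python
# If each doctor recommends surgery to roughly half of whoever they examine,
# and the patients who were NOT recommended keep being sent to a new doctor,
# almost everyone ends up with a recommendation, regardless of who needs one.

patients = 200
not_yet_recommended = patients
for round_number in range(1, 5):
    newly_recommended = not_yet_recommended // 2
    not_yet_recommended -= newly_recommended
    print(round_number, patients - not_yet_recommended)
# Round 1: 100 recommended; round 2: 150; round 3: 175; round 4: 187 (of 200)
```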

One of the major gotchas with iatrogenics is that humans are biased towards action and intervention:

  • Taking an action to resolve a situation is often rewarded
  • Choosing not to act (even if inaction is the best decision!) is rarely rewarded
  • So whenever we see a problem, we are biased to do something to solve the problem, and rarely take into account the negative side effects that may happen from our intervention, some of which are worse than the original problem.

Example: frequent visits to the doctor are harmful.

  • That’s because a doctor can easily be fooled by noise—e.g., your blood pressure randomly happens to be higher on some visit—and may end up prescribing treatments you don’t need (the action bias says, we can’t do nothing, right?).
  • But medication may have all sorts of harmful side effects, and if you don’t need that medication, it’s a bad idea for you to take it!
  • The conclusion is that doctors should mainly be used only for severe cases, such as a car accident or deadly disease, where the risk of iatrogenics is low, as with no action, the patient is likely to suffer or die.
  • We should not use medicine on marginal or healthy people, as that will do more harm than good.

Skin in the game

Taleb argues that only results from practitioners—those that have “skin in the game” and real world experience—should be believed. Anyone who writes about topics they learned solely through research and deduction/theories/derivation should be ignored (note: he does exclude some disciplines, such as physics and pure mathematics from this analysis).

  • The reasoning is that if you have nothing to lose from your predictions and theories, then you may put out a lot of bullshit, and lead people astray, which will harm them, but not you (a negative asymmetry).
  • He talks of the economists and stock analysts who put out all sorts of theories that are totally wrong, leading people to make terrible financial decisions, but those analysts face no negative consequences as a result.
  • The solution is to pay less attention to what people say and more attention to what they do: that is, does the researcher follow their own advice in their own life? If not, that says much more than any paper or research.

Finance vs academia

Taleb says he believes people who work in the financial industry are more noble than those who work in academia. His explanation for this contrarian stance is that:

  • The main way to succeed in the financial industry is to make money by taking smart actions (e.g., by buying stocks)
  • The main way to succeed in academia is via posturing, reputation, schmoozing, politics, connections, fancy speeches and writing, etc.

In other words, he sees the market as an unbiased measure of merit/ability, whereas academia is all about artificial and biased measures.

Theory vs practice

Taleb argues that education (presumably he mainly means university education) and purely theoretical work don’t advance society. Instead, he argues that society mostly advances through trial and error by practitioners.

Examples:

  • Architecture. For a long time, most architecture was developed not through mathematics, but by rules of thumb and heuristics that had been proven effective over many years.
  • Jet engine. Apparently, it was developed through trial and error and for a long time, we had no real understanding of how it worked (I guess flight in general could be tossed into this bucket too).

In other words, Taleb argues that theory typically follows practice. And in many cases, theory isn’t all that necessary:

  • You can learn to ride a bike without knowing the theories of physics
  • You can learn to cook amazing food without knowing the chemistry of taste

In fact, the empirical evidence and phenomenology are more reliable, as they always stay the same, while our theories change all the time.

  • In the past, we had a theory that people with more muscle mass kept fat off better because the muscles burned more calories
  • Nowadays, we have a theory that weight lifting makes you more sensitive to insulin
  • In the future, there may be yet another theory
  • The theories keep changing, but the empirical result is the same—more muscle means less fat—and that’s what really matters

Negative knowledge vs positive knowledge

There is a really compelling discussion of the idea that negative knowledge (knowledge of what isn’t) is more robust than positive knowledge (knowledge of what is).

  • Seeing a single black swan is enough to disprove the theory that there are no black swans.
  • Seeing 1 million white swans is not enough to confirm the theory that there are no black swans.

Neither type of knowledge is perfect (e.g., it’s possible that the black swan you saw was an optical illusion), but you should have far more confidence in negative claims than positive ones.

Other random topics

The book covers a number of other somewhat disconnected topics.

Now is the most dangerous time in history

Despite all the reports that show that crime, war, poverty, etc. are at all-time lows, Taleb believes now is the most dangerous time in history.

  • A catastrophic Black Swan event (e.g., nuclear war) wouldn’t even be an outlier at this stage.
  • The key thing to remember is that almost every time there is a catastrophe, it is bigger and worse than anything that came before.
  • So looking at our past and preparing for the catastrophes we have already seen doesn’t help much, as future Black Swans will be bigger and different in ways we cannot guess.
  • The ethical dilemma he brings up is if some amount of “noise” (e.g., small scale conflicts to avoid big ones)—and therefore, some amount of sacrifice—is worth it to prevent the bigger disasters.

The tails matter more than the average

Taleb brings up the idea that what matters for society is not the middle (the average), but the tails (the extremes).

  • Society advances when we get more people towards the tails: the crazy ones that are imaginative, brave, and come up with the incredible ideas that revolutionize everything.
  • If you just build a society that favors the average at the expense of the extremes, he claims we never move forward.

Discoveries before their time

There’s an interesting discussion of how many innovations were “discovered” long before we figured out how to apply them.

Example 1:

  • The Mayans apparently never knew how to use the wheel…
  • Even though we found Mayan children’s toys that did have wheels!
  • They had the technology in front of their eyes, but never figured out how to apply it to the rest of life.

Example 2:

  • Even though we’ve had wheels for thousands of years now, it was only recently that someone thought to put wheels on suitcases.
  • Before that, everyone painfully lugged luggage in their hands or on their backs, even though a better solution was right in front of us.

The Lindy effect

Some things (books, ideas, technologies) that have been around longer are generally more robust and have a longer remaining life expectancy (the opposite of how people age, where older people have shorter remaining life expectancies).

  • A book that is still popular after 40 years is likely to remain popular another 40 years.
  • A book that survived 100 years is likely to survive another 100 years.
  • Books we’ve been reading for 1,000 years are far more likely to remain relevant for another 1,000 years than something that just came out last week.

In other words, survival of something shows you that there is inherent value there.
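
One way to state the effect mathematically (my formulation, not the book’s): if lifetimes follow a power-law (Pareto) distribution with tail exponent \alpha > 1, then the expected remaining life of something that has already survived to age t is

\[
\mathbb{E}[X - t \mid X > t] = \frac{t}{\alpha - 1},
\]

i.e., the longer something has already survived, the longer you should expect it to keep going.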

When new ideas are compared to old (or new books to old), the burden of proof is on the new ideas.

  • The old ones have some degree of proof simply because they have survived a long time.
  • It’s up to the new ones to show you they are worth something.

This is useful in picking books (the classics will typically be better than new stuff!) but even more so when picking between artificial options and natural ones. Natural items have accumulated millions of years of survival value, so there is a very high burden of proof to claim that anything artificial or man-made is better.

  • Highly-padded shoes with giant soles vs going barefoot: the burden of proof is on the shoes (and there is little to no research showing shoes are better).
  • Allowing an injury to swell naturally vs using anti-inflammatories (e.g., ice or ibuprofen): the burden of proof is on the anti-inflammatories (and again, there is little to no research showing those are better).

Other thoughts

This book gets 5 stars for the ideas—not because I agree with all of them, but they do all make you think—but 1 star for the nasty attitude and meandering structure. Reducing how I feel about a dense book of several hundred pages to a single digit on a 5-point scale, that somehow works out to 4 stars.

Rating

4 out of 5