
I always struggle with Taleb’s books. On the one hand, they are full of insights and interesting ideas; on the other, they are poorly structured and full of tangents and mean, spiteful, and largely unnecessary attacks against various groups of people (e.g., economists, academics, etc.). Taleb, at least from his writing, strikes me as the classic brilliant asshole. I’m not sure I’d ever want to work with him, but kept at arm’s length, his books do offer lots of interesting learning.
This book in particular is tough to get into. The first ~40% is just (a) defining and repeating the definition of antifragile over and over again and (b) vicious insults hurled at people he dislikes. The last ~60% of the book covers a variety of interesting topics, though they all feel a bit disconnected. This book could’ve delivered all the same information in a much smoother and more accessible package if he had just worked with his editors, but he prefers to hurl insults at them too, assuming he knows better than they do, and the book suffers for it.
If you can get past his abrasive personality, then you’ll find a number of interesting ideas here. It’s worth mentioning that I don’t fully agree with many of those ideas, but all of them made me stop and think, which to me is a huge win for a book.
I took a lot of notes while reading. Here are a few of the highlights:
-
Taleb talks a lot about asymmetries: on one side, systems with a small potential upside but an almost unlimited downside; on the other, systems with an almost unlimited potential upside but only a small downside. Obviously, we should prefer the latter. However, many systems end up using the former. For example, many financial instruments are built around making small percentage gains over time, while carrying a small risk of being completely wiped out by a Black Swan event. Even if the upside were moderate or large, if there is a nontrivial risk of having all of your gains wiped out by a catastrophic loss, it’s usually not worth it. Instead, we should seek systems where the maximum loss is limited to something small, but rare Black Swan events can lead to massive wins. In such systems, you can lose 1,000 times, but if you win once, it more than makes up for it.
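To make the asymmetry concrete, here’s a minimal simulation sketch (my own illustration with made-up numbers, not Taleb’s): one strategy earns small, steady gains but carries a tiny yearly chance of near-total ruin, while the other takes small, steady losses but has a tiny yearly chance of a huge win.

```python
import random

def simulate(p_rare, rare_multiplier, usual_multiplier, years, rng):
    """Compound a starting wealth of 1.0 over `years`, applying the rare
    multiplier with probability p_rare each year, else the usual one."""
    wealth = 1.0
    for _ in range(years):
        wealth *= rare_multiplier if rng.random() < p_rare else usual_multiplier
    return wealth

rng = random.Random(42)
trials = 10_000

# "Fragile" profile: +2% in a normal year, but a 1% yearly chance of losing 95%.
fragile = [simulate(0.01, 0.05, 1.02, 50, rng) for _ in range(trials)]

# "Antifragile" profile: -1% in a normal year, but a 1% yearly chance of a 4x win.
antifragile = [simulate(0.01, 4.00, 0.99, 50, rng) for _ in range(trials)]

for name, results in [("fragile", fragile), ("antifragile", antifragile)]:
    print(f"{name:12s} mean={sum(results) / trials:6.2f} "
          f"worst={min(results):5.2f} best={max(results):8.2f}")
```

Even though the two profiles use similarly small per-year numbers, the first exposes you to near-total ruin in its worst runs, while the second caps the loss at a modest amount and leaves the huge upside open.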
-
One of the main places where you find such asymmetries is with “options,” where you have the option of doing something (e.g., buying a stock), but not the obligation. If things look good, you use (“exercise”) the option. If things don’t look good, you do nothing. The key point is that with such options, it isn’t necessary to try to guess the future (e.g., to predict how stocks will rise or fall). You just need to look at what has happened and make a decision. As long as the options are free or cheap, you can do this over and over again with very small, limited losses, and you only need one of those options to work out well to get a potentially huge win.
-
When you have options of this sort, the more volatility there is, the better you do. That’s because the frequent swings downward do very little damage, while the frequent swings upward make it more likely that you’ll win big. In a sense, you are “antifragile,” getting stronger the more random variation / damage you take. Taleb also discusses how most natural systems, those that have survived for a long time, are inherently antifragile. For example, when you do damage to your body through exercise, you get stronger. Of course, the damage must be within some limit (if I drop a 10,000 lb barbell on you, you probably won’t get stronger), but we should strive to build systems that mimic this property.
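Here’s a minimal sketch (again my own illustration, not from the book) of why an option-like payoff benefits from volatility: the loss is capped at the small cost of the option, while the gain is unbounded, so wider swings in the underlying price raise the expected payoff.

```python
import random

def expected_option_payoff(volatility, strike=100.0, cost=1.0, trials=100_000, seed=0):
    """Average payoff of an option to buy at `strike`, given a final price drawn
    from a normal distribution centered at the strike with the given volatility."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        price = rng.gauss(strike, volatility)
        # Exercise only if the price ended up above the strike; otherwise the
        # only loss is the small, fixed cost of holding the option.
        total += max(price - strike, 0.0) - cost
    return total / trials

for vol in (1, 5, 10, 20, 40):
    print(f"volatility={vol:3d}  expected payoff={expected_option_payoff(vol):7.2f}")
```

The more the price swings, the more the expected payoff grows, which is the convex, “downside capped, upside open” shape Taleb keeps pointing at.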
-
Taleb says he believes people who work in the financial industry are more noble than those who work in academia. His explanation for this contrarian stance is that, for the most part, the only way to succeed in the financial industry is to make money by taking smart actions (e.g., buying stocks), whereas in academia, the main way to succeed is via posturing, reputation, schmoozing, politics, connections, fancy speeches and writing, etc. In other words, he sees the market as an unbiased measure of merit/ability, whereas academia is all about artificial and biased measures. As with much of what Taleb says, there is some truth here (academia really does involve a lot of BS), but it overstates the merit of the market (e.g., based on his own book “Fooled by Randomness,” how much of market success is accidental?) and understates the merit of important works of scholarship.
-
We often hear the expression “what doesn’t kill me makes me stronger,” but Taleb offers a possible alternative interpretation. Perhaps the reality is that you’re already stronger and it just kills all the weaker people around you. So it seems like you came out stronger, and on average, the overall population does end up stronger, but no individual has actually improved. If anything, it may have left you with scars and weakened you!
-
Randomness/variation is not the same as risk. The example Taleb uses is that of a taxi driver vs a typical office worker. Most people would say the taxi driver has less “stability” (more risk) than the office worker due to the randomness of taxi fares (some days you do well, some days you get no rides at all). But on a long enough timeline, the taxi driver is likely to get a fairly steady stream of business with few major interruptions, whereas the office worker may be steady for a while, but then face a huge interruption: getting laid off. There’s very little randomness in the office worker’s life, but when something does happen, it’s a massive, catastrophic event (a “Black Swan” event), whereas the taxi driver has lots of randomness, but, supposedly, less risk of Black Swan events. I get the purpose of this analogy (variation is not the same as risk, and some variation can actually be good for a system), but it breaks down badly when you consider things like Uber and self-driving cars, either of which could be a Black Swan event for taxi drivers. So does Taleb’s idea merely not take into account all types of Black Swan events? Is he underestimating these extremely rare events just as much as everyone else? Or was this a single bad analogy?
-
Following on the previous idea, Taleb argues that, somewhat counterintuitively, a small amount of constant variation is often necessary to stabilize and smooth out a system. For example, small, periodic fires in a forest are better than no fires at all, as in the latter case, the amount of dead leaves grows over time, and when a fire does break out, it’s a massive and catastrophic one (again, a “Black Swan” event). If you clamp down too hard on variation, as humans tend to do, you actually end up destabilizing the system. Clamping down on the banking or currency system to minimize small fluctuations makes everyone hyper-sensitive to them, so when one eventually happens, everyone overreacts, and the result is catastrophic; whereas if there had always been small fluctuations, everyone would’ve been used to them, and another small fluctuation would’ve done no damage. In other words, noise can be used to stabilize a system. Side note: you see this in distributed systems programming too, where you add “jitter” to avoid the “thundering herd” problem: a small outage leads all the other systems to retry, and if they all retry simultaneously, they make the outage worse, so you add random noise to the retry intervals so the retries happen at different times, as in the sketch below.
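A minimal sketch of that retry-with-jitter pattern (my own generic example; `retry_with_jitter` and its parameters are hypothetical names, not from any specific library):

```python
import random
import time

def retry_with_jitter(operation, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Call `operation` until it succeeds or we run out of attempts."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff sets the cap; sleeping a *random* amount below
            # that cap ("full jitter") spreads out clients that all failed at the
            # same moment, instead of having them all retry in lockstep.
            cap = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, cap))
```

Without the random component, every client that saw the same outage would wake up and retry at exactly the same intervals, recreating the spike that caused the problem in the first place.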
-
Another contrarian view: despite all the reports showing that crime, war, poverty, etc. are at all-time lows, Taleb believes now is the most dangerous time in history. He believes that a catastrophic Black Swan event (e.g., nuclear war) wouldn’t even be an outlier at this stage. The key thing to remember is that when a catastrophe does happen, it tends to be bigger and worse than anything that came before. So looking at our past now and preparing for the catastrophes we saw before is irrelevant, as future Black Swans will be bigger and different in ways we cannot guess. The ethical dilemma he brings up is whether some amount of “noise” (e.g., small-scale conflicts to avoid big ones), and therefore some amount of sacrifice, is worth it to prevent the bigger disasters.
-
We need to take the idea of “iatrogenics” into account more: cases where an attempt to treat something (e.g., medically) ends up doing damage, perhaps more damage than the original issue. Many modern disciplines, such as government and medicine, do not take iatrogenics into account as much as they should. For example, Taleb describes an experiment where 200 people went to the doctor and about half of them got a recommendation for a tonsillectomy. The remaining half then went to another doctor and, again, about half of them got a recommendation for a tonsillectomy. Repeat this again and again, and there can be no doubt that a large percentage of those tonsillectomies are totally unnecessary. And, of course, an operation or procedure always carries a risk of harm.
-
One of the major gotchas with iatrogenics is that humans are biased towards action and intervention. Taking action to resolve a situation is often rewarded; choosing inaction (even when inaction is the best decision!) is rarely rewarded. So whenever we see a problem, we are biased to do something about it, and we rarely take into account the negative side effects of our intervention, some of which are worse than the original problem. For example, Taleb argues that frequent visits to the doctor are harmful. That’s because a doctor can easily be fooled by noise (e.g., your blood pressure randomly happens to be higher on some visit) and may end up prescribing treatments you don’t need (the action bias says we can’t do nothing, right?). But medication may have all sorts of harmful side effects, and if you don’t need that medication, it’s a bad idea to take it! The conclusion is that doctors should mainly be used for severe cases, such as a car accident or a deadly disease, where the relative risk of iatrogenics is low, since with no action the patient is likely to suffer or die. We should not use medicine on marginal or healthy people, as that will do more harm than good.
-
Taleb brings up the idea that what matters for society is not the middle (the average), but the tails (the extremes). That is, society advances when we get more people towards the tails: the crazy ones who are imaginative, brave, and come up with the incredible ideas that revolutionize everything. If you build a society that favors the average at the expense of the extremes, he claims, we never move forward. Again, there’s some truth here, but the reality with most innovation is that it has very little to do with lone, crazy geniuses. The hero narrative is a convenient storytelling device, but just about all new ideas are actually built from old ones, and the process is usually incremental. There’s a long history of “multiple discovery,” where the same idea is discovered independently at nearly the same time (e.g., Newton and Leibniz inventing calculus), which sometimes makes a new discovery seem like an almost inevitable outcome of the environment, rather than the work of a lone genius. We certainly need a society where it’s safe to try crazy new ideas, but I’m not sure that’s the same as saying only the tails matter.
-
There’s an interesting discussion of how many innovations were “discovered” long before we figured out how to apply them. For example, the Mayans apparently never knew how to use the wheel… even though we’ve found Mayan children’s toys that did have wheels. They had the technology in front of their eyes, but never figured out how to apply it to the rest of life. Similarly, even though we’ve had wheels for thousands of years, it was only recently that someone thought to put wheels on suitcases. Before that, everyone painfully lugged luggage in their hands or on their backs, even though a better solution was right in front of us.
-
Taleb argues that only results from practitioners, those who have “skin in the game” and real-world experience, should be believed. Anyone who writes about topics they learned solely through research and deduction/theories/derivation should be ignored (note: he does exclude some disciplines, such as physics and pure mathematics, from this analysis). The reasoning is that if you have nothing to lose from your predictions and theories, then you may put out a lot of bullshit and lead people astray, which will harm them, but not you (a negative asymmetry). He talks of the economists and stock analysts who put out all sorts of theories that are totally wrong, leading people to make terrible financial decisions, while the analysts face no negative consequences as a result. The solution is to pay less attention to what people say and more attention to what they do: that is, does the researcher follow their own advice in their own life? If not, that says much more than any paper or research.
-
Taleb argues that education (presumably he mainly means university education) and purely theoretical work don’t advance society. Instead, he argues that society mostly advances through trial and error by practitioners. For example, he talks about how much of architecture, for a long time, was developed not through mathematics, but through rules of thumb and heuristics that had been proven effective over many years. The jet engine was apparently developed through trial and error, and for a long time we had no real understanding of how it worked (I guess flight in general could be tossed into this bucket too). In other words, Taleb argues that theory typically follows practice. And in many cases, theory isn’t all that necessary: e.g., you can learn to ride a bike without knowing the physics involved, and you can learn to cook amazing food without knowing the chemistry of taste. In fact, the empirical evidence and phenomenology are more reliable, as they stay the same while our theories change all the time. For example, in the past, we had a theory that people with more muscle mass kept fat off better because the muscles burned more calories; nowadays, we have a theory that weight lifting makes you more sensitive to insulin; in the future, there may be yet another theory. The theories keep changing, but the empirical result is the same (more muscle means less fat), and that’s what really matters. I agree wholeheartedly with Taleb that trial and error plays a MASSIVE role in all innovation and discovery. The systems we deal with in the real world are complicated, and few people can simply think their way to solutions that handle all this complexity (this is another reason the lone-genius explanation of innovation isn’t accurate; it’s less about genius and more about effort). That said, I suspect Taleb is cherry-picking his examples, and that many innovations followed directly from theoretical results. Moreover, many experiments would never have been tried in the first place without a theoretical framework hinting at which experiments to try. So my guess is that both practitioners and theoreticians are critical to advancing society, and obsessing too much over one or the other is not helpful.
-
There is a really compelling discussion of the idea that negative knowledge (knowledge of what isn’t) is more robust than positive knowledge (knowledge of what is). For example, seeing a single black swan is enough to disprove the theory that there are no black swans. On the other hand, seeing 1 million white swans is not enough to confirm the theory that there are no black swans. Neither type of knowledge is perfect (e.g., it’s possible that the black swan you saw was an optical illusion), but you should have far more confidence in negative claims than positive ones.
-
Things that have been around longer are generally more robust and have a longer life expectancy (which is the opposite of how people age, where older people have shorter life expectancies). For example, a book that is still popular after 40 years is likely to remain popular another 40 years. A book that survived 100 years is likely to survive another 100 years. And books we’ve been reading for 1,000 years are far more likely to remain relevant for another 1,000 years than something that just came out last week. In other words, survival of something shows you that there is inherent value there.
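Here’s a minimal sketch of that property (my own illustration, assuming lifetimes follow a heavy-tailed Pareto distribution, which is one common way to model this “Lindy effect”; the numbers are not from the book): under that assumption, the expected remaining life grows in proportion to the age already reached.

```python
import random

def expected_remaining_life(age, alpha=3.0, trials=200_000, seed=0):
    """Estimate the expected remaining lifetime of something that has already
    survived `age` years, assuming a Pareto(alpha) lifetime distribution."""
    rng = random.Random(seed)
    # For a Pareto distribution, a lifetime conditioned on exceeding `age` has the
    # same shape as `age` times a fresh Pareto sample, so we can sample it directly.
    total = sum(age * rng.paretovariate(alpha) - age for _ in range(trials))
    return total / trials

for age in (10, 40, 100, 1000):
    print(f"survived {age:4d} years -> expect roughly {expected_remaining_life(age):7.1f} more")
```

With these assumptions, something that has lasted 100 years is expected to last far longer than something that has lasted 10, which is the “old things have longer life expectancy” pattern described above (the exact ratio depends entirely on the assumed distribution).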
-
Following from the last concept, when new ideas are compared to old (or new books to old), the burden of proof is on the new ideas. The old ones have some degree of proof simply because they have survived a long time. It’s up to the new ones to show they are worth something. This is useful in picking books (the classics will typically be better than new stuff!) but even more so when picking between artificial options and natural ones. Natural items have accumulated millions of years of survival value, so there is a very high burden of proof for any claim that something artificial or man-made is better. For example, highly padded shoes with giant soles vs going barefoot: the burden of proof is on the shoes (and there is little to no research showing shoes are better)! Similarly, allowing an injury to swell naturally vs using anti-inflammatories (e.g., ice or ibuprofen): the burden of proof is on the anti-inflammatories (and again, there is little to no research showing those are better). I agree with all of this, but I think Taleb takes it a little too far with his love of “the ancients” (as he calls them). He seems to revere them as if they were somehow better than us simply because they lived a long time ago, which is NOT the same as revering ideas that survived a long time.
-
Random but interesting tidbit on diet: we all accept that the stress of exercise helps make you healthier, but what about the stress of different types of diets? For example, intermittent fasting seems to make you healthier. Perhaps the stress of not eating for periods of time (or not eating certain types of food, such as meat) is good for you? Perhaps that’s how we evolved (e.g., famine and feast cycles). Or perhaps it’s some combination of factors: e.g., you need to exercise right before you eat, as we evolved in a world where we always had to chase down our food (or at least walk long distances to find it).
OK, phew, that’s a lot of notes!
In short, this book gets 5 stars for the ideas (not because I agree with all of them, but because they all make you think) and 1 star for the nasty attitude and meandering structure. Reducing how I feel about a dense book of several hundred pages to a single digit on a 5-point scale, I somehow land on 4 stars.
Rating: 4 stars
Yevgeniy Brikman
If you enjoyed this post, you may also like my books. If you need help with DevOps, reach out to me at Gruntwork.