
Wow. This book is massive in scope, but still manages to have a high density of profound ideas. It jumps all over the place—history, science, biology, religion, government, art, economics, and much more—so it took me a LONG time just to gather my notes and thoughts (I read this book a few months ago, but am only posting my review now).
I’ve noticed many other reviewers try to discount this book because they disagree with some of its (admittedly controversial) ideas, which misses the point: the entire purpose of the book is to consider these ideas and have precisely these types of discussions and disagreements! Whether or not you agree with the ideas, the book is well worth reading and thinking about deeply, as it touches on the future of all of humanity.
Some of the powerful ideas I came across in the book (note: they will seem to be in a somewhat random order, as the book does jump around a lot!):
-
In the past, within a human lifetime, the world was pretty static: the world you were born into was very similar to the one you died in. This is no longer the case. We cannot count on death to separate us from a future that is completely unrecognizable. We will likely live to see times that we can barely comprehend. The goal of this book is to make us consider what may happen in the future so that we can at least be a little prepared for it.
-
The biggest problems of the past were typically famine, war, and plague. These may well be solved problems soon. They aren’t solved yet, of course, but in the modern era, these are very minor problems compared to the past. What happens if, in the next 50 years, none of these is a major cause for concern?
-
Almost all major religions are based around death. They tell you what will happen when you die and how you should behave because of it. So what happens to these religions if, in the future, humans could live more or less forever? That is, what if we become “amortal,” where aging and disease are no longer a concern? What will be the role of religion (if any) then?
-
If humans did become amortal, we’d likely be much more paranoid and anxious. Today, we know we’re going to die, so we’re willing to take on risk, as we know life could be cut short at any moment anyway. But if you knew you could live for hundreds or thousands of years, you might become far more risk averse for fear of cutting that enormous life span drastically short. What a counter-intuitive result!
-
Happiness, as best as we can tell, is biochemical in nature. If our goal is to make the greatest number of people happy, perhaps we should worry less about social reform and more about biology and chemistry. Sounds like a call for the Soma of Brave New World :)
-
Our technology is giving us the powers of Greek gods. In fact, some of our powers are already greater! E.g., Our ability to communicate instantly across the planet is greater than anything the Greeks ever imagined, even for their gods. And yet, despite our newfound powers, we’re not much different from the primates we were thousands of years ago. God-like powers and monkey-like abilities (and morality) may not be a good combination.
-
There’s no good way to stop technology from advancing rapidly, even if we wanted to. In most cases, we can’t predict where it will lead, or how it will jump from one use case to another. For example, plastic surgery was originally developed to help soldiers injured or disfigured in war; but once that technology was developed, it was only a matter of time until it spread to other use cases, such as enhancing the appearance of celebrities. Perhaps some day, we’ll develop some sort of prosthetics that allow paraplegics to walk; but if we do, it’ll only be a matter of time until that same technology can be used to make people with normal leg functionality run or jump faster, such as soldiers. Every technology has positive and negative uses, and there’s no way to allow the former without also enabling the latter.
-
Fun fact: the modern day lawns we love to have around houses originated with the French aristocracy, where the lawn was used to show off wealth, as grass took a lot of time to cultivate and maintain, but served no practical purpose (i.e., could not be eaten or sold).
-
Emotions are algorithms. That is, emotions such as fear and anger are little algorithms, encoded in biochemistry, that detect certain inputs (e.g., a tiger), and program in specific responses (e.g., adrenaline rush for fight or flight). These algorithms evolved because they proved advantageous to survival.
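As a programmer, I can’t resist sketching this “emotions are algorithms” idea in code. This is purely my own toy illustration, not anything from the book; the function name, threat list, and responses are all made up:

```python
# A toy sketch of "emotions as algorithms": a stimulus comes in, and a
# hardcoded evolutionary "program" maps it to a survival response.
def fear_algorithm(stimulus):
    threats = {"tiger", "snake", "fire"}
    if stimulus in threats:
        # The "adrenaline rush" branch: prepare for fight or flight.
        return "fight-or-flight"
    return "calm"

print(fear_algorithm("tiger"))   # fight-or-flight
print(fear_algorithm("flower"))  # calm
```

The point of the analogy is that the “program” was written by natural selection rather than an engineer: inputs (sensory data) deterministically trigger outputs (biochemical responses) because those responses helped our ancestors survive.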
-
When computers become more intelligent than humans, how will they treat us? What if they treat us the same way we treat other animals on the planet that are less intelligent than us (e.g., the way we treat cows, dogs, wolves)?
-
The “problem of other minds” is that we can’t know whether other people (or animals or computers) are conscious. The only thing we can really know is that we ourselves are conscious. Everyone else could, in theory, be an algorithm designed to look conscious without actually being conscious. Alan Turing did develop the Turing test for AI, but that only determines whether an AI can fool us into believing it is conscious; it does not prove that the AI actually is conscious (side note: this is similar to how Turing was forced to spend much of his life trying to fool people into believing that he was straight). Interesting note: we know of no algorithm that requires consciousness.
-
The ability to organize larger and larger groups of people gives humans amazing powers that no other animal has. And we organize around myths and fictional ideas, such as government, money, and religion. The myths of the past sometimes seem ridiculous to us (e.g., the many pagan gods), but they were still useful; likewise, the myths of today will seem ridiculous to the people of the future (e.g., we may be ridiculed for how we govern ourselves). But while these fictions are useful, they are tools we use to accomplish our goals; they are not the goals themselves. We get in trouble when we forget that! E.g., When we sacrifice people’s lives for money or for a country, we are sacrificing reality for fictions.
-
Some of the most powerful myths of today revolve around businesses and brands, which we treat as if they were real people. We say “Google” built a self-driving car or the “USA” built the nuclear bomb. In the past, those “brands” were often tied to god: “god” created the earth or Pharaoh built the pyramids.
-
The ultimatum game shows that humans are not purely “rational actors.” In the game, there are two people. One person gets $100 and decides how to split that money with the other person. They can propose any split they want (e.g., 100/0 or 50/50 or anything else). The other person can either accept the offer or reject it, but if they reject it, neither of them gets any money. A purely rational person would accept ANY offer, for even $1 is better than nothing. But in real experiments, many people reject the offer unless it feels “fair” (e.g., is somewhat close to 50/50). It seems that humans have an egalitarian morality built in; the same may even be true of other animals (e.g., primates) on which these experiments have been run.
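The game mechanics above are simple enough to sketch in a few lines of code. This is my own toy model, not from the book: the 30% “fairness threshold” is a number I made up to illustrate the contrast between a purely rational responder and a fairness-sensitive one:

```python
# Toy model of the ultimatum game. A purely "rational" responder accepts
# any nonzero offer; a fairness-sensitive responder rejects offers below
# an (assumed) threshold, even though rejecting leaves both with $0.
def rational_responder(offer, total=100):
    return offer > 0

def fairness_responder(offer, total=100, threshold=0.3):
    return offer >= threshold * total

def play(proposer_offer, responder, total=100):
    # Returns (proposer payoff, responder payoff).
    if responder(proposer_offer, total):
        return (total - proposer_offer, proposer_offer)
    return (0, 0)

print(play(1, rational_responder))   # (99, 1): $1 beats nothing
print(play(1, fairness_responder))   # (0, 0): "unfair" offer rejected
print(play(40, fairness_responder))  # (60, 40): fair enough, accepted
```

The experiments show real humans behave much more like `fairness_responder`, which is exactly why the game is such striking evidence against the “rational actor” model.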
-
Monotheism is a fairly primitive religion: it’s a bit like a small child believing the entire universe revolves around them. Humanism is a newer religion: it is based around the idea that man, not god, gives meaning to life. It is the belief in ourselves and that what matters is people and their lives and happiness.
-
Bureaucracy turns groups of people into algorithms.
-
Our beliefs of the past were based around zero-sum games. For example, most religions said that humans should split the pie evenly, or work hard now in return for a bigger pie in the sky after death. This zero-sum mentality made sense when each generation lived in more or less the same world as the previous one. By contrast, modernity assumes (and even depends on) continuous growth. We expect the economy to grow, populations to grow, businesses to grow, and new technologies to make that exponential growth possible.
-
Voting and democracy only work as long as you share enough commonalities with the other voters. If the other voters are too different from you, you won’t accept the result of the vote! Here’s an example that is not in the book, but that I found quite powerful in my own thinking: imagine that you lived in a country of 1,000 people in the 21st century, and suddenly, 2,000 people moved in who were from the 1st century. Your country is a democracy, and those 2,000 people, who are now the majority, start voting based on their ancient beliefs: e.g., they strip women of voting rights, bring back slavery, etc. Would you accept the outcome of that vote? I very much doubt it. You’d probably feel like you’ve lost your country and beliefs. Is this what is spurring all the nationalist and populist movements we’re seeing in the modern world? That is, most democracies were originally built in countries that were fairly homogeneous ethnically and culturally, but with globalism, new ethnicities and cultures are moving in, and the original residents want to stop it before the population becomes so different from them that they “lose” their country.
-
Art is always interpreted and valued based on the fashions of the time. Take a $5M piece of modern art from today, ship it back 500 years, and the exact same piece of art will likely be seen as worthless.
-
There’s a lot of emerging science suggesting that free will may not exist at all. For example, we can use brain scanning techniques to predict what choice a person will make before the person themselves is aware of the choice. If you’re consciously making choices—if you really have free will—it seems like that should be impossible! Therefore, instead of free will, perhaps all we really have is (a) determinism and (b) randomness. The determinism comes from our environment and genes. The randomness comes from the seemingly random behavior of subatomic particles (quantum mechanics).
-
One argument in favor of free will is that you have a “desire” for something: e.g., a desire to push the left button instead of the right button. But a desire for something is not the same as choosing it. Sure, you may feel that you want to push the left button, and eventually, you even act on that desire and push that left button, but here’s the key question: where did the desire come from? Did you choose to want to push the left button? Or did that happen automatically? As an analogy, consider another experiment that has been done: the robo rat. We hooked up electrodes to a rat’s brain so that we could remotely trigger pleasure and pain in the rat. Using this remote, we could control the rat as it goes through a maze, making it feel pleasure when it turns the way we want it to go, and pain when it turns the other way. The rat, of course, doesn’t know it has electrodes attached to it. All it knows is that it feels good facing the left hallway and not good about the right hallway. To the rat, it might feel like it has a desire to go left. But that desire isn’t free will, is it?
-
Consider what happens when we use technology or drugs to control our will. E.g., we already have the robo rat and drugs that help with ADHD. What happens when these become far more powerful and far reaching (e.g., see, “The Red: First Light”)? Who is “you” in that case? Are you a human? Cyborg? Who is making the decisions? Do you really have free will if your will is being modified by a machine or drug?
-
You are not one person. For most people, our conscious experience makes us think we’re just one person (as in, one soul), but here again, experiments show the truth is more complicated. In “split brain” experiments, patients who had their corpus callosum severed (either accidentally or as a necessary part of a medical procedure) were asked questions either verbally or visually. The same person would give two totally different responses about their desires, and each side would be totally unaware of the desires of the “other” side!
-
Modern AI has made us realize that there is a separation between intelligence and consciousness. We are currently able to build many intelligent algorithms that can play chess, read an x-ray, or drive a car. However, we are not able to build any algorithms that have consciousness. Perhaps humans are also a collection of intelligent algorithms, a bit like your phone has a collection of apps.
-
When our technology evolves enough that we can use it to upgrade (not just fix) our bodies, social inequality may grow to an unprecedented scale. The rich will be able to make themselves prettier, stronger, and longer lived (or even amortal), while the poor will continue to suffer and lead short lives as before. Such a divide would be explosive; I suspect it took far less than that to trigger the French Revolution.
-
We no longer understand the algorithms our software engineers create. For example, Google search was developed by a huge team, and no one person can tell you how it all works. Similarly, many machine learning models effectively train themselves, and no one can really tell you how or why the resulting algorithm makes the decisions it does.
-
The end of the book looks into “dataism,” which the author believes may be the next major leap for humanity after humanism. The idea is to see the world as a data processing system. Communism is a centralized data processing system; democracy and capitalism are distributed data processing systems; taken to its extreme, each human is a data processing chip in a massive distributed system. We take in data, process it in some way, and output more data. Dataism is the desire to free all of that data. To be honest, I didn’t fully grok this concept. Perhaps the idea here is that, in the future, data will be all that matters. E.g., if we develop technology similar to Star Trek replicators, where we can rearrange atoms at will, then materials will be of no value; all that will be valuable is the information (data) that tells you how to rearrange those atoms to get something useful (e.g., into earl grey, hot).
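Here is my attempt to render the dataism metaphor in code. Again, this is entirely my own toy illustration, not the book’s: each “human” is a little data processor, and society chains them into a larger processing system (the names and numbers are invented):

```python
# A toy rendering of dataism: each "human" is a small data processor,
# and society wires them together into a larger processing pipeline.
def farmer(data):
    data["wheat"] = 100          # produces raw data (wheat)
    return data

def miller(data):
    data["flour"] = data.pop("wheat") * 0.8   # transforms wheat into flour
    return data

def baker(data):
    data["bread"] = int(data.pop("flour") // 2)  # transforms flour into bread
    return data

def society(processors, data):
    # The "distributed system": data flows from one processor to the next.
    for process in processors:
        data = process(data)
    return data

print(society([farmer, miller, baker], {}))  # {'bread': 40}
```

On this view, what matters isn’t the wheat or the bread (the atoms) but the flow and transformation of information through the system, which is roughly what I take the dataist worldview to be claiming.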
Phew! So much to think about. So many excellent questions. Go read the book and then let’s get some beer and spend a long time discussing it :)
As always, I’ve saved my favorite quotes:
“Each and every one of us has been born into a given historical reality, ruled by particular norms and values, and managed by a unique economic and political system. We take this reality for granted, thinking it is natural, inevitable and immutable. We forget that our world was created by an accidental chain of events, and that history shaped not only our technology, politics and society, but also our thoughts, fears and dreams. The cold hand of the past emerges from the grave of our ancestors, grips us by the neck and directs our gaze towards a single future. We have felt that grip from the moment we were born, so we assume that it is a natural and inescapable part of who we are. Therefore we seldom try to shake ourselves free, and envision alternative futures.”
“Every day millions of people decide to grant their smartphone a bit more control over their lives or try a new and more effective antidepressant drug. In pursuit of health, happiness and power, humans will gradually change first one of their features and then another, and another, until they will no longer be human.”
“Fiction isn’t bad. It is vital. Without commonly accepted stories about things like money, states or corporations, no complex human society can function. We can’t play football unless everyone believes in the same made-up rules, and we can’t enjoy the benefits of markets and courts without similar make-believe stories. But stories are just tools. They shouldn’t become our goals or our yardsticks. When we forget that they are mere fiction, we lose touch with reality. Then we begin entire wars ‘to make a lot of money for the corporation’ or ‘to protect the national interest’. Corporations, money and nations exist only in our imagination. We invented them to serve us; why do we find ourselves sacrificing our life in their service?”
“No clear line separates healing from upgrading. Medicine almost always begins by saving people from falling below the norm, but the same tools and know-how can then be used to surpass the norm.”
“In the past, censorship worked by blocking the flow of information. In the twenty-first century, censorship works by flooding people with irrelevant information.”
“Centuries ago human knowledge increased slowly, so politics and economics changed at a leisurely pace too. Today our knowledge is increasing at breakneck speed, and theoretically we should understand the world better and better. But the very opposite is happening. Our new-found knowledge leads to faster economic, social and political changes; in an attempt to understand what is happening, we accelerate the accumulation of knowledge, which leads only to faster and greater upheavals. Consequently we are less and less able to make sense of the present or forecast the future. In 1016 it was relatively easy to predict how Europe would look in 1050. Sure, dynasties might fall, unknown raiders might invade, and natural disasters might strike; yet it was clear that in 1050 Europe would still be ruled by kings and priests, that it would be an agricultural society, that most of its inhabitants would be peasants, and that it would continue to suffer greatly from famines, plagues and wars. In contrast, in 2016 we have no idea how Europe will look in 2050. We cannot say what kind of political system it will have, how its job market will be structured, or even what kind of bodies its inhabitants will possess.”
“You want to know how super-intelligent cyborgs might treat ordinary flesh-and-blood humans? Better start by investigating how humans treat their less intelligent animal cousins. It’s not a perfect analogy, of course, but it is the best archetype we can actually observe rather than just imagine.”
“Traditionally, life has been divided into two main parts: a period of learning followed by a period of working. Very soon this traditional model will become utterly obsolete, and the only way for humans to stay in the game will be to keep learning throughout their lives, and to reinvent themselves repeatedly.”
Rating: 5 stars
Yevgeniy Brikman
If you enjoyed this post, you may also like my books. If you need help with DevOps, reach out to me at Gruntwork.