Intro. [Recording date: April 27, 2023.]
Russ Roberts: Today is April 27th, 2023, and my guest is consultant and author Luca Dellanna. This is Luca’s second appearance on EconTalk. He was first here in February of 2022, talking about compulsion, self-deception, and the brain.
Our topic for today is his book, Ergodicity, a word I suspect many of you listening have never heard of. Despite the strangeness of the title, I think it’s an incredibly interesting concept, an incredibly important concept, and a lovely book. And, while Luca concedes that many of the ideas in his book come from others, Nassim Taleb and Ole Peters, for example, he has managed to write a superb and completely accessible treatment of a very complicated subject.
Luca, welcome back to EconTalk.
Luca Dellanna: Thank you, Russ, for having me again here.
1:33
Russ Roberts: So, let’s start with your cousin, who is a skier. And, in your book you talk about that when you were 14 you would ski with your cousin, who was six. And, what happened?
Luca Dellanna: Yes. He was very, very fast and much better than me, because he grew up in the French Alps, in a place where you start skiing probably before you start reading. And, he was very good. He eventually competed in the world championship for his age bracket. And then very sadly, one injury after the other; he had to end his professional career. And, from there, one lesson I learned is that it's not the fastest skier who wins the race, but the fastest of those who finish the race.
Russ Roberts: Meaning that the skiers that we see are the ones who have escaped injury, or serious injury, and are allowed to continue their career. This is an obvious point: the skiers who win world championships as adults have obviously avoided career-ending injuries. But, I think we tend to think of those as just bad luck. Some people get lucky; they don't get hurt badly, and they manage to keep skiing. Other people get bad luck, and they are unable to compete after a while because of the damage to their body.
But in fact, I think you have something more profound to say.
Luca Dellanna: Yes, apart from some factors, of course–genetics and everything–there is also something that has to do with time horizons and risk-taking. You might optimize the way you ski to win the race, and that will lead you to take too many risks. On one side, those risks are what brings you to win the race; but on the other side, they are also what can bring you to have a career-ending injury. And of course, that's bad. And, if instead you want to win a championship, you of course need to manage your risks a bit differently.
And, one thing which is very tricky is that we think, 'Yes, Luca, but there are a lot of instances in life in which winning or performing well at a task is something that you do over a very short timeframe.' And, my answer is: It looks so, but to participate in that race, or in the task, you had to have some level of skill; and to get to that level of skill, it took long-term practice. And, unless you had the time and you avoided injuries so that you could train for long enough, you wouldn't be able to be in that race in which you can participate as if it were a short-term endeavor.
Russ Roberts: And, one of the names for this is survivorship bias. And, the way to think about survivorship bias is that we don’t see the people who have lost, who have dropped out, who have been damaged, who have been harmed, who became addicted to drugs because of their obsession with winning every race, who took unhealthy risks. And, one answer to that is, ‘Well, but skiing is dangerous, and it’s always risky. So, are you saying I shouldn’t ski?’ How do I understand the lesson from your cousin?
Luca Dellanna: Well, the lesson is more evident when we talk more concretely about what you should do. What’s the optimal level of risk-taking to maximize your wins?
And, the answer depends: how long do you want your career to be? How long is the period that matters?
Let me give a very concrete example. Imagine that my cousin is an excellent skier and wins 20% of the races in which he participates, but he also takes a lot of risks, and he breaks his leg in 20% of the races in which he participates. And, now a question: How many races is he expected to win on average? And the answer is: It depends on how many races he runs. If he runs one single race, the average number of races that he wins is 0,2 [0.2–Econlib Ed.]: one race multiplied by 20% makes 0,2. However, if he runs two races, he can only participate in the second race if he didn't break his leg during the first one, which means that he has only an 80% chance of participating in the second race; and the 20% chance of winning multiplied by the 80% chance of participating makes 0,16 [0.16–Econlib Ed.] expected wins for the second race.
And, if we average it over two races, 0,2 + 0,16 makes an average of 0,18 [0.18–Econlib Ed.] races won per race. So, we see that the expected number of races that my cousin wins depends on how long the time horizon is.
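Luca's arithmetic generalizes to any number of races. Here is a minimal Python sketch of the same calculation (the function name and parameters are mine, not from the book):

```python
def average_wins_per_race(n_races, p_win=0.2, p_injury=0.2):
    """Average wins per race when a career-ending injury can occur.

    Each race offers a p_win chance of winning, and a p_injury chance
    of an injury that prevents entering any later race.
    """
    p_alive = 1.0   # probability of reaching this race uninjured
    total = 0.0
    for _ in range(n_races):
        total += p_alive * p_win      # expected win from this race
        p_alive *= 1.0 - p_injury     # must stay uninjured to continue
    return total / n_races
```

For one race this gives 0.2; for two races, (0.2 + 0.16)/2 = 0.18, matching the numbers above; and the average keeps falling as the time horizon lengthens.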
So, we can reverse that: depending on the time horizon that you want to have, the same strategy yields different outcomes, and therefore different strategies might be optimal for different time horizons.
Russ Roberts: I just would point out that Luca is Italian; he is recording this from Turin, Italy. And, in Italy, and I think in Europe generally, when you want to have a decimal point, you put a comma. But, in America and elsewhere it’s a point, not a comma. So, it’s 0.2 or 0.16, not comma, but that’s a little bit of translation for the non-Italian listeners.
7:57
Russ Roberts: I think the other way that you share that idea is very powerful. If I tell you you have a 20% chance of winning your races and you run 10 races, and I say, 'So, how many races are you expected to win?' the answer of course seems obvious: it's two. But, that assumes that you finish all of the first nine before you get to the 10th. And, we often just take that for granted.
And, if I would put the biggest lesson of this book into a single phrase, it would be: You only get the average if you’re allowed to continue to play. Even then, you might not get the average because in a small number of plays, you just might be lucky or unlucky, but you have no chance of getting to the average reliably if you’re not allowed to play the game.
And, this is an obvious point, but I have found it profoundly helpful in thinking about risk, uncertainty, and decision-making. The fundamental lesson–and Taleb says it his way, and you say it a little bit differently–Taleb says you need to avoid ruin. You mention ruin also, but you also say you need to avoid game-over. If you can't play the game, you're not going to win. And again, this is a cliché. It's a truism. You could say, 'Everybody knows that.' But, not everybody remembers it. And, the purpose of this conversation, and I think of your book, is to help you remember it.
Luca Dellanna: Exactly. And, the reason we sometimes fail to remember is because, again, of survivorship bias. We look at people around us who are wildly successful, and if we aim to get that success, the reality is that we need to take risks. And, not just the kind of risk which is good–the kind which only has upside and very low downside. If we aim to be, for example, the richest person in the world, or the most successful entrepreneur in the world, we need to take risks which come with the possibility of game-over. And that, of course, means that the more we aim to be number one, the more we need to take risks that will decrease our most likely outcome. And, that is a trade-off we need to be aware of.
And, for most people, what we want is not to be number one, but to have a very good distribution of results so that we have a very good chance of ending up, maybe not number one, but in the top 10% of people. And, that’s a different strategy.
Russ Roberts: And, you might be listening at home saying, 'Well, I don't want to be number one: that's too much. I don't take those kinds of risks.' But, the fact is that if you want to be successful–not just as number one; even not as successful as the top 10%, but the top 20%, forget the exact proportion–if you just want to be successful, much of life–and this is, I think, a profound lesson–much of life has the characteristics of the kind of risks we're talking about.
11:23
Russ Roberts: And, let's turn to Russian roulette, which is again a seemingly irrelevant experience that most of us have never even thought about, and yet I think it's a very useful way to think about risk and life. So, explain Russian roulette and the kind of mistakes people make when they think about it.
Luca Dellanna: Yeah, so Russian roulette is a gambler's game in which–of course, don't try it at home–you take a revolver and you put only one bullet in a cylinder that has places for, say, six bullets; and then you randomly spin the cylinder, so that when you pull the trigger you don't know whether the bullet will fire. And, you point the gun at yourself, and basically you have a one-in-six chance of dying, or a five-in-six chance of surviving and winning a prize. And, the question is: What's the average expected win from playing Russian roulette?
And, the naive answer is five-sixths of the reward. For example, if the reward is $6,000 for playing, you have five chances in six of surviving and winning. So, you will think, 'Of those $6,000, I expect to collect $5,000.' And, that is true only if you play Russian roulette once.
But, if you play Russian roulette many times, the average outcome is not that you win $5,000 per play; the average outcome is that you end up dead.
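The same style of calculation makes the point numerically. This is a rough Python sketch using the numbers from the example (the function and its parameters are my illustration, not from the book):

```python
def average_payout_per_play(n_plays, prize=6000.0, p_survive=5 / 6):
    """Expected payout per play for one player who keeps playing.

    Death is absorbing: you collect the prize for a round only if you
    have survived every round up to and including that one.
    """
    p_alive = 1.0
    total = 0.0
    for _ in range(n_plays):
        p_alive *= p_survive      # survive this round...
        total += p_alive * prize  # ...and only then collect the prize
    return total / n_plays
```

One play gives the naive $5,000; by 100 plays the per-play average has collapsed to a few hundred dollars, because almost every path has already ended in death.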
And, there are a lot of situations which play out like Russian roulette–not that you end up dead, but that you end up in some form of game-over. Maybe you invest in something that goes bankrupt, or maybe some negative event happens–your co-founder commits fraud, or the market changes, or you invest in something and then Covid comes and your business goes under–like, a lot of things.
Russ Roberts: The example you give in the book that I like is: You might have a very ambitious, demanding job, and it pushes you to work long hours; and you're good at it, and you're fine most of the time. But, there comes a stretch, perhaps, where you jeopardize your marriage, or your mental health, or your physical health because you're pushing so hard. And, most of the time–five-sixths, maybe it's even higher–you're successful: you get promoted, your salary goes up. But, if you're going to play for the long haul–which is a really good idea, I think, for most of us–you better be aware of those other risks. Which means that there are some times when you should take a weekend off, or not work 18 hours that day, and so on. Each time that you're challenged to meet a deadline, and you work those 18-hour days, you think, 'Well, just this time.' But if you continually do that, you're playing Russian roulette.
Luca Dellanna: Exactly. And, that's particularly true because, for example, if you have a problem with your marriage, it's very likely to be irreversible. And, when you have a risk of ruin which is irreversible, you cannot average it out.
For example, in a relationship, it’s true that the more time you spend with a person, the more you can deepen the relationship and solve problems, but if you have a problem that causes the relationship to end, then you cannot recover it.
And, this irreversibility is what differentiates risks that you can take from risks that you must be very careful about taking.
Russ Roberts: It's another way to say ruin or game-over. Irreversibility means you can't repair it by relying on the law of large numbers.
So, even though the game is in your favor–and this is, I think, the reason these kinds of risks are so seductive–Russian roulette, five out of six; you could even play one with 99 out of 100–a much bigger cylinder for the bullets–and you think, 'Well, it's a long shot. I'm okay.' But, the problem is that you will not get that average return if you lose once. One loss. It's not like, 'Oh, okay, I lost. I had a bad day.' No. It's over. You're out of the game. You're dead, or your marriage is ruined, or you're fired because under the pressure to perform, you cut corners and do things that are, say, unethical.
It’s a very deep idea that actually I use all the time when I think about my own risks that I face. It’s a very simple idea. The idea is that the average return is not all that counts. It matters whether you can continue to play. But somehow that’s sometimes just too hard to remember. So, try to remember it, folks.
Luca Dellanna: Exactly. And, this is a good objective to aim for. Sometimes when we talk about risk management, we try to balance the risk and think about average cost–like: What's the cost? What's the benefit? How much will it cost me to manage this risk?–and you try to see the cost-benefit.
But, another approach is just to ask yourself how you can maximize the time that you remain in the game–or, in the case of games which are bounded (for example, maybe your business or your career: you are not interested in having it for longer than 50 years), to maximize the chances that for those 35 or 50 years, you can stay in the game. And, that's another perspective on risk management.
Russ Roberts: I would just mention that I took this job here as President of Shalem College with a five-year contract. I think by law in Israel you're not allowed to be the President of a college for more than 12 years. So, one of the things that the Board of my college should worry about is that I might make decisions whose consequences are going to come in 15 years, or even past my current contract–because they don't know whether I'm going to stay and be happy with renewing. And to me, an ethical decision is to treat your situation–in this case, my job–as if I had a lifetime contract and would stay, quote, "forever"–as long as I live. And certainly, the Board should be aware of the incentives that I face. If I'm not behaving that way, they should keep an eye on me, because there are many, many things I can do to push risk into the future rather than the present. And, there are many levels of effort I can put in to make the long run successful versus the short or medium run. And, my natural incentive will be to avoid incurring costs that only benefit the college in the long run. But that would be wrong. So, I want to be aware of that incentive.
So, you can think of it both as an ethical tool and a management tool. For me personally, I should act as if I have skin in the game beyond my formal tenure, and the Board should be aware that that might be hard for me. I like to think it's not, but they should be aware of that.
Luca Dellanna: Exactly. And, there are plenty of situations in which you also have the reverse. I'm thinking, for example, of some consulting companies where it's almost part of the model to expect that the employee stays for just a few years, and therefore there is pressure to make them work overtime and take risks with their health and their marriage–risks whose costs won't materialize during their expected tenure. So, there should be, for example, an ethic on the side of the company to behave as if you expected your employees to stay for their whole career and you couldn't replace them, up to some measure.
So, this is a very good framework that you can apply in a lot of situations where you have two parties with two different time horizons, and you want each party to care about–or at least not to jeopardize–the time horizon of the other.
20:03
Russ Roberts: And now let's move to the distinction between individuals and populations. So, if you play Russian roulette once, you're most likely going to survive and win a prize. If 100 people play at once, roughly 16 or 17% of them are going to die, because you now have the law of large numbers at play.
And, part of this concept–we haven't named it yet, by the way; I've loved that so far we have not used the E-word, or the non-E-word–part of the power of this concept is to think about populations versus individuals. So far, we've talked about short-term versus long-term: you only get the long-term returns that invoke the law of large numbers if you stay in the game. When you start talking about populations versus individuals, you see that same kind of potential difference.
Luca Dellanna: Yeah, exactly. Like, one way that you can invoke the law of large numbers is by having a population.
So, for example, Russian roulette, if played by an individual, has this problem of irreversible losses that moves the expected average win from, let's say, $5,000 toward $0 over the long term. But, if you see it from the point of view of a hypothetical company that employs Russian roulette players and can just hire new people when someone dies, for them Russian roulette will still almost always have an expected win of $5,000 per play.
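A quick simulation shows the company's side of this, the ensemble view. This sketch is my illustration of the idea, not code from the book:

```python
import random

def ensemble_average_payout(n_players, prize=6000.0, p_survive=5 / 6, seed=1):
    """Average one-shot payout across many players (the company's view).

    Each player plays a single round; the firm replaces anyone who dies,
    so no individual death ends the company's game.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_players):
        if rng.random() < p_survive:  # this player survives and collects
            total += prize
    return total / n_players
```

With many players, the result hovers near $5,000 per play: the population collects the ensemble average that no individual player can collect over time.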
Russ Roberts: Yeah, the only problem is that, as you mentioned before, if they get a reputation for overworking their employees, they may have trouble getting new players to go through this process.
But, certainly you're right. I think many consulting firms, many law firms, and others put extremely high demands on their employees for a very short period of time. They do push the costs into the future. And, for the ones who burn out or have mental-health or physical-health issues in the meanwhile–the firms understand that's part of the cost of doing business. Those people drop out; they can't handle it. The others survive and thrive, at least in the short run.
But, I think it's a very useful way of thinking about society-wide risks, when you're making a distinction between the risk to one person versus the risk to society or the world.
And, right now, a lot of us–we've done a bunch of episodes on ChatGPT–and if you think that artificial intelligence [AI] threatens humanity's existence, even with a very small probability, it is prudent to be extremely cautious with respect to it.
Luca Dellanna: Exactly. I think that in particular artificial intelligence is slightly different because there is the problem of competition between countries, for example. So, you can say for example, ‘In my country I have the power to be very cautious, but what if this means that an adversarial country then gets very powerful AI and can take over?’ So, it’s a bit more complex in the case of AI.
But it applies to a lot of other risks where you don't have competition. I'm thinking, for example, about virus labs. What if you have some kind of research in which there's a 99% chance you make some medical advancement, and a 1% chance you might cause a deadly pandemic? Those 1% chances keep accumulating. You might think, 'One percent in one year–it's nothing.' But, if you ask yourself, 'What about my lifetime?', the cumulative probability that you have a pandemic becomes very high. Now, I don't remember the number, but I think it's in the 50-to-70% range. And so, we want to think about the long term.
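A back-of-the-envelope check in Python (my sketch, assuming independent years; not from the episode):

```python
def cumulative_risk(p_per_year, years):
    """Probability of at least one event over the whole period,
    assuming the yearly risks are independent."""
    return 1.0 - (1.0 - p_per_year) ** years
```

A 1%-per-year risk compounds to roughly 50% over a 70-year lifetime (1 - 0.99**70 ≈ 0.505), consistent with the lower end of the range Luca recalls.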
24:11
Russ Roberts: And also, there's the additional point that, when you're talking about low-risk activities–low-risk in terms of probability but not in terms of outcome; again, it's hard to keep these distinct: a low chance of a bad outcome, but when the outcome does occur, it's very bad–one of the problems with those kinds of processes, either in your personal life or in a societal setting, is that the first time you play Russian roulette, you probably are going to be okay. And, each time that you're okay, it lulls you into thinking, 'I'm safe.'
I'll give you a trivial example. In Jerusalem, there's a serious fine for jaywalking. I decided when I moved here that I would not jaywalk at all–not because of the fine, but because the traffic patterns here in Jerusalem, and the intersections where you're typically crossing as a pedestrian, are very unusual. It's a little bit like when you go to England for the first time and the traffic is coming from a different side. That's an even more dramatic example: you should not jaywalk, should not go against the light, when you visit England–and even after you've moved there for a while.
So, here in Jerusalem I just don’t jaywalk even when I can see, ‘Oh, there’s no cars coming,’ because I realized early on that I’m not aware fully of where the cars can come from, and what I think looks safe actually isn’t.
Now, if you didn’t follow that rule, most of the time you’re not going to get hit by a car. Because, you do look around. But, there’s going to be a time–and again after the passage of days and then weeks, you don’t get hit–you start to think, ‘Well, I guess I’m pretty good at this.’
You’re not. You probably are not better at it than you used to be. You’ve just been lucky. And so, you’ve actually put yourself in quite a bit of danger.
And, there are many, many things like that in life where the daily interaction of you with the risk, especially when it’s unlikely or very low probability, can fool you into thinking that you’ve mastered the danger.
Luca Dellanna: Exactly. And, if I can jump on this, there is a very good framework for understanding risk when you have these low-chance events which can have very bad outcomes.
In manufacturing risk management, there is a principle called the pyramid of risk, an idea that comes from the 1930s, from an engineer who realized that, in general, for each deadly accident there are a few accidents in which an injury occurred; for each injury accident, there are a few incidents in which no injury was caused; and for each accident with no injury, there are a few near-misses, where something fails but no one is hurt.
And, this creates a pyramid shape, where you have the deadly accidents at the top, very narrow, and a lot of near-misses at the bottom.
And, the principle is: You do not evaluate whether your behavior is safe based on the injuries at the top of the pyramid. You look at the bottom of the pyramid. If, when you cross the street, you have some near-misses–for example, a car that honks at you–that's a signal that you should treat as if you had been hit by that car, and adjust your behavior. Of course, there are some limitations, but this framework is more useful than not.
28:01
Russ Roberts: That's a fantastic example. I used to share an office at Washington University with Dick Mahoney. Dick was CEO [Chief Executive Officer] of Monsanto, and they have a lot of chemical factories around the world. They're dangerous, and they have a lot of safety procedures in place. And, he had a rule–this is not exactly your point, but it's related–that if anyone died in a factory, the manager of that factory had to be in his office within 24 hours. And, the factories are all over the world; the office is in St. Louis, Missouri. So, you can imagine a horrible tragedy: a worker is killed in an accident, and now the manager gets on an airplane and flies to Missouri. And, I think this is true–it could be he exaggerates, but I doubt it–he told me that when the manager walked into his office, he would ask him, 'Why did you kill that worker?'
It’s a horrible question. And of course, your first reaction is, ‘I didn’t kill him. It was just bad luck.’
And, Dick would say, ‘But, surely there was a procedure you could have put in place that might have prevented his death.’
And so, it’s not exactly the same point. And then, of course, the question is, ‘What do you need to do now to prevent the next one?’
But, this idea of near misses is quite profound. When one of those things does happen, you are jarred and scared. But then it quickly fades away, unless you make a mental effort to pay attention to it. So, I think it’s a lovely example. [More to come, 29:55]