Subscribe now and watch my free trend following VIDEO.

Ep. 197: Jack Horner Interview with Michael Covel on Trend Following Radio

Jack Horner

My guest today is Jack Horner, the world renowned paleontologist. Horner was the technical advisor for all of the “Jurassic Park” films. He is most famous for discovering and naming Maiasaura, providing the first clear evidence that some dinosaurs cared for their young.

The topics are dyslexia and the process of learning.

In this episode of Trend Following Radio we discuss:

  • Passion
  • Why you’d break open a dinosaur egg rather than hold onto it as precious material
  • How Horner attributes his way of thinking to dyslexia
  • The shapeshifting dinosaur hypothesis
  • “Chickenosaurus” and the idea of dinosaurs and chickens intersecting
  • The accuracy of the “Jurassic Park” dinosaurs
  • Whether T. rex was a scavenger or a predator
  • How Horner came to his way of thinking

Listen to this episode:

Jump in!

More:

Michael Covel and Jack Horner
Jack Horner’s Book

Short & Sweet

It always feels good to get the short and sweet positive feedback emails:

Thank you Michael! You’re the best author and speaker out there! Great Job!

Thank you,
David Chen

Thanks! I hope I can continue to do well into the future. The past is a sunk cost.


How can you move forward immediately to Trend Following profits? My books and my Flagship Course and Systems are trusted by clients in 70+ countries.

Also jump in:

Trend Following Podcast Guests
Frequently Asked Questions
Performance
Research
Markets to Trade
Crisis Times
Trading Technology
About Us

Trend Following is for beginners, students and pros in all countries. This is not day trading 5-minute bars, prediction or analyzing fundamentals; it’s Trend Following.

Ep. 195: The Passenger with Michael Covel on Trend Following Radio

The Passenger with Michael Covel on Trend Following Radio

Please enjoy my monologue The Passenger with Michael Covel on Trend Following Radio. This episode may also include great outside guests from my archive.

Listen to this episode:

Jump in!

Ep. 194: Dan Ariely Interview with Michael Covel on Trend Following Radio

Dan Ariely

Subscribe to Trend Following Radio on iTunes

My guest today is Dan Ariely, a professor of psychology and behavioral economics at Duke. He has given great TED Talks with millions of views. Covel and Ariely discuss irrationality and rationality on today’s show, including how we make decisions (with often poor processes).

The topic is his book Predictably Irrational: The Hidden Forces That Shape Our Decisions.

In this episode of Trend Following Radio we discuss:

  • The irrationality of fundamentals in equity markets
  • The wisdom of crowds, constraints and where else our money can go
  • The awarding of the Nobel Prize to professors Shiller and Fama, two famed economists with very different outlooks, and whether it was irrational
  • Macroeconomics vs. microeconomics
  • Lessons learned during his recovery from life-threatening burns
  • Why people lie
  • Why the freedom to do whatever we want and change our mind is the shortest path to making bad decisions
  • How 2008 became an instructive tool for Ariely
  • Why bubbles are some of the most imprecise factors out there
  • Ben Bernanke

In this episode of Trend Following Radio:

  • Why many of our decisions are actually irrational
  • How the wisdom of crowds is often wrong
  • Macroeconomics vs. Microeconomics
  • Why people lie and cheat
  • How the freedom of choice relates to bad decisions
  • Why economic bubbles are not precise factors

Mentions & Resources:

Listen to this episode:

Jump in!

[toggle Title=”View Full Transcript”]MICHAEL: Today on the show I have Dan Ariely. He is a professor of psychology and behavioral economics at Duke. He has some great talks on TED; millions of people have watched his talks. His book, a bestseller, was called Predictably Irrational. I love the way Dan thinks. Enjoy this conversation.

So Dan, I’ve got to ask you out of the gate, you’ve got to have way too much fun in this career of yours.

DAN: I am having lots of fun.

MICHAEL: Seriously, this is not fair. You get to observe the world and say, “Well gosh, everyone looks at it this way, but are they really thinking rationally, or irrationally?” Quite fun. I’m jealous.

DAN: Well, you’re welcome to join us.

MICHAEL: Let me ask, let me jump right in. My world is somewhat in this Wall Street world, this trading world, and I know a lot of your work has crossover appeal, so to speak. So here we are, right now, equity markets all-time highs, in the Federal Reserve we all trust. It looks pretty rational to me, even five years removed from the great crisis – and I want you to talk about that, too, the lessons, how that helped make behavioral economics more to the forefront – but here we are right now at all-time highs. Everything looks pretty rational. Just trust the system. What could go wrong?

DAN: Yeah, we’ve been in this situation before, no?

MICHAEL: Well, yes, this is true. But how do you look at it right now, from your perspective, your lens?

DAN: There’s a couple of questions of where the value of the S&P 500 or whatever index you want to choose, where is it coming from? The moment you believe that it’s rooted in deep asset values, then you can have a rational story for it. But we know that nobody really calculates the value of companies based on their P&L and based on their assets and so on.

The other theory you could have is you say “Oh, it’s the wisdom of crowds. There’s so many small people in this field, everybody’s so intelligent, everybody knows so much, and the average of their opinions are so accurate and precise, this has to be correct.”

Or you could say there are a ton of cases in which lots of people are wrong. And not only that, there’s a ton of places in which people think short-term and not long-term.

And then there’s another issue, which is constraints. If you think about all the money in the markets and you say “Where else could this money go?”, and if it has nowhere else to go, then what could the markets do but appreciate? But not because the fundamentals are better, not because the outlook is better, not because it’s a long-term discounted value for the future, but just because of current constraints.

I think if you look at those four explanations – people value stocks seriously, everybody else is really smart, then you say everybody could be not so smart and just following the herd, and money has nowhere to go – for me, the last two explanations are much more appealing, and I think there’s much more truth to the problems with the markets. Now, of course, if the markets are high, you might want to participate in the joy of the markets increasing, but it doesn’t mean that fundamentally, it’s rational.

MICHAEL: Dan, when you look at this – I guess this spring/summer, when they made the announcements – so here you are, you’re in this world where the words “rational” and “irrational” have become so much a part of your daily life, and then I see the Nobel Prize Committee hand out the Nobel Prizes to two very smart men on very different sides of the spectrum. It just seems to add to the irrational aspect. I don’t know how you viewed that, the awarding of prizes to Professors Shiller and Fama, two men with very different outlooks on the world.

DAN: Yeah, I think it was really a hoot to see them both coming. Fama, of course, is one of the big believers in the rational market and that the markets are rational and perfect and ideal and so on, and Shiller for a long time has basically – he was one of the early predictors of the financial crisis in terms of housing markets, and he talks about bubbles and he talks about the fact that markets cannot necessarily get people to behave better, and that people are short-term thinking and so on.

I don’t know if the Nobel Committee was known for its sense of humor or they thought that was a joke, but this was not the forefront. I think they really wanted to give it to Shiller, if I had to speculate. I think if they had to strengthen – I don’t know what the Nobel Prize is for, but if it’s looking forward and saying “Which part of economics do we want to strengthen, and which part of the prizes do we feel embarrassed that we’ve given before?” – giving prizes in the past for derivatives and so on – I think they really should’ve given it to Shiller, but it was probably too difficult for them to undermine economics to such a degree. At the end of the day, the people who are giving the Nobel Prize in economics are connected to economics, so they probably don’t want to spit in the well they’re drinking from too much.

But I think that this is a time to really question economics in a deep sense, and I don’t think we’ve done enough of that.

MICHAEL: Let me let you elaborate. When you say question it, most people look at the TV and they say, “Look, there’s Ben Bernanke; he’s the head of the Fed; the markets have gone straight up. He’s a very smart man, and he studied the crashes in ’29, ’30, whatnot, and our markets right now are doing great. Why should I not trust this economist?”

DAN: Most of my work is in microeconomics. I look at individual behavior. Macroeconomics, of course, is much more complex, because macroeconomics depends on so many other things. And the truth is, we understand macroeconomics very little.

But think about something like bubbles. Bubbles are one of the most predictable phenomena we have. You could take 20 people, you put them in a room and you sell them some asset – it could be a stock, it could be a fictitious asset, whatever it is – and observing bubbles is the first thing that you see. It’s really incredibly easy. And it’s hard to know from within a bubble whether you’re in a bubble or not. You can only know at the end of the day.

Look, if you look at macro theories, these are nice theories, but at the end of the day, we have very little evidence for them. If somebody asked me to build a bridge based on a theory that is at the quality of macroeconomics, I would be incredibly worried. And we’re building, of course, much more than bridges. If somebody asked me to go into surgery based on a theory that was as sound as the theories we have on the relationship between what the Fed is doing and what the real economy is doing, I would be incredibly worried.

So we have these general explanations that might capture some of the variance and some of the essence, but they’re so imprecise that I think we should be incredibly leery of them. This is a little bit the question of how you want to bet.

You know what’s called Pascal’s Wager for the existence of God? Basically, the idea is that if you feel God doesn’t exist, don’t worry about it. But if you think there’s 0.0001%, a tiny percent that God exists, you might as well behave as if you believe, because the penalty, multiplied by infinity, if you go to heaven or hell, is so big that it’s worth it.

I think the economy is a little bit like that, that we are taking these bets, and the payoffs are very, very non-symmetrical. If Bernanke is right, we’re going to do okay, but if he is wrong, we’re going to really, really pay for it for a very, very long time. You saw the amount of financial devastation we had by these mistakes. I think it way outweighs the benefit that we got from taking this risk.

So I think we need to understand how economics is important for the way we function. We need to study it to a much higher degree. We need to understand it, and we need to have something like an FDA procedure for approving which economic theories we’re going to let dictate our lives. In the same way that before, you put a medication up and say “Yes, let’s give people this medication or this hip” or whatever it is, I think the same thing should be held for theories, and before we let them rule our lives, we should make sure that they pass some threshold of proof that they’re actually useful for us.

MICHAEL: Go back to the fall of ’08. Why, from your perspective, your world, your work in behavioral economics, why did 2008 become such an instructive tool for you?

DAN: I think there are two reasons for that. The first one is that I think for me, the financial crisis was really not about the housing market. It was a financial crisis due to conflicts of interest. I’ve done lots of work in conflicts of interest, and it turns out that when you see bad behavior – you see a CEO behaving badly, or some banker, something happening – people tend to point fingers and say “These are just bad people. We have good people, we have bad people. We are good; some other people are bad. And as long as you eliminate the bad people, everything would be okay.”

But we’re finding that this is not the case. We’re finding that lots of people can bend reality to a small degree, look at things from their perspective, and be biased in their worldview. Interestingly, this is something that every sports fan knows very well. Every sports fan knows that if a referee calls a call against your team, you feel the referee is vicious, blind, stupid, something. You can’t help but see the game from your subjective, desirable point of view.

And the same thing happens in other places. If I gave you a ton of money to see mortgage-backed securities as good, of course you would see them as good. And I’m not saying you would lie; you would just see them as better than they are. There’s lots of other things that make it even more likely. You don’t see the victims, everybody else is doing it, and so on. I think this is important because the conflicts of interest in Wall Street are just tremendous – I’m sure you know it as well as everybody in the field – and we haven’t really done anything to fix that.

The second thing is that I think it was a very different crisis than the .com boom and bust, for example. The reason is that in the .com bust, you could tell yourself, “I should’ve seen it.” You could tell yourself, “Yes, you know what? It was overvalued and over-hyped, and I shouldn’t have gotten into it.” I think 2008 feels much more like a conspiracy to people. It feels much more like daytime robbery, and because of that, it created tremendous loss of trust. This is tremendously sad, because if you look from 2008 to now, the markets have done – we had this tremendous variance.

But what happened is that lots of people, as things were getting down, lost their trust in the market and got out, and a lot of those people are people that needed that money for retirement. They said “I can’t risk it anymore,” they got out of the market, and actually we’ve not enjoyed any of the increases.

So it’s been a tremendously sad sequence that eroded the trust between people and banking in a very sad way, and I think it’s really terrible that none of the fixes have really aimed at rebuilding trust. I think that until we rebuild trust – and the markets could go on, but the individuals who really need these institutions are not going to get any value from them.

MICHAEL: It seems like instead of rebuilding trust, it almost seems like the government has offered a bribe, and the bribe is “Hey, don’t mind that you don’t receive interest income anymore; don’t mind that derivatives were a huge part of 2008. Don’t mind that Bear Stearns and Lehman Brothers went under, but we saved Goldman Sachs and Morgan Stanley. Don’t mind any of that stuff that a rational person would probably observe and say ‘hey, what’s going on?’ Just look at the stock market. It’s going up. Trust us.”

DAN: It’s really terrible. And all the fines the government is giving to financial institutions I think are actually not helping, because they just seem like cover-up. “Okay, so these people are stealing lots of money and then they get to give some of it away.” The story about how much Goldman Sachs got back from its investment in AIG, I think, is not going to go away for a long time.

The level of misuse and abuse of trust and funds I think is going to hurt the American people for a long time. It’s not going to hurt the bankers, because again, there’s some money that has to go into the banking system, and they just take their share. Basically imagine that there’s some money in the world that has to go through banking, and every time it does, you get to take your 1%. It’s a really good business to be in if what you want to do is maximize your profit.

MICHAEL: I think J.P. Morgan made money 63 out of 63 days in the first quarter. I want a part of that business.

DAN: Yep. And the other sad part of it is that the government has made it more difficult to start new banks. I think that, at least for me, there’s a realization that banks are more important than I thought they were. I always realized that banks were important, but I think now I realize they’re even more important for the functioning of society. When lending slowed down and so on, the economy basically had such difficulty.

But there’s no real competition between banks. I mean, think about it: if you had this model of the world, of supply/demand and competition, and you said companies should basically compete, and they compete so fiercely until the production cost and the revenues are almost equal. So you would basically say – this is the logic for competition – you say you increase competition, everybody’s competing for the consumer; the companies would have to reduce their price to be competitive, otherwise the consumer would switch. And eventually what will happen is companies will charge people their production costs and they will make almost no money. Company 1 is selling you a widget, Company 2, they compete and so on.

In all the industries in the world, I think the industry that best fits this theoretical prediction is banking. It’s really easy to switch from one financial institution to another, they’re basically all selling the same thing, there are many of them, they are spending all their time competing. You would predict that their profitability would go down to zero, but that’s not really what we’re seeing. They make a ton of money, which of course gets us to question very, very deeply this model of competition, and whether this is really a free market.

And I think it’s not. I’m not talking just about the interest rates fixing and so on and the collusion that is going on. That’s of course illegal. But even in the level of non-collusion, or non-explicit collusion, I think there’s all kinds of things that are just very, very rotten in this industry.

MICHAEL: Building on your point, though, about the banks, if you look at another industry, let’s say like airlines – and I was on flights quite a bit this year – generally what we see over the decades is improvement in airline safety. We learn from our mistakes and things get better. And that doesn’t happen in the banking world, for whatever reason. We can look at those reasons really fast – power, money, politics. But banking doesn’t get better; it just seemingly gets more to the edge of the cliff after each crisis.

DAN: Yeah, and I think there are a couple of good reasons for it and a couple of terrible reasons. The good reason is that it’s really hard to learn in banking. Think about an airline. In an airline, of course, if something catastrophic goes wrong, people get injured, people die, and the airline learns very quickly. It’s a singular event; you could put your finger on it and you can figure out what it is. But also, when things go slightly wrong, you can figure out what has gone bad and you can learn from it.

In banking, it is not really clear how you learn. Like what lessons can you learn? If you believe that the world is rational and you believe that everybody else is doing the same thing, then it’s really tough to learn about mistakes, because you said, “Oh, this is just the rational thing to do. This is what should’ve happened. This is a reflection of all the knowledge in the world. Let’s keep on doing this.” The learning cycle is very, very tough. Think about it: under what conditions would you figure out that the investment strategy you had was wrong?

And then the other thing is that – think about something simple, like going through a red light. Imagine that you think that going through a red light has a 1% chance that you will die and a 99% chance that you will get through safely, and one day you decide to take the risk. We know it’s a 1% event. You went through it and nothing bad happened. What do you think the next day? Do you say, “Oh, it was a 1% chance and I luckily passed it”? Or do you say, “Oh, it must mean that the risk is really half a percent,” and do it again and again and again?

So there’s a vicious cycle that when you don’t know the probabilities very well, you just estimate them, and they are low probabilities, the more you experience something, the more it gives you the wrong feedback, as if the probability is actually lower. So people could take tremendous risk, if you think about the London Whale or things like that, and not realize the risk that they are taking.

And because we don’t really understand risk very well, they basically operate under the psychological notion that they’re actually taking a reasonable risk, when in fact they’re taking a tremendous and highly devastating risk, and once they get a huge failure, of course, it’s too late to learn from it. It’s like an accident that happens.

So I think that if you think about the environment and you say what environment can people really learn from, and what environments are there that make it very, very tough to learn, I think the marketplace is a place where it’s really tough for people to learn. And because of that, by the way, I would regulate it to a much higher degree, in the same way that we don’t let people drive through red lights because we say people are going to take unreasonable risks and they’re not going to think about it correctly. I think the same thing applies to many activities in the stock market.

MICHAEL: Dan, since you brought up red lights, it’s actually an amazing system of people where you would think they might not respect each other, but since there really aren’t a lot of traffic laws, so to speak, it works with this chaos. I don’t know if it would ever work in America, but it works there. It’s amazing.

DAN: And of course, it’s a question of how fast the cars are and what else the drivers do and how they drive and so on. Nobody really wants to die, but do people naturally take too much risk? In the U.S., some states regulate that you can’t text and drive. You know what happens in those states? The number of people dying and getting injured from texting and driving has increased. And the reason is that once this legislation was passed, people started texting below the wheel rather than above the wheel, making it much, much more dangerous, and killing themselves at higher frequency.

MICHAEL: Let me shift out. We’ve had this great macro conversation about banking, but I want to shift into some of your wheelhouse, because your world is far beyond just Wall Street, and behavioral economics, this is the everyday life study. There’s a quote that I saw from you, and I want to paraphrase a little bit; it says “the freedom to do whatever we want and change our minds at any point is the shortest path to bad decisions.” So this great American ideal of all this freedom and being able to do what you want and change your mind – but in actuality, it’s not that great for making good decisions, is it?

DAN: It is not, and this is because temptation is really a devastating force in the economy, and temptation is only getting worse and worse. Imagine two worlds: imagine a world in which I layer your desk every morning with fresh donuts and give you a choice each day of whether you want to eat any and how many, versus a world in which I ask you one time whether you want this every day or not. Making the choice every day is incredibly damaging, because every day we can say, “Oh, it’s just for today and it’s just one time and it’s just this thing and it’s really tempting.”

This is, by the way, why financial education doesn’t really work. There was a recent meta-analysis of all the financial literacy programs ever created, showing that they basically don’t work, that the best of them improve financial outcomes by 6%, which is very small, and the effect goes down over time and is worse for the poor. The reason is that you can know something in principle, but acting on it every time is really, really tough.

We are designing the world that we live in, and we can design the world for people to make better decisions, or we can design it so people make worse decisions, and I think often we design the world to tempt people in the moment, and we get much worse outcomes.

MICHAEL: Yeah, tempting at the moment. That feels like a very apt description of current day life. While we’re talking about current day life, why do people lie, Dan?

DAN: Why do people lie?

MICHAEL: Why do people lie?

DAN: Of course, there’s the simple ones, which are the white lies. You know, “Honey, how do I look in that dress?” kind of stuff. But what we find in the experiments is that people don’t lie because they do some kind of cost-benefit analysis and say “what do I stand to gain, what do I stand to lose, this is my long-term interest.”

People actually lie because it gives them something at the moment, and they only do it under the condition that they can rationalize the lie. So when something is rationalizable, where you could say “this is actually for the benefit of the group” or “everybody else is doing it,” the tendency to do so is much, much higher. There’s a contrast between downloading illegal music from the internet and going to a restaurant and escaping without paying.

The difference between them is in one of them, you would feel bad about it; in the other one, you would not. What makes you feel good and bad is not about the probability of being caught, and it’s not about the size of the punishment. It’s about your internal feeling that something is reasonable versus not. It’s your internal ability to rationalize. This is the root of all dishonesty. By the way, this is not true for psychopaths, but if we take psychopaths out of the…

MICHAEL: If we take psychopaths out of the equation, we’re all right.

DAN: Because we can rationalize, but the psychopaths are very, very different. But the non-psychopaths, we all lie and cheat to the extent that we can rationalize it and explain why this is actually okay, even when it’s not.

MICHAEL: Hey listen, as we wrap up this morning, I want to ask you a couple different questions, and I think they’re interrelated. But I want to really know how you – obviously, to go in your path, very curious; you like to find out how things work, and you’ve probably been like that since a young man, I’m guessing. But I also know you had a fairly devastating injury as a young man as well, burns over a large portion of your body.

I wish you could maybe explain how you came into this field of behavioral economics – and you’re one of the leaders in it – and then maybe also, as a dovetail into that, explain some of the lessons that you came through, maybe some of the “aha” moments that you had going through your recovery process.

DAN: When I was a burn patient, there were lots of things that I saw that were done in what I thought was a wrong way, all the way from how they treated some patients to some treatments to how they used to remove bandages from burn patients. I saw this tremendous amount of misery, on my part and the part of other patients, and I had a desire to try and get some kind of basic understanding of how this should be done. It struck me as incredible that a lot of doctors were working on their intuition about what was right and what was wrong. And in my situation, it wasn’t clear to me that they had the right intuition.

When I started studying this systematically, when I went to college some years later, I learned that often they had the wrong intuition. I’ll give you an example from this week. We are now working on a project on how doctors communicate really bad news to patients. And you know what? They do it in a terrible, terrible way. They don’t understand what the patients understand and what the patients don’t understand, and they give them news in ways that are really unhelpful and uninformative.

So there’s lots of things like that that I’ve been very interested in trying to fix. I think that’s also what’s driving me in behavioral economics. I look at all the things that we’re doing badly and I say “Can’t we do better? Can’t we just learn a little bit more, have some basic principles that tell us how to do this and do them better?”

When you look at the physical world, it is truly amazing what we’ve built. We’ve built buildings and cars and technology. It’s unbelievable. But when it comes to the world of understanding human nature, that has to do with how do we create educational programs and how we teach people and how we create risk and how we understand savings and so on, we haven’t really progressed that much. And I hope that we will be able to learn much more about this and make the world a truly better place.

MICHAEL: Hey, Dan, listen, I appreciate your time today. Where’s the best place that people can go and reach out and find out more about your work, maybe you – and I think there’s some online courses that anybody can take? Where can people go?

DAN: My blog is DanAriely.com. The course will probably start again in March or February. It’s an online course, open to everybody, and there’s lots of other information on my blog.

MICHAEL: Hey Dan, thanks. I could pick your mind all day. I know you don’t have all day, but I love your insights, and I appreciate your time today.

DAN: Thank you. Looking forward to next time.

MICHAEL: Take care.

DAN: Bye for now.

MICHAEL: Bye bye.
[/toggle]

Rate and Review Trend Following Radio on iTunes

Join over 12,000 others on the Trend Following mailing list.

Have a question or comment about this episode? Post it below.

If you enjoy this interview, you may also like my other podcasts, such as:

Dan Collins Podcast Interview

Victor Ricciardi

My thoughts on Herd Behavior

Ep. 193: Gerd Gigerenzer Interview with Michael Covel on Trend Following Radio

Gerd Gigerenzer

Subscribe to Trend Following Radio on iTunes

My guest today is Gerd Gigerenzer, the director of the Max Planck Institute for Human Development in Berlin, and a former professor of psychology at the University of Chicago. Gerd is also the director of the Harding Center for Risk Literacy (named for David Harding, the head of trend following firm Winton Capital).

The topic is heuristics.

In this episode of Trend Following Radio we discuss:

  • Uncertainty
  • Comparing decisions to baseball (gaze heuristic)
  • Complex problems and simple solutions
  • Using price action as a decision making cue
  • Unconscious heuristics
  • The art of knowing what one doesn’t have to know
  • The less is more effect
  • The “Miracle on the Hudson” from a few years ago as a case in point illustrating heuristics
  • The idea of an adaptive toolbox
  • The element of surprise in Gigerenzer’s work
  • The distinction between risk and uncertainty
  • Intuition vs. rationality

In this episode of Trend Following Radio:

  • Why uncertainty and risk are not the same thing
  • How we use heuristics to make decisions
  • Why complex problems don’t always require complex solutions
  • Why heuristics and conscious reasoning are both important
  • “Less is more” – the art of knowing what you don’t need to know
  • How these methods are applicable in investing, management, law, and many other areas
  • What defensive decision making is and why you need to know

Listen to this episode:

Jump in!

[toggle Title=”View Full Transcript”]MICHAEL: Today on the show, I have Gerd Gigerenzer. Gerd is the director at the Max Planck Institute for Human Development in Berlin, and he’s a former professor of psychology at the University of Chicago. Gerd is also the director of the Harding Center for Risk Literacy. That would be the David Harding Center for Risk Literacy.

Our conversation today is about heuristics, and for those of you that trade, for those of you that invest, and for those of you that just want to navigate risk and uncertainty in your life, my conversation with Gerd is intriguing. Frankly, I find it fascinating. I find the topics that he goes into fascinating. His work is the foundation, the philosophical foundation, of trend following success, even if he did not set out for that to be the case. I hope you enjoy this conversation.

Hi Gerd, this is Mike Covel. How are you?

GERD: Yeah, I’m fine, so far. There’s no snow yet.

MICHAEL: Where are you today?

GERD: I’m in Berlin, at the Institute. And you?

MICHAEL: I am right outside Washington, D.C. right now.

GERD: Okay.

MICHAEL: Let me jump right in, Professor. I think your work and what you’re doing – in my world, in my trading world, much of your work is the foundational spine, so to speak, on good trading and the good philosophy behind good trading. I think where I’m going with that is, for example, the big question: how do we all make inferences about the world that we live in with limited time and knowledge? I mean, that gets right at where you start in with your work, doesn’t it?

GERD: Yes, totally right. But it’s not what most economists are talking about, and where the uncertainty is confused with risk, with known risks, and where one tries to model people’s behavior as if they could calculate all the risks. What I’m doing is to try to develop an alternative, or to find tools that actually can deal with uncertainty, and heuristics are some of these tools.

MICHAEL: One of the heuristics that I know you’ve used quite a bit in explaining this to all different types of audiences is the gaze heuristic, and specifically in sports, and specifically I’ve heard you use it in baseball. As a guy who once played baseball for a long time – and I was a baseball catcher – why don’t you explain that, though, from your perspective, and why that’s such a great way to lead into this subject?

GERD: The question is how does an outfielder catch a ball? If you look into explanations of that, you will find that many people think if the problem is complex, we need a complex solution. What would it be? Obviously, that the outfielder somehow calculates the trajectory of the ball and runs to the point where it’s going to go. You can find people like Richard Dawkins, in his Selfish Gene, writing exactly that. He puts the “as if” in there. This is an economics term. So behave as if one would be able to compute and know everything.

I’m interested in how actual people make decisions – here, outfielders – and a number of experiments show that they don’t compute trajectories and don’t run to a computed point, but rely on a simple heuristic that can actually do the job in the limited time, and in a situation where one does not have the information to estimate all the parameters you would need to determine the right trajectory, such as the initial distance, the initial velocity, the air resistance, the direction of the wind, spin, and so on.

One of the simplest heuristics, among a number of heuristics that outfielders use, is the gaze heuristic. It works if the ball is already high up in the air, and it’s very simple. It consists of three building blocks. First, fixate your eye on the ball; second, start running; and third, adjust your running speed so that the angle of gaze remains constant. And then you will end up at the point where the ball ends up.

So here is a very different philosophy. There is a complex problem, and what’s needed is a simple solution, and when we look hard enough, we often can find it.
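The three building blocks Gigerenzer describes – fixate the ball, run, and adjust your speed so the gaze angle stays constant – can be sketched as a toy simulation. This is an illustrative sketch only, not anything from the interview; the launch speed, angle, and time step are arbitrary assumptions.

```python
import math

def catch_with_gaze_heuristic(v0=30.0, launch_deg=60.0, g=9.81, dt=0.01):
    """Toy model: once the ball is up, the fielder fixes the gaze angle
    and moves so that the angle stays constant until the ball lands."""
    vx = v0 * math.cos(math.radians(launch_deg))
    vy = v0 * math.sin(math.radians(launch_deg))
    x_fielder, theta = 0.0, None
    t = dt
    while True:
        x_ball = vx * t
        y_ball = vy * t - 0.5 * g * t * t
        if y_ball <= 0:                      # ball has landed
            break
        if theta is None:                    # building block 1: fixate the ball
            theta = math.atan2(y_ball, x_ball - x_fielder)
        # building blocks 2 and 3: run, keeping the gaze angle at theta
        x_fielder = x_ball - y_ball / math.tan(theta)
        t += dt
    landing_x = vx * (2 * vy / g)            # true landing point, for comparison
    return x_fielder, landing_x
```

The fielder never computes the trajectory – no wind, spin, or air-resistance parameters appear anywhere – yet ends up within a step or two of where the ball comes down.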

MICHAEL: In my world, and in some of the books that I’ve written on trading, one of the simple heuristics that people use is they say “How do I make a good investing decision? There’s so much information out there, there are so many different variables; how do I make a good investing decision?” And many traders have figured out, why don’t we just focus on one piece of information?

For example, the price itself. So literally, if the price is going up, I want to be long with that instrument; if the price is going down, I want to be short with that instrument. So a lot of traders have really come at – whether they did it knowing about your work, or your work is the great foundation for why they’ve been successful – but this simple heuristic of using price action as a decision-making cue has worked extremely well.
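The one-piece-of-information rule Covel describes takes only a few lines of code. This is a hypothetical illustration; the lookback window length is an assumption for the example, not something discussed in the interview.

```python
def trend_signal(prices, lookback=20):
    """One-reason decision rule: go long if price rose over the lookback
    window, short if it fell, flat otherwise. All other data is ignored."""
    signals = []
    for i, price in enumerate(prices):
        if i < lookback:
            signals.append(0)              # not enough history yet
        elif price > prices[i - lookback]:
            signals.append(1)              # price going up: be long
        elif price < prices[i - lookback]:
            signals.append(-1)             # price going down: be short
        else:
            signals.append(0)
    return signals
```

Every input except the price itself is deliberately ignored – exactly the “one reason” structure Gigerenzer describes below.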

GERD: Yeah. In trading, we have the same two philosophies: it’s a complex and difficult problem, and many are looking for complex solutions. You notice that it ranges from the traditional finance model, from Markowitz optimizing, to all kinds of computer-based and highly sophisticated calculation procedures. The other alternative is realizing this is not a problem of known risk. Trading is not trading in a casino. You trade in an uncertain world, and in an uncertain world, the optimization models will not necessarily work.

So we need something that’s robust, something simple, and the heuristic you described is a member of a class of heuristics called one-reason decision-making. You try to figure out the single most important reason, and then you ignore all the rest. It looks as if that would be irrational, but only if you believe that everything could be calculated. Many studies in social psychology and behavioral economics have tried to show that people just rely on one reason and ignore the rest, and they concluded that this is irrational.

But these studies overlook the important distinction between a world of risk and a world of uncertainty. In a world of risk, ignoring relevant information is irrational, or at least it’s a sign that the problem is not so important. But not in a world of uncertainty – here, and this can be shown mathematically, good decisions require ignoring part of the information, and if you try to make a complete pro/con list, then you will likely fail.

MICHAEL: As you bring up the word “fail,” I can think of an example from your work where there was not much time for decision-making, and the pilot had to use various heuristics to make his decisions to save many lives, and that would’ve been the miracle on the Hudson River a few years ago. Why don’t you describe, through your work lens, why that’s such a great example to teach with?

GERD: The heuristics are often used unconsciously, like the gaze heuristic. If you interview a baseball outfielder, you will find out that most of them cannot say how they do what they do so well. So there’s more in our mind than we can describe. The same heuristics can be used deliberately, and the miracle on the Hudson River is a case in point. As you will recall, the plane took off from LaGuardia Airport, and within a few minutes, something totally unexpected happened: a flock of Canadian geese collided with the plane.

Now, the modern engines are built in a way to be able to digest birds, but not Canadian geese; they are too fat. The unlikely event happened that they flew into both engines, and it got very quiet in the plane, and the pilots turned around and had to make an important decision: will we make it to LaGuardia Airport, or will we hit the ground before it? That’s a decision about life and death. How did they do this?

One might conclude it’s a complex problem, so we need to compute the trajectory of this plane. But they didn’t. They used the same heuristic as the baseball outfielder uses, the gaze heuristic, now in a different situation. It means you look through the windshield, in the cockpit, and fixate the tower. And if the tower goes up in your windshield, you will not make it; you will hit the ground before. So here is an example of the same heuristic that many people use unconsciously now being used consciously, and it is more accurate than calculations, and in any case, much faster. So the pilots had time to do other important things.

It’s also a lesson that one can use this study of the heuristics people intuitively use in order to inform experts how to make better decisions. And it’s also an illustration that the common opposition between heuristics and conscious reasoning is wrong. So if you look in a famous book by Daniel Kahneman, you will hear a message about two systems. In one, they are heuristics and they are unconscious and they are error-prone. It’s not true. Every heuristic we have studied is used both unconsciously and consciously.

MICHAEL: It’s terribly fascinating. But your work – and I don’t think you have any problem with the controversy of this – you’re coming at these subjects of risk and uncertainty in a way that the establishment is not comfortable with. Many careers have been built off going a different direction than where you’re going. I relate very strongly to your direction, hence the reason I wanted to have you on my program.

GERD: That’s true. My work has caused many controversies. But that’s nothing bad. The science is there to discuss, to debate, to change, to improve. And the distinction between risk and uncertainty that is very fundamental to my research is not one that I came up with. You can find it in the work of the economist Knight in the 1920s; you can find it later. But it has not been taken seriously.

Moreover, most of the intellectual effort has been put into building ways and methods that reduce uncertainty to known risks. Then we can use the probability measurements that we have. Probability is a wonderful instrument, and I rely on it, but it has its place. It’s not the only tool in our toolbox. And thinking that probability theory or Bayesian theory or any other tool is the only tool that can solve all problems would be like mistaking a hammer for an entire toolbox, and then thinking that everything in the world is a nail. It isn’t. There are screws, and you need different tools, like screwdrivers.

So the entire idea of an adaptive toolbox, of different strategies that the mind uses, is, I know, something that not many people are – or at least some people are not comfortable with, but it’s the same way our body is built. It has not one super organ, but it has many, and there’s a reason for that, because they work better.

MICHAEL: Let me slide into gut feelings. Because if I was to ask you, are you intuitive or rational, I think I know where you’re going to go, but how do you answer that when someone says “Gerd, are you an intuitive man or a rational man?”

GERD: It’s not an “or.” That’s an error. It’s not an opposition, although you can read this again and again. We need both. We need our brains, we need our guts. More precisely, we need deliberate thinking, but also sometimes we need to trust our intuition. The only question is when. It’s not a question whether intuition is superior to deliberate thinking, or deliberate thinking superior to intuition, as many of my dear colleagues believe. No, that’s not the point.

You can show that good expertise is almost impossible without good intuitions. A composer needs intuition to compose. He or she cannot calculate the piece. A chicken sexer needs intuition – do you know what chicken sexing is? Chicken sexing is the art of finding out whether a one-day-old chicken is male or female. If you ask a chicken sexer how they do what they do, they cannot tell. It’s intuitive. But nevertheless, they can do this. And then there are other problems where it’s better to calculate, to do explicit pro and con lists.

And the unlucky attitude in much of social science is to put the one against the other one and look down at one of these. The intuition is based, according to my own research, often on simple heuristics. Why? Because intuition mostly has to do with real world problems that are characterized by uncertainty, not by known risk. If you play in the casino, roulette, you can calculate how much you will lose in the long run. You don’t need any intuition. But if you want to find out whom to trust, whom to marry, what job to take, what to do with the rest of your life, you can‘t calculate that.

Only parts can be calculated. There is a risk, and this is what we usually call experience. But it’s an experience that is not in language, that we cannot express, and this is why many people are suspicious of it.

MICHAEL: One of the things that I think is really interesting about your work is that you are tackling, at least in a way that people can understand and it can be useful then when it happens, is the element of surprise. And of course, you can’t necessarily prepare for a surprise; it’s a surprise. But why don’t you talk about surprise in your work and how you feel about it, what you’ve learned, and what’s useful for the audience to think about.

GERD: The moment one tries to reduce all forms of uncertainty to known risk, surprise is out of the question, because nothing can happen. Nothing new, at least. For instance, the recent financial crisis is an example of how the standard models for estimating risk have overlooked every crisis and prevented none, and we’re surprised that things could happen that should not so easily happen. So dealing with uncertainty, and devising and testing tools for uncertainty, is also dealing with surprise. Also, the world needs flexible methods and flexible heuristics in order to adapt to new situations that are quite unexpected.

MICHAEL: When I was preparing for our conversation today, I was thinking one of the great lines that I’ve seen in your work is “the art of knowing what one doesn’t have to know.” Now, that’s slightly going back and talking about some of the issues we talked about in the beginning of this conversation, but the idea of knowing what one doesn’t have to know, the “less is more,” it’s a fairly – I think if most people think about it, it’s simple, it’s intuitive, this makes sense. It’s maybe common sense.

But as a society, we seem in many, many elements of our society – not just my world, for example, trading, but many elements of society, we have got to this point where complexity and more and more data has become overwhelming. I think people – for example, we have all these devices. We have the cell phones, we have the computers. The information overload never stops. And I really think a large number of the population think that all this extra information is helping, but if you just stop for a second and pause, you’ve kind of got to see yourself, “how does this help? It’s just distracting me.”

GERD: Yeah. This is part of the belief that we also talked about, that complex problems need complex solutions. It’s also the hope of big data: that you can find your needle in the haystack just by having more – not by knowing anything, but just by having more computation. In an uncertain world, this is not correct, because you need sufficiently simple solutions in order to make better decisions.

Here is a very simple illustration of this, what we call the “less is more” effect. A “less is more” effect is a situation where, beyond a certain amount, knowing more makes your performance deteriorate. Dan Goldstein and I studied how people answer simple trivia questions, such as “Which city has more inhabitants, Milwaukee or Detroit?” About 60% of the Americans that we tested got the right answer, Detroit.

If you ask Germans, what we found is that 90% of the Germans got the answer right, not because they knew more, but because they knew less. Most of the Germans had not even heard of Milwaukee, only of Detroit, and they relied on a simple heuristic that we call the recognition heuristic. You’ve heard of Detroit, not of Milwaukee, so it’s probably Detroit. The Americans could not rely on this heuristic; they have heard of both, and they need to rely on the facts, on recall.
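The recognition heuristic is simple enough to state as code. This is a minimal sketch of the rule as Gigerenzer describes it, applied to his Milwaukee/Detroit example.

```python
def recognition_heuristic(option_a, option_b, recognized):
    """If exactly one option is recognized, infer it has the larger value
    (here: more inhabitants). Otherwise the heuristic does not apply."""
    a_known = option_a in recognized
    b_known = option_b in recognized
    if a_known and not b_known:
        return option_a
    if b_known and not a_known:
        return option_b
    return None  # both or neither recognized: fall back on other knowledge
```

A German who has heard only of Detroit picks Detroit; an American who knows both cities gets no help from the heuristic and must fall back on recalled facts, which is why partial ignorance can outperform more knowledge here.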

Here is a situation where less is more, where we can show and prove that a sufficient degree of ignorance can actually help. The art of knowing what you don’t need to know can be a conscious version of these types of heuristics, which are usually used unconsciously. It means realizing that in an uncertain world, the attempt to calculate everything is an illusion of certainty, and it will probably lead to failure.

Good decision-making in an uncertain world needs to find a balance between ignoring and knowing something, and this balance can be mathematically described by the so-called bias-variance dilemma, which I’m not going into here, and which shows us that when we need to make estimations, we should not try to fit every data point; we have to ignore something and try to be a little bit biased in order to make better decisions.

So in our work, bias has a positive meaning, and not the meaning that it has in much of work in social psychology, where every deviation from a so-called normative model is called a bias, and people are blamed for it. Intuition is probably more intelligent than this kind of argument by some social psychologists.

MICHAEL: Gerd, you brought up something here a second ago which I thought could lead to another example. I have two quick questions, and then we’ll wrap up. But the idea of recognition, and I think in your work – I can’t remember if it’s an experiment that you did or something that you covered, but the idea of people selecting stocks for investing purposes and the recognition. Why don’t you go into that example? I think that was really interesting.

GERD: That’s actually a study I did with some of my colleagues a long time ago. You’ll find it in the Simple Heuristics Make Us Smart book. We wanted to test how good a very simple heuristic is that can only be used by semi-ignorant people when picking stocks. In order to use this recognition heuristic, you need semi-ignorant people, so we went in downtown Chicago, asking pedestrians which of a long list of stocks they recognized by name, nothing more. Then we did the same thing in downtown Munich.

Then we had pedestrians and we had business students, and then we built portfolios of, say, the 10 most recognized stocks, and as a control portfolio, the least recognized. Then we waited half a year; that was the test period. Then we looked at how much money they made compared to randomly picked stocks, to a certain number of well-known blue chip firms, to some experts, and to all kinds of criteria.

The study showed that the recognition heuristic portfolios, based on the semi-ignorance of pedestrians, made the most money. We replicated the study a few times in contests run by stock journals and others, and found similar results. Now, that will not always replicate, but I would venture that this simple heuristic does at least as well as well-known professional stock pickers, the Dow Jones, or other indexes.

That’s an illustration that you can, in a highly complex and uncertain world, you can actually do quite well by simple heuristics – in this case, interestingly, heuristics that need semi-ignorant people. Not totally ignorant, because they need to recognize some half of the – so it should not be read that the less you know, the better. No. A “less is more” effect is defined in a different way. There’s usually some time where more knowledge helps, but then there will be a tipping point, where more knowledge will lead you astray.

MICHAEL: I think there’s something – it’s almost a pejorative in some ways, from my perspective, when I hear the word “simple” associated with your work, because for you to come to the position and draw some of the inferences and conclusions in your work with your associates, you’ve had to explore all of the other theories and whatnot that’s come before, and then come out on the other side with a viable perspective that makes sense.

I want to lead, as a last example, a last question: a few years ago, I had the chance to spend the afternoon with Harry Markowitz, who is obviously, for most people in the finance world and in their college textbooks, the name is quite familiar. But why don’t you go ahead just briefly and maybe describe for the audience Markowitz’s findings, and then maybe why don’t you add some critique from your perspective on perhaps ways that he might’ve got it wrong?

GERD: First, Markowitz didn’t get it wrong. He developed an optimization model called the mean-variance portfolio that works if the assumptions of his model are in place. But the claim that the Markowitz model would work in the real world of finance is a different one. If someone makes this claim, then it’s likely that the person gets it wrong.

Interestingly, when Harry Markowitz made his own investments for his retirement, did he use his Nobel Prize-winning optimization method, as we might think? No, he did not. He relied on a simple heuristic: divide your money equally. So if you have two options, 50/50. If you have three, a third, a third, a third, and so on: 1/N. There is not much computation involved.

The question now is, how good is 1/N – a simple heuristic, no computation, no free parameters – relative to the Markowitz optimization model? There are a number of studies. What most of the studies show is that in the real world of investment, 1/N typically leads to better returns than Markowitz optimization. Typically. And the real question is, can we identify the situations where the simple heuristic does better, and the situations where the Markowitz model would do better?

To the best of our knowledge today, these are the situations: first, the higher the uncertainty, the better for the heuristic. Second, the larger the number N of assets you have, the better for the heuristic. With a small number, the Markowitz model profits, because it doesn’t have to estimate so many parameters. And finally, third, the larger the sample size, the better for the Markowitz model.

So then you can ask, given the nature of the stock market, and if N = 50 assets, for instance, how many years of stock data would you need so that you could argue the Markowitz model finally does better? There are some simulations out there that have tried to answer this question, and the answer is roughly 500 years. So that means in the year 2500, we can start to distrust our intuitions, like 1/N, and do the computations, provided the same stocks are still around in the stock market in the first place.
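The 1/N heuristic itself takes only a couple of lines – which is the point: there are no parameters to estimate, so there is nothing to misestimate. This is a sketch for illustration; the asset names and returns in the example are made up.

```python
def one_over_n_weights(assets):
    """1/N heuristic: split capital equally across N assets.
    No computation, no free parameters to estimate from noisy data."""
    n = len(assets)
    return {asset: 1.0 / n for asset in assets}

def portfolio_return(weights, period_returns):
    """Weighted sum of per-asset returns for a single period."""
    return sum(weights[a] * r for a, r in period_returns.items())
```

For example, with four hypothetical assets each weighted 0.25, a period in which they return +4%, 0%, -2%, and +2% gives the portfolio +1%. A Markowitz mean-variance optimizer would instead have to estimate a full covariance matrix from historical data before producing any weights at all.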

MICHAEL: Gerd, once you got into this subject area, looking at risk, looking at uncertainty, doing the experiments, doing the research, this has just become a lifelong project for you. I mean, you are 100% passionate. You just love this, don’t you? I love it; I think it’s awesome. But I mean, you’ve got to just love it.

GERD: I do. And also, I learn so much about the world. For instance, this basic research about how to deal with uncertainty is relevant for so many different fields: it not only includes finance, but it includes management. It also includes the law. I have taught about 50 or so American federal judges in decision-making. It includes healthcare. I have personally taught about 1,000 German doctors to understand risks. So I probably learn as much about how the world functions as these experts learn from my own work.

And there are situations where you need to teach people how to understand known risks, so risk communication, which doctors still don’t learn in their medical education. And then there are other situations where you need to teach them how to use heuristics, and also the difference between risk and uncertainty. And at the end, also teach them there is not just one method out there, but we have to deal with the world; we have to have a toolbox, which we call the adaptive toolbox, in order to make better decisions.

It’s true; it has always been much fun in my life, and I continue to learn new things about different areas in my environment, including the psychological factors of decision-making, such as defensive decision-making. If you go to your doctor and you believe that your doctor gives you the best advice, you may be the lucky outlier.

But most doctors in the U.S. practice defensive decision-making; that is, they suggest to you something that’s the second or third best for you, but which protects them from being sued by you. So that’s called defensive decision-making. You treat your clients differently from, say, your relatives. In one study, 93% of all American doctors said, “Yes, I do defensive decision-making. That is, I do not advise the best thing for my client.”

MICHAEL: That’s frightening, isn’t it?

GERD: That’s frightening, but it’s important for everyone to know. And you cannot blame your doctor for doing that, because you are the one who is suing. But you have to understand the system. And there are some simple heuristics, then, that may help you.

For instance, when my mother went blind in one eye, there was an experimental therapy, so I called up the person who had done most of these experimental therapies, explained the case, and asked him, “What do you think? What would you recommend for my mother?” He said, “Just try the therapy.” Then I realized I had asked the wrong question, because he will be in a defensive position. I could sue him.

So I asked him again, I said, “Look, I have only one mother. If it would be yours, what would you do?” He said, “I wouldn’t do anything.” His mother wouldn’t sue him, so he’s cautious. This kind of heuristic, don’t ask your doctor what you should do, but ask him or her…

MICHAEL: There’s a classic chapter in Stephen Jay Gould’s book, Full House, where he talks about surviving cancer, and how he put aside doctor advice and looked at the bell curve, so to speak, for his own situation. I think that’s what you’re saying here: take the power into your own hands and don’t just trust.

GERD: Yeah, and also realize that the doctor is in a defensive position, so ask the doctor not what he would recommend, but what he would himself do, or what he would recommend to his own mother, in this case.

So there are a number of psychological factors that are also very important in order to understand decision-making and risk. There will be a book coming out in April next year, April 2014, called Risk Savvy, where much of what I’m saying here is contained and elaborated. So if you’ve nothing better to do, then just get a copy and have a nice evening.

MICHAEL: Listen, I know people might be saying “Mike, you’ve got a professor of risk and uncertainty on the show,” but I’m going to go find it: I hear there is an interesting video of you on YouTube somewhere, and I believe it’s a commercial, and I believe music is involved. So I’m going to go find it. This is true, right?

GERD: Yes, this is from an earlier career. You asked about my life; my life has not been just about studying risk and uncertainty. I had another career. I was a musician in the entertainment business, and the video you are referring to was the first TV spot for the VW Rabbit, as it was called at the time. And it’s actually an American spot. I had a band at the time, and we won the contest for the TV spot, and I’m at the steering wheel with the banjo, in case you don’t recognize me. That’s a long time ago.

MICHAEL: Well, I’m going to go find it. Hey Gerd, the best place for people to reach out and check into your work, if they want to reach out to you – I believe it’s the Max Planck Institute at Berlin. Is that the best place for people to go?

GERD: On the internet, you mean? Or the website, yes?

MICHAEL: Yeah.

GERD: But also it depends on what you are doing. The Risk Savvy book is my third trade book. There’s another one called Gut Feelings, and before that, one called Calculated Risks. But there are also many academic books, which you can easily find by just typing in my name, “Gigerenzer,” and then you will find lots of things. Lots of interesting things to read and think about and to challenge you.

What I hope, in the end, is to make the theme of decision-making and risk something that is less abstract and less remote from what real experts and people in daily life face than classic decision-making theory, which is about calculating probabilities and utilities. In most cases, we cannot calculate the probabilities, nor do we know the utilities, so we need something else. I hope there is a viable alternative that brings the academic science into more contact with the professions that really have to deal with uncertainty.

MICHAEL: I appreciate your time today, and hopefully when that new book comes out in the spring, I can have you on and we can discuss it.

GERD: Okay, we’ll do this.

MICHAEL: Thank you very much. Have a good day today, Gerd.

GERD: Yeah, it was a great pleasure to talk to you, Mike. Bye bye.

MICHAEL: Thank you. Take care.
[/toggle]

Rate and Review Trend Following Radio on iTunes

Join over 12,000 others on the Trend Following mailing list.

Have a question or comment about this episode? Post it below.

Ep. 192: Dan Collins Interview with Michael Covel on Trend Following Radio

My guest today is Dan Collins, a 25-year veteran of the Futures industry, founder of the Dan Collins Report, and most recently was named Editor-in-Chief of Futures Magazine.

The topic is trading.

In this episode of Trend Following Radio we discuss:

  • Why the mainstream media seems to not even attempt to hide the bias and the propaganda against the styles and types of trading in the worlds that Covel and Collins occupy (i.e. trend following, systematic, managed futures, etc.)
  • A particular Bloomberg article which attacked managed futures and alternative strategies
  • Why articles like these are an attack on the average investor

Listen to this episode:

Jump in!

Ep. 190: Jason Gerlach Interview with Michael Covel on Trend Following Radio

Jason Gerlach

Please enjoy my monologue Michael Covel on Trend Following Radio. This episode may also include great outside guests from my archive.

Listen to this episode:

Want to learn more Trend Following? Watch my video here.