The Tim Ferriss Show Transcripts: Michael Mauboussin — How Great Investors Make Decisions, Harnessing The Wisdom (vs. Madness) of Crowds, Lessons from Race Horses, and More (#659)

Please enjoy this transcript of my interview with Michael Mauboussin (@mjmauboussin), Head of Consilient Research on Counterpoint Global at Morgan Stanley Investment Management.

Prior to joining Counterpoint Global, Michael was Director of Research at BlueMountain Capital, Head of Global Financial Strategies at Credit Suisse, and Chief Investment Strategist at Legg Mason Capital Management. Michael originally joined Credit Suisse in 1992 as a packaged food industry analyst and was named Chief U.S. Investment Strategist in 1999.

Michael is the author of The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing, Think Twice: Harnessing the Power of Counterintuition, and More Than You Know: Finding Financial Wisdom in Unconventional Places. More Than You Know was named one of “The 100 Best Business Books of All Time” by 800-CEO-READ, one of the best business books by BusinessWeek (2006), and best economics book by Strategy+Business (2006). Michael is also co-author, with Alfred Rappaport, of Expectations Investing: Reading Stock Prices for Better Returns.

Michael has been an adjunct professor of finance at Columbia Business School since 1993 and is on the faculty of the Heilbrunn Center for Graham and Dodd Investing. He received the Dean’s Award for Teaching Excellence in 2009 and 2016 and the Graham & Dodd, Murray, Greenwald Prize for Value Investing in 2021.

Michael earned an A.B. from Georgetown University. He is chairman emeritus of the board of trustees of the Santa Fe Institute, a leading center for multidisciplinary research in complex systems theory.

Transcripts may contain a few typos. With many episodes lasting 2+ hours, it can be difficult to catch minor errors. Enjoy!

Listen to the episode on Apple Podcasts, Spotify, Overcast, Podcast Addict, Pocket Casts, Castbox, Google Podcasts, Stitcher, Amazon Music, or on your favorite podcast platform.

Tim Ferriss owns the copyright in and to all content in and transcripts of The Tim Ferriss Show podcast, with all rights reserved, as well as his right of publicity.

WHAT YOU’RE WELCOME TO DO: You are welcome to share the below transcript (up to 500 words but not more) in media articles (e.g., The New York Times, LA Times, The Guardian), on your personal website, in a non-commercial article or blog post (e.g., Medium), and/or on a personal social media account for non-commercial purposes, provided that you include attribution to “The Tim Ferriss Show” and link back to the URL. For the sake of clarity, media outlets with advertising models are permitted to use excerpts from the transcript per the above.

WHAT IS NOT ALLOWED: No one is authorized to copy any portion of the podcast content or use Tim Ferriss’ name, image or likeness for any commercial purpose or use, including without limitation inclusion in any books, e-books, book summaries or synopses, or on a commercial website or social media site (e.g., Facebook, Twitter, Instagram, etc.) that offers or promotes your or another’s products or services. For the sake of clarity, media outlets are permitted to use photos of Tim Ferriss from the media room on or (obviously) license photos of Tim Ferriss from Getty Images, etc.

Tim Ferriss: Hello, boys and girls, ladies and germs, this is Tim Ferriss. Welcome to another episode of The Tim Ferriss Show. I’m going to keep my preamble short because I have many pages of notes in front of me, and we’re going to run out of time before I run out of questions.

My guest today is Michael Mauboussin, spelled M-A-U-B-O-U-S-S-I-N. You can find him on Twitter, @MJMauboussin. He is the Head of Consilient Research on Counterpoint Global at Morgan Stanley Investment Management. Prior to joining Counterpoint Global, Michael was Director of Research at BlueMountain Capital, Head of Global Financial Strategies at Credit Suisse, and Chief Investment Strategist at Legg Mason Capital Management. Michael originally joined Credit Suisse in 1992 as a packaged food industry analyst (some of you long-term listeners will perhaps recognize some of that from my conversation with Bill Gurley) and was named Chief US Investment Strategist in 1999.

Michael is the author of many books, including The Success Equation, subtitled Untangling Skill and Luck in Business, Sports and Investing, Think Twice: Harnessing the Power of Counterintuition, which I’ve mentioned several times on this podcast, and More Than You Know: Finding Financial Wisdom in Unconventional Places. More Than You Know was named one of the 100 Best Business Books of All Time by 800-CEO-READ, one of the Best Business Books by BusinessWeek, and Best Economics Book by Strategy+Business. That’s in 2006. Michael is also co-author with Alfred Rappaport of Expectations Investing: Reading Stock Prices for Better Returns.

Michael has been an adjunct professor of finance at Columbia Business School since 1993 and is on the faculty of the Heilbrunn Center for Graham and Dodd Investing. He received the Dean’s Award for Teaching Excellence in 2009 and 2016, and the Graham and Dodd Murray Greenwald Prize for Value Investing in 2021.

He earned an AB from Georgetown University and is Chairman Emeritus of the Board of Trustees of The Santa Fe Institute, a leading center for multidisciplinary research in complex systems theory. You can find all things Michael at

Michael, thank you for making the time. It’s nice to see you again.

Michael Mauboussin: Tim, it’s awesome to see you.

Tim Ferriss: I thought we would start with some Latin. That’s my favorite place to start, especially as someone who knows very little about Latin. And I wanted to ask you about your course and its two unofficial mottos, so maybe you could just mention those mottos and explain why they are the unofficial mottos and what they mean.

Michael Mauboussin: First of all, there are no official mottos; I made these up. But they’re an attempt to set a tone, not just with the students in my course, but really broadly speaking in life. So, the first one, the Latin, is “Nullius In Verba,” which is the motto of The Royal Society. The Royal Society’s the oldest, I think, scientific society in the world. And it’s been around for 350 years, more than 350 years. And basically translated, it means “Take nobody’s word for it,” kind of “See for yourself.” And I really like this idea, because a lot of the information that people use, or things that they’re taught, they take from authority and don’t go figure out for themselves. And so this idea of constantly having an open mind and seeing for yourself, not working just on authority, and questioning everything, that’s the tone I want to set in the course.

The second one is a quote from Carl Gauss, and I’m not going to even try the Latin, but basically the idea is: “Notions, not notations.” And the idea is don’t focus on only equations, computations — obviously super important, but really the key is to grasp the intuitions, the underlying ideas, and then allow the computation to serve that rather than the other way around.

Now, sometimes you can solve a problem computationally, and then you have to go back and figure out what the intuition is that gets you there, but that’s the main thing. And I think, for business school students for example, it’s a potential problem, because from time to time they’ll run equations without thinking about what they’re doing, and they’ll forget about the concepts behind them. So, Charlie Munger, the Vice Chairman of Berkshire Hathaway, has got this line where he says, “People calculate too much and think too little.” And I think that’s what we’re trying to fight against with that idea.

Tim Ferriss: So Michael, this strikes me as the perfect segue to ask a question about some of your earlier chapters. And specifically I would love to know how your lack of business education was an asset on Wall Street when you first started out.

Michael Mauboussin: Yeah, it’s a good question. By the way, I did take one business class. My father strongly encouraged me to take accounting for basically non-business majors when I was a senior. And out of the complete generosity of the professor’s heart I got a C+, a gentleman’s C+. I knew nothing about what was going on at all.

I will say that I was in a wonderful training program, and there was a lot of remedial work, so folks like me could learn, get up to speed on some of the basic issues. But Tim, I think the answer is that I went in kind of wondering about and thinking about and being open to understanding things from first principles. And Wall Street, even to this day, is replete with lots of rules of thumb and sort of old wives’ tales and shorthands for how to do things. And some of these things, when I would sit there and listen to them and try to cobble it all together, just didn’t make sense. And so for me it was this idea of the beginner’s mind and really saying, “How does this stuff really work?”

I had a clear professional epiphany; it was a two-by-four across the forehead. A guy in my training program gave me a copy of Al Rappaport’s book called Creating Shareholder Value, which was published in 1986. So this is probably a year and a half or two years after that book was out. And he gave it to me for a completely different reason; it had nothing to do with the basic concept. But I read that book and the light bulbs all went off for me personally. And I very much connected with that whole way of thinking about things.

And I guess I’ll summarize: there were sort of three things that he talked about that have really remained the cornerstone of how I think about everything since. One is, it’s not about accounting numbers but about cash. And I don’t want to get too much down the financing road, but basically accounting doesn’t always represent the underlying economics of businesses that effectively. And he was one of the people that really emphasized understanding value and how value is created.

The second thing, which I think is really interesting, is that valuation and strategy sort of go together. So when you’re thinking, as a business person, about trying to build a business, you have to make a bunch of strategic choices, but a good strategy is one that creates value. And to do a valuation of a business, you have to understand the competitive position of the company and the industry and so forth. So those two are really joined at the hip, and even going back to business schools, we tend to teach these things separately, but they really do go together.

And then the third and final point, which ended up being the basis of the collaboration on the book we did together, is that stock prices reflect a set of expectations. And it’s very obvious when you say that, for any asset price, right? What has to happen in the world for that thing to make sense? His target audience was corporate executives, but clearly that was relevant for investors as well. So from there, Tim, I would just say that I was sort of open to that, and I think that to me was a good set of ideas to work with. That’s why I think being unencumbered with any knowledge left me open to thinking about the world that way.

Tim Ferriss: I came across something in doing research for this conversation. This was from Farnam Street, so this was a transcript of an interview that you did over yonder. And there was a line in passing that I wanted to revisit. And this is from your earlier chapters, yet again.

“In the early to mid-1980s, Drexel…” that’s Drexel Burnham Lambert, if I’m pronouncing that correctly, “…Drexel had a great food industry analyst – to this day, I believe he’s the best analyst I’ve ever seen. So I naturally followed him closely.” And then it goes on: “Shortly after I left Drexel,” et cetera, et cetera, et cetera. But I wanted to double-click on that comment. So if it’s still true, or maybe even if it was just true then, when you said it, “I believe he’s the best analyst I’ve ever seen,” what made this person such a good analyst?

Michael Mauboussin: Tim, that’s a great question, and I’m going to tie a bunch of ideas here together. The first thing I’ll say is my first job was with Drexel Burnham Lambert in the 1980s. That’s where that training program was. And I just think that there’s a big professional imprint on the first job. And so Drexel, at the time — this is when Michael Milken, high-yield bonds — the firm was really hot. It ended up getting in trouble, and then there was the crash of 1987. So things unwound, and the firm ended up going bankrupt. But at the time, it was quite hot. And they had an equity research department.

I should back up and say one thing. In our training program, we first did some classroom work, and then we rotated through different departments in the firm. So we really got exposed to all different aspects. And indeed, there was this one analyst that followed the packaged food companies — his name was Alan Greditor — who was extraordinary. And what made him great was he had a very different view of things than traditional. So he was very focused on, for example, financial cash flows versus simple accounting numbers, some of the things I was just talking about. He was very focused on things like share buybacks before those were a big deal, understanding asset values. And then, he had the ear of many management teams. So he was able to talk to those management teams and understand how they were thinking about things. And so he was just the complete package.

And interestingly, here’s the connection to sort of my own career path. One is, I mentioned my training colleague gave me a copy of Creating Shareholder Value — the reason was there was a case study in the back of the book about Quaker Oats’ acquisition of Stokely-Van Camp. Right? Seems completely remote. You’ve never heard of Stokely-Van Camp, but you probably have heard of their most famous product, which was Gatorade.

Tim Ferriss: Oh, yeah.

Michael Mauboussin: It turns out there’s this little jewel, and it ended up being a really great acquisition, in part because they found this little jewel and built it into this incredible brand. That’s the reason he drew my attention to this book in the first place; it was actually about this Quaker Oats, Stokely-Van Camp thing. So I ended up learning a lot about that industry, again, by following this particular individual. And then, when it became time for me to become an analyst, one of the areas that I was drawn to, logically, was the food industry. So literally the reason I was a food analyst is because this guy was so good and because of what I learned from him. And by the way, I was a nobody, the little training peon, right, in a training program. So he had no idea who I was. But it just gives you a sense of how these little things happen that almost set the trajectory. So it was both his analytical prowess, and it so happened to be applied in this particular industry, which is why I became an analyst in that particular industry.

Tim Ferriss: Now, harkening back to my conversation with Bill Gurley not long ago, friend of yours, and he invoked your name as someone who is able to connect ideas, principles, best practices from disparate areas. And I thought this might be an appropriate time to grab a word from your bio. In fact, it’s in the first line, “head of consilient research.” And maybe just define some terms. Consilience. What is consilience?

Michael Mauboussin: It’s probably not a good thing when you have to explain your job title to everybody. And that’s really the case. So, I was very taken in the late 1990s by a book called Consilience by E.O. Wilson. We’ll talk about that book in a moment. But the word “consilience” itself is fairly — well, not that old, but the 1850s is probably when that word was coined. And it’s really about the unification of knowledge, right, this idea of bringing ideas together. And so E.O. Wilson wrote this book. He was a Harvard biologist. He’s most renowned for his work on ants, so he’s sort of the ant guy. He’s also famous for his early work on evolutionary biology. And there was some controversy: he wrote a book about sociobiology in the 1970s that was very controversial, at the time at least.

But in this book, Consilience, Wilson argued, “Hey, we’ve made enormous strides as a world in using reductionism, scientific reductionism.” Right? So we’re breaking things into their components. We’re understanding how those components work. And if you look around you, many of the marvels you see are the result of that extraordinary capability. But he said as we look forward, many of the most vexing and difficult challenges and problems in the world are actually at the intersections of disciplines. And we’re going to need to bring these different ideas together from different disciplines to really tackle these big problems. And so of course, that idea very much resonated with me.

So I’ll just mention, so, consilience. So “consilient” would be the adjective for that. And then, this is probably about 2000 or something like that. I published from time to time, but I wasn’t publishing on a set schedule. And I’m one of those guys, I’m reading an article or watching a television show, and I’m talking back to it like, “Oh, no, that guy should be doing it this way,” or, “Here’s an insight that they should be having.” So I was like, “You know what? Instead of me muttering to myself, maybe I should start to write about this stuff.” And so I launched a newsletter called the Consilient Observer, and the idea was, “Let’s look at different topics and see if we can sort of shine a different light on them, look at them through a different type of a lens.”

And so that, the Consilient Observer series, ended up, as you mentioned a moment ago, in More Than You Know, the book. That was the greatest hits of the Consilient Observer. And those were all sort of 1,500-word essays. So they were short, kind of pithy. It was hard to get really deep into ideas, but they were all over the place. And as a consequence, I think they were somewhat fun for the audience to read. So that’s where that word “consilience” comes from. And that’s, again, as I think about the world, this idea of being able to draw from various disciplines to thoughtfully address the problem or problems that you’re thinking about.

Tim Ferriss: Wheels on luggage. Bam. How’d it take so long for somebody to figure that out? I’m giving, perhaps, a silly example, although there’s a lot of utility there. Question — 

Michael Mauboussin: Can I tell you, Tim, that I’ve said to my wife, I actually think that wheels on luggage is the indication of the decline of Western civilization.

Tim Ferriss: Okay. This is — 

Michael Mauboussin: And the reason is because it used to be you’d have to lift your luggage, carry it around, a little bit of effort. And now it’s like everybody wheels everything around. So anyway — 

Tim Ferriss: Now we’re two steps away from WALL-E. If you remember those people with the Super Big Gulps on the reclining chairs.

Michael Mauboussin: Exactly. And you get some of these things that are tiny little bags with wheels on them. Like, “All right, come on. You’ve got to have that one.” Anyway, sorry about that.

Tim Ferriss: No problem. So E.O. Wilson, also, for people who don’t recognize the name, chances are you have latched on, at some point, to a quote from E.O. Wilson without even realizing, perhaps, the attribution and this person’s background. He’s one of the most quotable writers, in my mind, of the last hundred years. It’s just remarkable how punchy and memorable so much of E.O. Wilson’s writing is. Do you have any examples from natural systems or biology that you have translated to business or evaluating companies or understanding markets in some fashion?

Michael Mauboussin: One is, and this is really, I think, probably one of the common threads through the research that’s being done at the Santa Fe Institute, the study of complex adaptive systems. So, “complex” means lots of agents. Those could be neurons in your brain, ants in an ant colony, people in a city, whatever it is. “Adaptive” means that those agents operate with decision rules. They think about how the world works, and so they go out there and try to do their thing. And as the environment changes, they change their decision rules. So that’s the adaptive part: their decision rules are attempting to be appropriate for the environment. And then, “system” means the whole is greater than the sum of the parts. It’s very difficult to understand how a system works, an emergent system, by looking at the underlying components.

Two or three obvious examples. One would be something like consciousness. Right? So consciousness is very likely an emergent phenomenon. We have these neurons. We have this physical genesis, but the system is more complex than the underlying neurons themselves. Or ants in an ant colony. If you study an ant colony, it’s almost like an organism in and of itself. It has a life cycle. It’s pretty smart about when it forages. They fight each other. Some are more docile. I mean, the whole thing. So understanding markets as complex adaptive systems, to me, has been an extraordinary insight.

So the classic way to get to kind of efficient markets is to say that people are really smart. They’re rational, right, so they understand all the information, they know what to do with it, and they reflect that in prices. Now, nobody really believes that, but that might be a starting point. And then, the second way economists would talk about this is this idea of no arbitrage. Right? So in other words, you don’t need everybody to be super smart. You just need a subset of people to be super smart. And when there are gaps between price and value, these super smart people come in, and they buy what’s inexpensive and sell what’s expensive and close the gaps. And so in their wake, the rest of us can benefit from these efficient prices. The problem is, here again, there are just famous episodes where these arbitrageurs failed to do their jobs.

The third way to think about things as complex adaptive systems, and I think the way that’s easiest to understand this, is using some of the language from Jim Surowiecki’s great book The Wisdom of Crowds, which came out probably 2004, 2005. And The Wisdom of Crowds says crowds are wise when three conditions are in place. A, we have diversity of the underlying agents, or heterogeneity. Right? So this is one of the reasons that diversity is so important, is because we need different points of view and different decision rules represented. Second is an appropriate aggregation mechanism. So you can have all the information in the world in the heads of people sitting around your boardroom, but if you’re not extracting it and aggregating it, it’s of no value. Right? And then, the third is incentives, which are rewards for being right and penalties for being wrong. In markets, that’s money. But it doesn’t have to be money. It can be reputation. It could be fitness for a species or other measures of incentives that allow you to propagate, basically. Right?

So to me, thinking about markets as complex systems is very powerful. And in The Wisdom of Crowds, the question is, “Why are crowds smart?” And the answer is: when those conditions are in place. And then, “Why do they go haywire periodically?” which we know that they do. And the answer is that one or more of those conditions are violated. And by far the most likely to be violated is diversity. So rather than you and me, Tim, thinking independently, we sort of correlate our views, and we become uniformly positive or uniformly negative. And as a consequence, that reflects in asset prices. So that would be one example. Another example — 

Tim Ferriss: May I pause just for one second to say, I think the second comment after diversity that you made earlier is really important, which is representing different decision rules. Because you could have, for instance, people who are every possible gender, every possible color, but if they’re all econ majors from Yale who took exactly the same classes, they may actually represent the same decision rules or similar decision rules.

Or how do you think about that? Maybe I’m misreading.

Michael Mauboussin: No. No. You are absolutely right. And I think that the way I would think about that, and the way I read that literature, is there are really three types of diversity that we care about. The first is social category diversity, which is what you just sort of described, right, that people look different, but they have the same sort of way of thinking about the world. When most organizations talk about diversity, they’re almost always talking about social category diversity. And one of the benefits of that is that we can count. Right? We can see how many women there are versus men and so forth.

The second kind of diversity is cognitive diversity. That’s what you just described. And that’s really perspectives, points of view, mental models, training, personalities, and so forth. Nearly all the literature I’ve seen suggests that it is cognitive diversity that is the key to solving problems. To your point, it’s possible to have people that look the same and think very differently, or people that look very different but think the same. That’s not likely, right? There’s some correlation between social category diversity and cognitive diversity, but cognitive diversity is sort of what we’re after.

And then, the third thing is values diversity. And you could rephrase this as almost a sense of purpose. And here we want to be uniform. Right? We want that kind of diversity to be low. So we’d like to really have people that have a common mission. Certainly, in any sort of organization that would be the case. And I’ll just mention that my favorite researcher on this topic is Scott E. Page at the University of Michigan. And Scott’s written a number of great books. The Difference is his big book on this. And he wrote a smaller book called The Diversity Bonus, which is a shorter treatment of the same topic. But the reason I like Scott’s work so much is that it’s not about hand waving or feeling good. It’s actually mathematical. And he can demonstrate mathematically why, precisely, this idea of cognitive diversity adds value.

And so it’s a combination: it is the cognitive component of diversity that seems to matter. So you want smart people, and you want diverse people, and both of them are important contributors to solving problems for corporate success. I’m glad you picked up on that because that’s, actually, a very interesting and important point. This is probably now 20 years ago, but when I was at Credit Suisse — well, it was, at the time, CS First Boston — the guy that ran the business asked me to co-chair the diversity advisory board. So we were setting diversity policy for 20,000 employees. And you sort of say, “Why would you want a straight white guy to do that?” And it was because that CEO thought that this cognitive diversity argument should be heard and should be part of every dialogue as we think about who we hire, who we promote, how we assess people, and so forth.

Tim Ferriss: Mm-hmm. I want to double-click. For whatever reason, this double-clicking metaphor has been on my mind, so I apologize if I use it another 47 times. But I would like to double-click on two components of what we’ve been discussing. So the first that I suppose we could touch on is the wisdom of crowds, or the stupidity of crowds, depending on how many checkboxes are checked. And one example, a real-world example, sort of a classroom demo, popped out at me when I was watching your 20th-year tribute video, which included many of your students. And there was a jelly bean-guessing exercise that you had them perform. I don’t know if this is still the case, but could you just describe that? And then I have a follow-up question related to my own audience.

Michael Mauboussin: Sure. I believe The Wisdom of Crowds book opens with a story of Francis Galton, who was a Victorian polymath, cousin of Charles Darwin, by the way, who invented a number of important concepts in statistics. But Galton, toward the end of his life, went to a fair. There was a contest to guess the weight of an ox, and you had to pay a little fee. And about 800 people participated. A few of them had illegible handwriting, so I think he had 787 contestants. And Galton was fully expecting to show how foolish this crowd was. And by the way, he’s got a really interesting writeup — it’s in Nature magazine. He sort of says, “Some of these people are butchers” or whatever. So they would know. They’d have a sense of this. But he goes, “Some of these people were sort of operating on their own fancies.” Kind of like, “I’m in a good mood. Here’s what I’m thinking about.”

And when he tallied up the results, it came out to be very different than what he anticipated. The average, or the median, of all the guesses was within one percent of the actual weight of the ox. Right? So this is a sort of extraordinary illustration of that point. And so that was picked up. There’s a really interesting article by Jack Treynor, T-R-E-Y-N-O-R. And Jack was a very famous guy in the world of finance. He died a few years ago. And the article, which is from the late 1980s, I think was called [“Market Efficiency and the Bean Jar Experiment”]. It’s a short little article, but he described this experiment he did with jelly beans. I think they were actually not jelly beans; I think they were actual beans. And he did that, and he came up with a very similar type of result. So he was trying to explain market efficiency through this wisdom of crowds; that name wasn’t being used yet, but through this idea.

By the way, as a funny side story, many years later I was presenting at a conference. And lo and behold, in the front row was Jack Treynor. I had to give the nod to the man, right, the OG on this thing. So he was totally cool about it. And I had the slide; I actually had a copy of my own slides. And I’m like, “Mr. Treynor, would you sign this?” So I have a copy of a picture of a jelly bean jar autographed by Jack Treynor.

By the way, one other weird thing is that I also talked about that day, there’s a famous social psychology experiment by Solomon Asch. Do you know the Asch experiment with the three lines?

Tim Ferriss: No.

Michael Mauboussin: Do you know this one?

Tim Ferriss: You know what — 

Michael Mauboussin: You probably do. It’s pretty famous.

Tim Ferriss: I might, if you describe it.

Michael Mauboussin: But very quickly, Solomon Asch was a social psychologist, and he wanted to understand the idea of conformity. So the setup is you have — and they did it in different ways. Let’s say you have eight people around the table. Seven are in on the experiment with Asch, and the eighth person is the subject. And the task is very trivial. You have three lines, A, B, and C, and then you have X. And the question is, “Which of the A, B, and C lines is the same length as X?” Right? So it’s actually a very easy visual task. They start with controls. People get like 95 percent plus. Right? So it’s not a hard task.

And then the experiment started for real, which was that Asch signaled his confederates to give the wrong answer. And now, instead of it being C, which is the correct answer, everyone says A, A, A. They put the subject in the last seat. And so by the time it comes around to you, the question is how do you answer, right, because you’re thinking, “Am I insane? I see this as being answer C, and everyone’s answering A.” And it turns out the reason we still talk about this today is most of the people actually, at some point, go with the majority. And so they just basically override their own senses. And Asch talked about why this was the case, but this is the idea of social conformity and how important it is in going with a crowd and so forth. And this goes back to our stuff on diversity.

So I’m describing this experiment, and Treynor raises his hand again. This is a twofer, right? And he goes, “I was at Swarthmore College in” whatever. It’s the 1940s. “And I was a subject in this thing.” And he said, “I just want to tell you that I remained independent. I didn’t listen to anybody else.” So I was like, “Oh, man, that’s really cool.” So Jack Treynor was, obviously, he was a super important guy in the world of finance, but there are two little touchpoints that tie back to our two stories. One is sort of that he made that connection to the jelly bean jars in the world of finance and why it was important to understand how markets work.

I think Jim, obviously, came along, I mean Jim Surowiecki, the title, the idea of The Wisdom of Crowds had been around, obviously, to some degree. But Jim, I think, did an incredible job of putting a name to that and really being careful about saying, “Under what,” as you said, like, “what conditions are crowds wise, and under what conditions do they go mad?” And we know that both happen from time to time. So anyway, that’s cool.

Tim Ferriss: So, my understanding — tell me if I’m flubbing on this — is that you replicated this experiment in class, where there’s a high degree of variance. But among your business school students, ultimately, at the end, they were within, say, one to two percent of the actual count. Am I getting that correct? Or — 

Michael Mauboussin: Yeah, no, we do it every year.

Tim Ferriss: Okay. Okay.

Michael Mauboussin: It’s like my little parlor trick. And Tim, it vacillates. My course started in January, and we just did it in January. It’s usually somewhere between two and 10 percent. I mean, I’ve done times where it’s been almost perfect, but it’s usually two to 10 percent. So let’s just pick 10 percent as sort of a not-uncommon number. What’s interesting is if you pick any person within the group at random, right, so you just close your eyes and pull out somebody’s guess, they’re usually off by about 50 percent, five zero. So collectively, people are way better than they are individually, which is super interesting. And it’s often the case, by the way, that the collective guess is better than any individual within the collective. It’s not mathematically necessary, but that’s often the case as well. So the group is smarter than any person in the group.

Tim Ferriss: My follow-up to that is, what is the minimum effective dose of cognitive diversity? And if we fine slice it, are there particular types of cognitive diversity that are more valuable for this type of wisdom of crowds than others? Because in the state fair example, you have butchers. You probably have painters. You have a very broad swath of society with very, very obviously different people in different professions, perhaps different levels of education. In your business school class, I would think there is a higher degree of uniformity of thinking. Maybe that’s very unfair. But similar age ranges, probably similar priorities. They’ve chosen to apply to your class or to join your class. So I’m wondering, I suppose, just to restate the question, how much cognitive diversity you need? And are there different types of cognitive diversity that are more valuable than others for harnessing this type of wisdom of crowds?

Michael Mauboussin: It’s an awesome question. So I mentioned Scott Page a moment ago, and sort of one of the ideas that he’s promoted is an equation called the diversity prediction theorem. And I’m not going to go through the math, but basically, it says collective error, so how smart the group is, is a function of the smarts of the group, so how accurate each individual is, minus cognitive diversity. Right? What you get as the collective accuracy is a function of both smarts and diversity. So I just want to say one thing, just to be super clear, that there are many tasks where having a smart person is better than having a crowd. Right? So in other words, if you have a leaky toilet or need a really hard math problem solved, by all means, get a plumber or a mathematician. Don’t bring in an astrophysicist and a poet and whatever. Right? So that’s the first thing to say, is that we want to think almost like a taxonomy of types of problems. And there’s some problems where just the right person will answer it much more efficiently than any of these kinds of things.
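[Editor's note: Page's result can be stated precisely. Using squared errors, collective error equals average individual error minus prediction diversity, and the identity holds for any set of guesses. A minimal Python sketch, where the bean count and the guesses are made-up numbers for illustration:]

```python
# Scott Page's diversity prediction theorem (squared-error version):
#   collective error = average individual error - prediction diversity
# The "true" bean count and the guesses below are hypothetical.

truth = 850
guesses = [400, 700, 900, 1200, 1500]

crowd = sum(guesses) / len(guesses)  # the collective guess: 940.0

collective_error = (crowd - truth) ** 2
avg_individual_error = sum((g - truth) ** 2 for g in guesses) / len(guesses)
diversity = sum((g - crowd) ** 2 for g in guesses) / len(guesses)

# The identity holds exactly for any guesses:
assert abs(collective_error - (avg_individual_error - diversity)) < 1e-9

# The crowd misses by 90 beans; the average individual miss is far larger.
print(collective_error, avg_individual_error, diversity)
# -> 8100.0 154500.0 146400.0
```

Note that the crowd does well only because the errors bracket the truth; if every guess were skewed in the same direction, diversity alone would not rescue the collective answer.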

But I think, Tim, it turns out that, and this is why Scott’s thing’s interesting, is it just ends up being a mathematical problem, that whatever models people are using, give them a range. And you might think about it this way. It’s not exactly right, but you might want to think about it this way. Let’s say there is a truth. There is a number of beans in the jar. And I’m the God, G-O-D. I know that number. You might imagine what happens is people, and like you said, even the business school students — and I give them some tips, like, “Here’s some ways to think about this.” You might think they’re going to have almost some sort of sense of the answer, some sort of sense of the truth within an error term. Right? And those errors might go too low or too high. And you could think about it almost like a normal or bell-shaped distribution, right, where the mean is the actual answer. And that’s what Jack Treynor was trying to — how he was explaining it, which is you have this distribution of outcomes.

And by the way, the standard error goes down. I mean, I don’t want to get too fancy, but the standard error goes down as a function of the square root of N, of the number of people who are guessing. So it’s actually less about — again, if they’re all skewed, that’s going to be really bad, and that’s going to be the [inaudible]. It’s like you want them to have different guesses, right, based on their models, their representations, but a bigger problem would be if there are too few people doing it. Right? I have my class of probably, I don’t know, 60 or 70 people doing it. Galton had close to 800. So as you get that sample size larger, the number of people participating greater, it homes in on the answer more accurately, typically.
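[Editor's note: the square-root-of-N point is easy to check by simulation. Assuming guesses are unbiased and independent (a strong assumption; skewed guesses break it, as Michael notes), the error of the crowd average shrinks roughly as one over the square root of N. The truth and spread below are made-up parameters:]

```python
# If individual guesses are unbiased around the truth with spread sigma,
# the standard error of the crowd's average falls as sigma / sqrt(N).
# truth and sigma here are hypothetical.

import random

random.seed(0)
truth, sigma = 800, 400

def crowd_error(n, trials=1000):
    """Average absolute miss of the mean of n independent guesses."""
    total = 0.0
    for _ in range(trials):
        mean = sum(random.gauss(truth, sigma) for _ in range(n)) / n
        total += abs(mean - truth)
    return total / trials

# A lone guesser, a business-school class, and Galton's fair:
for n in (1, 70, 800):
    print(n, round(crowd_error(n), 1))
# Errors shrink by roughly sqrt(70) ~ 8x going from 1 to 70 guessers,
# and by roughly sqrt(800) ~ 28x going from 1 to 800.
```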

Tim Ferriss: So, this is leading to a very self-interested question that I wonder a lot, which is what fun or useful things could I do with my audience? Because in this particular case, the N is quite large, and depending on the means of accessing them, but if we’re talking about social, podcasts, and newsletter, I mean, it’s 10 to 20 million people a month. It’s hard to de-dupe the people who are overlapping or appearing more than once in those categories, let’s just say. But nonetheless, it’s a large N for a lot of purposes. Do you have any thoughts? And I would also say that politically, geographically, from a gender perspective, quite a high level of what I would consider diversity, and I think psychographically, from an education perspective — probably a little less perhaps on the education side. I’d say my listeners tend to be pretty well-educated. But do you have any thoughts on how to perhaps design experiments for doing interesting, fun, or valuable things with my audience? I know that’s a very broad question.

Michael Mauboussin: No, it’s great. It’s great. These kinds of problems might be fun. It depends a little bit on what your goal is. Right? If your goal is simply to demonstrate how some of these principles work, and to use your audience as the way to show that, it would be super fun. And even something like a jelly bean jar thing would be cool.

I’ll mention, we’re going to do this in class. I think the Academy Awards is mid-March, and we also do an experiment with Academy Awards. I give them 12 categories. The front page is the Best Actor, Best Actress, Best Film, things you might have a shot at getting. And then the back is Best Costume Design or something like that, things that are going to be much less front and center. I say to the students, “Don’t tell me who you think should win. Tell me who you think will win by popular vote.” Again, the winner in each category is the modal, so the most popular selection. And once again, the Academy Awards, the group does vastly better than any person in the group. So there’s an example. That would be a fun one. There’s another one I’ve always found interesting. And I think The Financial Times, the newspaper, did a version of this 15 or 20 years ago. Do you know about the two-thirds game? Have you heard of this before?

Tim Ferriss: No. I don’t.

Michael Mauboussin: So here’s the setup. So, one of the things that’s interesting in markets, but it’s interesting in a lot of games, generally speaking, is there are sort of orders of thinking. Right? First order, second order, third order. And this is an attempt to capture how many orders of thinking people actually go through. So here’s the setup. I typically ask for a whole number from 0 to 100. Then you hold that thought, and then you say, “The person who’s going to win this contest is the person whose guess is closest to two thirds of the average of everybody else’s guesses.”

Tim Ferriss: Mm-hmm.

Michael Mauboussin: Right? So you follow that? So everybody writes down their number, and then you, as the MC of this thing, take two-thirds of that value, and then whoever’s closest to that number wins. And you’re probably already thinking about how would you go through this mentally, because you sort of say, “It was 0 to 100, start at 50, two-thirds of 50 is 33, but people are going to go that level,” especially your educated audience. “Well, two-thirds of 33 is roughly 22, but maybe they’ll go that far. Two-thirds of 22…” and so forth, right? So you keep iterating down.

Now, this is a problem that has an equilibrium solution, and the answer is zero, because it iterates just down to zero. And zero, by the way, is often a common answer in my course when I do this. And I say to students like, “You’re a real smarty-pants. You found the equilibrium solution, but you’re not making any money,” right? Because if anybody writes a non-zero number, you’re not going to win, right? So the question is, how do you integrate other people’s decision-making?
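[Editor's note: the iteration Michael walks through, 50, then 33, then 22, and so on, can be written out in a few lines. The level-by-level numbers converge to the zero equilibrium, but a hypothetical crowd that only reasons a level or two deep hands the win to a non-zero guess. All guesses below are made up:]

```python
# Level-k reasoning in the two-thirds game: level 0 guesses 50 (the
# midpoint of 0-100); each deeper level guesses 2/3 of the previous one.

guess = 50.0
for level in range(1, 11):
    guess *= 2 / 3
    print(f"level {level}: {guess:.2f}")  # 33.33, 22.22, 14.81, ... toward 0

# The equilibrium is 0, but 0 rarely wins in practice. Suppose a
# hypothetical crowd mostly reasons only a level or two deep:
guesses = [50, 33, 22, 0]
target = 2 * sum(guesses) / (3 * len(guesses))   # two-thirds of the average
winner = min(guesses, key=lambda g: abs(g - target))
print(target, winner)  # -> 17.5 22: the "smarty-pants" 0 loses
```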

And I’ll tell you one other story about this, which is interesting. So when I was at Credit Suisse, I was part of a research group, and it had a number of very prominent kind of world leaders, a former president of Mexico, one of the prime ministers from the UK, a prominent economist. I mean, this was like a really erudite group of folks. And we were appropriately having a meeting in Switzerland and we had a behavioral economist leading a session on risk. And this guy decides to do the two-thirds game. And I’m thinking, “Now, this is going to be interesting,” right? Because we have literally world leaders who probably negotiated these high-powered things, and they were absolutely horrible at it. It was embarrassingly bad. So I was like, “Oh, my God, I’m glad they’re all retired now.”

Isn’t that interesting? And I don’t know if they didn’t get the setup, but they just hadn’t thought through those levels. And so, as you know, in negotiation but in most things in life, understanding the position of somebody else is really important. The basic concept of empathy is super important, right? And so I thought that was a really interesting little side note. But taking it back to your broader question, that might be a fun exercise just to demonstrate these principles rather than come up with some sort of a dashing solution to a problem.

Tim Ferriss: Well, I like the idea of attempting some type of prediction through experimenting with my audience. And maybe I make it explicit, maybe I don’t, right? Maybe it’s couched in some other way. But I do love that idea. So we can table that for now, but it’s certainly something that’s — 

Michael Mauboussin: Well, no. Hey, Tim, here’s one other thought, and you’re probably familiar with a little bit of this literature. So Phil Tetlock and some members of his team at the University of Pennsylvania have done these forecasting tournaments, and the big thing, of course, is this idea of super forecasters. So the question is, would the Tim Ferriss audience be a super forecaster, right? And so if you gave them geopolitical or economic events, “In six months, will interest rates be X?” “In 12 months, we’ll try to do X, Y, or Z,” that’d be super interesting to see. And now the problem is, again, you’re getting one answer from a large group and so forth, and how easy it is to update, because the super forecasters actually update their probabilities as things change and so forth. But that might be another fun way to — but then again, that’s what markets do to some degree, they’re prediction markets. That’s what they’re trying to do as well.

Tim Ferriss: Mm-hmm. Well, I will allow that to gestate, because I would just love to run more experiments and then share the results of those experiments with the people who participated. I think it would be a lot of fun. So TBD, TBD, but I will certainly come back to that at some point.

Now, in addition to The Wisdom of Crowds, there are a number of books that came up in the process of doing research for this conversation that you’ve mentioned, and I don’t mean to imply that we need to spend a ton of time on all of these, but I would love to at least get your take on two that have popped up, and there may be one or two more, but I’ll mention two. One is Against the Gods: The Remarkable Story of Risk, and this came to mind because you had just mentioned risk in the context of Switzerland. And the other is Complexity by Mitchell Waldrop, if I’m getting that pronunciation right. Why are either or both of these books meaningful, or must-reads, or important in any way?

Michael Mauboussin: So let’s start with Against the Gods. It’s written by Peter Bernstein, who was a brilliant economist and historian, and it is the history of human understanding of risk. So it’s a fascinating thing. Now, I’ll just say that broadly speaking, I think understanding the history of ideas is incredibly valuable in pedagogy, generally speaking, right? So if I’m talking about an idea or I’m using an idea today, I think it’s very helpful to understand where it came from, who were the propagators, what were their blind spots, where did they take a turn, one direction where they could have gone a different direction, and so forth. And so Bernstein just brilliantly lays this out in Against the Gods, and he was a wonderful writer. It’s a very interesting book. By the way, he also wrote a book called Capital Ideas, which basically does the same thing for the history of finance. So Peter Bernstein, that is money. And if anybody’s interested in the idea of how we understand risk, and this goes back to the Bernoullis in the 1700s up to relatively modern times, it’s a fabulous book.

Complexity, I’ll give one other backup, one little step on this, which is, it’s a book I almost never talk about. But one day, when I was a food analyst, I was visiting a money management firm, it was actually the state of Michigan, so the pension fund in the state of Michigan, and I was in the waiting room literally waiting for my meeting. They had a bunch of books, and I just strolled over there and I picked up a book called Bionomics by a guy named Michael Rothschild. I mean, I think it’s a somewhat obscure book. But as the name would indicate, and this book was written, I think, originally in 1990, what he was saying was, the way to understand economics is really through biology.

And starting really in the late 1800s but into the early 20th century, economics became very mathematized, and in fact, there’s a wonderful book called More Heat Than Light by a professor named Phil Mirowski, which documents how economists literally, and I mean literally, mapped over equations from Newtonian physics to basically give economics street cred, right? So economics and finance went sort of this mathematical/physics envy route versus going more biological, and I think that in retrospect, you could sort of say that biological way would’ve been a very logical way to go, or as logical, albeit not as mathematically straightforward or tractable. So I read this book Bionomics and I’m like, “This is so cool.” And the guy sort of opens the book by saying, “Hey, you can’t really understand economies unless you understand evolution and so forth,” so I was very drawn to all that.

So that’s the backdrop. I’m sort of primed, I’m thinking about this idea. And then along comes Waldrop’s book, Complexity. And this is really the story of the founding of the Santa Fe Institute. And by way of background, the Institute was founded in 1984 by a number of scientists, and very prominent scientists, many Nobel Prize winners, who felt that academia had become too siloed, right? So the physicists hung out with the physicists, and the economists with the economists, and the chemists with the chemists. But again, most of the interesting problems in the world were really at intersections of these disciplines, and gee, wouldn’t it be awesome if we got these different scientists to hang out and talk to one another? And so this is how this thing got going, and one of the early conferences, for example, was on the economy as a complex adaptive system, right? So the idea of economics was in there from the early days.

And so why this book is so, I think still to this day, kind of exhilarating is because you read about these scientists and how they were coming up with ideas that were far from the mainstream, and when we look back on them now, many of them have become much more mainstream ideas. But it was super cool. And so one of the main protagonists, I think the book does open with a story, is Brian Arthur, who is an economist, and Brian was promoting this idea of increasing returns. Now, if you’ve taken economics, microeconomics at any point, you learn about decreasing returns, right? So if Tim’s lemonade stand’s super profitable, Michael will open up a lemonade stand right next door, charge slightly lower prices, and so you’ll become less profitable, then you’ll have to match my prices and so forth, and we’ll compete our way down to less profitability, so decreasing returns. And Brian pointed out that under certain circumstances, there were these increasing returns, there were sort of these winner-take-all effects.

This is now, again, he was writing about this in the ’80s and ’90s, completely heretical. And by the way, basically, the mainstream economists wanted nothing to do with it. And so Waldrop, and I think in a very engaging way, describes how all these ideas developed. So again, if you said, “The Santa Fe Institute, is there a unifying theme?” it would be sort of this idea of a complex, adapting, evolving system is a way of thinking about it.

So those would be my answers. Those are two wonderful things. And my oldest son, before he went off to college, he did a gap year, and I thought like, “What would be a list of books that would be really great for him to read and internalize?” And I think we had a list of 15 or 20 books. But these were both on that list because I just think it’s really super cool to understand the history of ideas. And by the way, as a teacher, if you’re ever teaching something, I think it’s just super helpful to know where it came from, like what is the genesis of this. By the way, there are a couple things that I’m actually trying to track down now. These are specific finance type of things, and I’m having a hard time finding the first person to come up with this. So it’s kind of cool to sort of go on these little wild goose chases.

Tim Ferriss: Mm-hmm. So I am going to definitely come back to the Santa Fe Institute, and I’ll seed my question now so it can marinate over on your side, but I’m going to take the conversation in a different direction and then we’ll come back to it, which is, it seems like value investing and value investors pop up a lot around the Santa Fe Institute. It seems like Bill Miller, value investor, introduced you to the Santa Fe Institute in, I guess, the 1990s, and then I’ve seen a number of other mentions, so this could be just a misread, but I wonder why — maybe we can just tackle this now. Why not? Why value investors seem to gravitate towards or be involved with the Santa Fe Institute. And I suppose for a definition of terms, if you wouldn’t mind just defining value investing for folks as well.

Michael Mauboussin: Sure. Yeah. And I’m not sure that’s the wrapper I would use, but let me just say what value investing is. Well, just one very specific distinction here. Often when you hear people say value investing, they think about buying things that are statistically cheap, so low price-to-earnings ratio or low price-to-book ratio, and that way of thinking was founded by Ben Graham, who taught at Columbia Business School, and many of the Graham acolytes were sort of these low PE, low price-to-book folks, and they were just buying statistically cheap stocks. I think the modern manifestation of value investing is simply buying something for less than what it’s worth, so it seems like all investing should be value investing if it’s done intelligently. So I think that’s the distinction.

But to your point, I think that the Santa Fe Institute, I mean, there are two reasons that I’ve always found it so exciting. One is the ideas that come out of it, and the second is, and perhaps even more important from my point of view, is the type of people that tend to be drawn there, right? Many academics are perfectly happy to stay in their lanes, right? And in fact, much of academia is aligned to encourage you to do that for promotion, recognition, and so forth. And so the type of people that go to Santa Fe are massively self-selected to be interested, like have polymath-type tendencies. And by the way, investors are a really fun group. As you know, investors are a fun group to hang out with, in part because they tend to be curious people, and I think curious is the main adjective I would use to describe most of the great investors that I know. And they’re learners, right? So almost all the people read a lot and try to think about things.

So Bill Miller’s a great example. Bill’s been extraordinary in my life. You think about different people along the way that have been really important, he’s certainly been that for me. He introduced me to the Institute, as you pointed out, in the 1990s. We’re at a baseball game, we’re sitting there, and he talked about it. But that’s a guy that reads all over the place, reads very widely, reads different disciplines, and thinks a lot about how those ideas might apply, how he might be able to apply those ideas to areas of markets.

You mentioned Bill Gurley. I mean, Bill Gurley’s a venture capitalist, right? You wouldn’t really call a venture capitalist a value investor in the classic sense, but you certainly would say they’re trying to buy things for less than what they’re worth and cultivate them to some degree, and he immediately got all these ideas, cracked all these ideas as well.

Josh Wolfe is on our board. So we have a lot of really — yeah. So I think the main connection between all these folks, the thread, is this idea of intellectual curiosity and a willingness to pursue that curiosity by immersing yourself in that type of environment.

Tim Ferriss: I’ll add one thing to the investor comment, which is, part of the reason I enjoy, if not spending time with, at least observing and studying investors is because they also have a report card in a lot of cases, right? So they are placing bets based on their thinking, so it’s not simply opinion versus opinion. You can look at the returns, or at least for some of them, which I find very, very refreshing. So I also really appreciate that aspect of interacting with or at least studying a lot of investors.

And in preparation for this conversation, I also shot texts to two mutual acquaintances, both thinkers I respect greatly, Josh Waitzkin and Patrick O’Shaughnessy. So I wanted to actually grab a few of the possible questions that Patrick had forwarded. Also a great podcaster. I recommend people check out Patrick’s podcast. And you can feel free to decline, divert, or transmogrify this however you like, but he sent me quite a few things, and we’ll probably dig into a number of them, but I thought one that would be fun for listeners, and I include myself in that group, is the following. An asset class tour with him could be really fun. In other words, how do you think about private equity, venture, credit, public equity, cash, gold, real estate, and what they do for you when done well in the role they can play? Is that a sandbox you’re willing to hop into for a little bit?

Michael Mauboussin: Well, I mean, it’s just something I actually don’t know that much about, but I could probably make one or two comments. One of my COVID projects in 2020 was to do a big piece on public to private equity, and we needed to define a bunch of things just to get them out of the way. So public equities are just stocks that are traded in the United States. Today, there are around 3,500 of them, roughly speaking. What’s fascinating is there are about half as many public companies that trade today as there were in the mid-1990s, and there are, in fact, fewer than there were in the 1970s. So obviously, our economy’s bigger, our population’s bigger, right? So you’d say like you would expect the number of public companies to roughly grow in line with all those other metrics, but that has not been the case.

So that’s an interesting thing in and of itself, why is that, and at the same time, we’ve seen the emergence of private equity. Now, private equity is a wrapper that really covers two different areas. One is buyouts, and in a buyout, typically, a sponsor will buy a company, usually using leverage, then own it for typically five years or so, and then try to sell it for a profit, right? But the key there is that they buy stable businesses, typically with lots of cash flow. Venture capital is the other side of that, which is buying very young companies, often when they’re just getting going, and actually playing a very important role in fostering that development, so being on the board, giving guidance, and so on and so forth, right?

So a couple observations that come out of that that are interesting. One is like why are there fewer companies. And basically, the simple answer is, there have been lots of mergers and acquisitions and not a lot of IPOs. So part of it is the cost of IPOs has gone up, and so there are simply fewer public companies than there used to be. The flip side is they’re much bigger now and more profitable on average and so forth, so that’s interesting.

And then the second thing I’ll just say, Tim, this is just me sort of speculating a bit, which is, roughly speaking, say, since 1980, so let’s call it the last 40 years or so, there’s been roughly, maybe we’ve reversed this in the last year or two, but roughly a steady decline in interest rates. So that means your expected returns basically go down, right? Because you put money in the bank or invest in something, you get a lower expected return. And at the same time, our liabilities are going up, right? If you have to put a kid through college or you’re thinking about and planning your own retirement, you’re expected to live longer and so forth, so your liabilities aren’t going down.

So if the expected returns are going down and the liabilities are going up, what do you do about that? And the answer is typically you go out on the risk spectrum, you take on more risk. And how do you take on more risk? The answer is, there are two interesting ways to do that. One is to use more financial leverage, more debt, and that’s what buyouts do specifically. And the second is to buy young companies that are inherently riskier companies, and that’s venture capital.

So I think part of the answer of the emergence of those two segments, those asset classes, is precisely to accommodate that basic reality that returns in more public markets have been less exciting than they have in the past. Some of the other ones you mentioned, credit or gold or whatever, I don’t really have much to say about those things.

Tim Ferriss: Okay. Well, we can park those. I would like to transition then to base rates, so the power of base rates in life and business/investing, and the prompt that I got was something related to one of the famous Triple Crown horses. So if that is enough of a catalyst to get the party started, then I’ll let you run with it.

Michael Mauboussin: Let’s start the party. So let me just say, Tim, that if there were an idea that I could go back and tell my young self, my 18-year-old self, it would be this idea of base rates. I learned about this from Danny Kahneman. He used slightly different language, but we’ll lay out the terms and go back and forth.

So Kahneman used this idea of the inside versus the outside view, and the outside view is base rates. Okay, so here’s the basic setup. If I present you with a problem, and it could be almost any kind of problem, how long will it take you to remodel your kitchen and what will it cost? When will you be done with your book manuscript? You’re a college student, when will you be done with your term paper, your problem set, or whatever it is? The standard way to think about that is to gather a bunch of information, so think about it, combine it with your own inputs, and then project, right? And left to our own devices, that’s how we all do these things. That’s called the inside view.

The outside view or the base rates, by contrast, says, “I’m going to think about my problem as an instance of a larger reference class.” I’m going to ask a really simple question, like what happened when other people were in this situation before? And it’s a very unnatural way to think about the world, for a couple reasons. First, you have to leave aside your own views and your own cherished information, which we tend to place a disproportionate amount of weight on. And then second is you have to find and appeal to this base rate, and it may not be at your fingertips or you may not even think about it that overtly. But what psychologists have demonstrated is that, while it’s not all one or the other, some thoughtful combination of the inside and the outside view, sort of your own analysis versus base rates, tends to lead to better and more accurate predictions.

So, and this was sort of real-time, I don’t know what year this was, 2008, 2009, something like that, there was a horse running for the Triple Crown. So the first race is the Kentucky Derby, and he won the race by like four and a half lengths. Big Brown, this is. The second leg is the Preakness, it’s in Baltimore. He was stronger still. I think he won by like five lengths or something. So he’s one race away from horse racing immortality, which is pretty impressive.

Now, the last race is the Belmont. It’s the longest, which is difficult. So many of these horses are actually quite fast, they’re sprinters, longer distances are more challenging for them, but Big Brown was obviously sort of the favorite, appropriately so. And he went off at odds that suggested a 77 percent probability of winning the Belmont. 77 percent. So the question is like, all right, how likely is it that he’s going to win at that probability? And what you do is you look at base rates and you ask how many of the horses in a position to win the Triple Crown, what percent actually did that. And there’s some data, by the way. There have been Triple Crown winners since this, so this is the data at the time. At the time, there had been, I think there were like 28 or 29 horses that had tried, and 40 percent had succeeded. So 40 percent is already a lot lower than 77. But what’s interesting is that eight of the nine horses that tried before 1950 succeeded and only three of the 20 since 1950 succeeded. So it was a 15 percent success rate since 1950.

All right, so you’re like, “Okay. Well, that’s an interesting data point. He’s going off at 77. Now, it’s…” And then the second thing is, you go, “Maybe this is just like a wicked fast horse. Maybe this is the new Secretariat.” And there’s a way to measure that. It’s called the Beyer Speed Figure. And speed figures have really only been kept accurately since, I think, about 1990, so about 30 years, call it. At the time, it was probably more like 20 years. And this horse was actually the slowest by speed figures of the last seven contenders for the Triple Crown, all of which had failed. So he’s going off at 77 percent odds. And of course, he ran the race, and what sort of capped the story was the horse basically took the day off. The jockey technically eased him, but he came in last, basically, right? And so the next day, they give him a physical, from his nose to his tail, and they’re like, “Is he okay?” He’s fine. He was just fine.

So the guy who I was talking to about this, guy named Steven Crist, who’s an awesome guy by the way, fascinating guy and a famous handicapper and a brilliant writer. So I’m like, “Steve, what’s up with this, man?” And he’s like, “People don’t seem to realize we’re talking about horses here.” So anyway, maybe Big Brown didn’t realize. So again, the story was he was a favorite. He was probably a 40 percent probability, 45 percent if you’re using all these data, but at 77 percent, he’s a massive bad bet.

And this goes back, Tim, just to circle back around to our wisdom of crowds thing, right? So what happens is, this is a diversity breakdown, right? So most people don’t bet on horse races day in, day out. There are professional handicappers and people who do it for fun, but for the most part, there’s not a lot of volume and so forth. However, when there’s a Triple Crown contender, people get fired up, right? And in fact, the Belmont, I think the attendance doubled from the prior year even. It was a really hot and steamy day, and people pull out their wallet and they start plunking down their bets, because you want that ticket that says, “I bet on the Triple Crown winner,” right? And of course, these are markets that are set by the dollars flowing in. They’re parimutuel markets, right? So the dollar flows are what dictate the odds. So there is a perfect example of diversity breakdown, right?

Tim Ferriss: Right. Makes perfect sense.

Michael Mauboussin: And again, it’s like day in, day out, horse race results are actually incredibly efficient and they’re certainly very hard to beat when you take out the track take, for example, the vig. Obviously, there are professional handicappers, but not many, and those who are doing it are very sophisticated and so forth, but you and I, regular Joes, we’re not going to make money doing this. Anyway, so that’s a good example. And it’s hard to short. You can bet the field or something, but it’s hard to short a horse. It seems a little bit un-sporty to do that. But anyway, that’s the Big Brown story.

But the broader lesson is that no matter what you’re thinking of doing, moving to a new city, taking a new job, anything, you should be asking, “Is there an appropriate reference class? Is there a base rate that I can look at to gather a little bit of information about my prospects?” And where we’ve spent a lot of time on this is, for example, things like corporate performance, right? So I know that you do a little bit of investing as well, but questions like, “If a company has sales of a billion dollars, what’s the distribution of growth rates I should expect?” Let’s look at history to figure out how good could it be, how bad could it be, what’s the average, then where do I think my company’s going to fall within this distribution, and how optimistic or pessimistic that might be.

Tim Ferriss: Mm-hmm. So I could ask a million questions about base rate, but the one that’s really sticking out for me is what on earth happened around 1950 with the horses? Was it anti-doping measures? What clicked in that led to that — 

Michael Mauboussin: It might be the opposite, right?

Tim Ferriss: Right. It could be the ubiquity of doping. The secret’s out.

Michael Mauboussin: So, Tim, by the way, this is a fascinating conversation. I’m way over my skis already, but I want to tell you that for almost all the big races, like the Kentucky Derby, because they’ve been run for a very long time under the same conditions, I think the horses today run essentially the same speed as they did in the 1950s. They’re no faster. And thoroughbred horses, this topic is fascinating in and of itself, right? Because they’ve been bred from a very, very narrow stock. So there are a handful of horses that are essentially the ancestors of every horse that’s around today, and so the genetics are really crazy, and that’s why they’re such fragile animals. But I think that they’re at their physiological limits, and they have been for a while.

Secretariat was a freak, literally a genetic freak, and as you know, you probably heard these stories, I don’t know if they actually did this, but apparently, there was some sort of an autopsy and Secretariat’s heart was much, much larger than was normal for a horse like that. So Secretariat really was a one-in-a-multi-generation freak. But yeah, I think they’re at their physiological limits, so the horses today are no faster than they were back in the day.

Tim Ferriss: Yeah, certainly hasn’t stopped people from doping horses. I know of quite a few drugs that have made their way from research, to racehorses, to bodybuilders, to the billionaires who want to live for a million years with six-pack abs. So the racehorses have been a sort of a test bed for a lot of drugs for a long time, but it’s interesting that as far as speeds go, it seems that, at least the top end — I wonder if the averages have changed at all, if it’s — 

Michael Mauboussin: Yeah, it’s interesting. I mean, we can go look at all the numbers, but I would imagine things like marathon times or 100-meter dash and so forth, I’m sure they’re all faster, right? Swimming times, for sure they’re all faster than they were in 1950. So it is interesting that this is one domain where they hit the wall, but part of it is because, this goes back to our thing on diversity, you have no diversity in the genetic pool, so as a consequence, you sort of tapped it out. So if you wanted to build a faster horse, you would have to shuffle some genes around, which has not been happening, right? By definition, it can’t happen, so it’s interesting.

Tim Ferriss: Mm-hmm. Yeah. Well, I’m not going to become a handicapper, but I may have to chat with Steven Crist and learn a little bit more about this horse game.

I would like to jump, though, to discussing what separates good from great investors, and there’s a line here that I believe is your writing, please fact-check me if I’m not getting that right, but this is part of a larger piece: 

“What separated the good from the great investors had little to do with their analytical tools but a great deal to do with how they made decisions.” 

And I suppose that’s maybe a subset question of, just in your experience, in your studies, in your practice, what have you found most reliably separates good from great investors?

Michael Mauboussin: Let me just say the analytical stuff is the ante for the game, right? So you have to understand basic accounting, finance, and so forth. So there’s no getting around that and you need to do that. But I think that quote captures certainly what I think about this, which is, the key is making really good decisions. And this is particularly difficult under a bunch of different circumstances that we could talk about, but maybe there are two or three biases I think are really difficult to circumvent. The first is overconfidence. We tend to think we understand the future better than we actually do. And this is something we can test pretty well in experiments. And as a consequence, people think about ranges of outcomes that are too narrow and that gets them into some trouble. So, overconfidence. And the second one I think is even more difficult, which is confirmation bias, which is, once we’ve made up our minds about something, we tend to seek information that confirms our point of view, and dismiss, discount, disavow information that does not.

And one of the vital things in investing is updating your views. And if you’re not constantly doing that and doing that honestly and objectively, or as objectively as possible, that’s going to put you at a substantial disadvantage. So that’s that decision-making aspect. Now, there’s another aspect to all this that’s really important, which is you ultimately have to come up with a view that’s different than other people, and that view ultimately has to be correct. And so there’s a little bit of an anti-social component to investing, to being a great investor. And some people I think do that better than others or more naturally than others. But that ends up being about decision making because at the end of the day, often you buy something and it goes down in your face even if your thesis is correct, and do you have the fortitude to stick with that position? The third thing I’ll mention, Tim, have you had Robert Sapolsky as one of your guests? You have, right?

Tim Ferriss: No, I haven’t, actually.

Michael Mauboussin: No. Do you know Robert Sapolsky, do you know him?

Tim Ferriss: I recognize the name. You should probably remind me.

Michael Mauboussin: Yeah, so he’d be a fun guy for you to talk to on many levels. And he’s a neurobiologist and a primatologist at Stanford University. And he wrote a book called Behave in 2017, which is, I would say, probably the best book I’ve ever read on human nature. It’s not an easy read, but it’s very well-written and very interesting. He has a book coming out later this year, I believe, on the topic of free will. So, interesting guy. But Robert, for years, obviously spent his academic year at Stanford in his lab, and then he would spend the summers in Kenya studying baboons. And if you ask him, “Why do you study baboons?” he would say, “Because they’re physiologically like humans, and like humans, they spend three hours a day feeding themselves and the rest of the time tormenting one another.”

But what Robert was interested in is the topic of stress. And so he wrote a book called Why Zebras Don’t Get Ulcers in the mid 1990s, which is a wonderful title and it’s a wonderful book about the idea of stress. So he’s a key guy in understanding stress. And so of course in these baboon troops, the questions he’s asking are like, hey, who’s stressed out here? Is it the alpha male, the beta male, the females, the low-ranking males? Who’s getting stressed out? And of course you can shoot a dart into the flank of a baboon and draw blood and measure cortisol, so they can understand exactly what’s going on, at least in this rough proxy for stress.

This is all a big wind up to talk about something that’s important for investors, and in particular, time horizon. So Sapolsky asked this question: what stresses out an animal? You’re a zebra hanging out in the savanna, pursuant to the book’s title, what stresses you out? And the answer is a lion decides you’re the target for lunch. Bad news, right? So your stress response is going to kick in, you’re going to pump blood, you’re going to pump adrenaline, and you’re going to run really fast. What’s also important is you’re turning off all your long-term oriented systems. You’re turning off growth, immune, digestion, reproductive systems. These are caloric luxuries for another moment. If you elude the lion, you go back to your group and you reverse all those processes, you go back to homeostasis.

Now the question is, what stresses a human being? And we have episodic physical stressors, and you’ve put yourself through more than your share of physical stressors in your different antics. But for the most part, our stressors are psychological. And it’s the big deadline at work, it’s a relationship concern, it’s something about money or whatever it is. And the point that he makes is those psychological stressors trigger the same physiological response. Your brain is not always so good at distinguishing between a physical threat and a psychological threat. And if you’re constantly psychologically threatened, your body turns its stress system on, or stress response on, and it never turns it off. And so if I said, what are the symptoms of stressed people, you’d come up with a very familiar list. They’re sick all the time, they have problems with their gut, they have reproductive problems. In extreme cases, they don’t grow properly and so forth.

So here’s the punchline as to why I think this is so interesting. When you’ve turned on your stress response, you tend to shorten your time horizon. So you’re the zebra running away from the lion, you’re not thinking about what am I going to be doing in two weeks? You’re thinking, how am I going to survive 20 seconds? Likewise, if you’re a human and you’re stressed, you pull in your time horizon. And that’s a really interesting issue for an investor, because it can be the case that as markets, for example, are tumbling, and one could argue that your opportunity set is becoming more attractive, because you’re getting the same asset at a lower price, everything inside you is going to say, man, I need to survive now. I don’t need to worry about what’s going to happen in two or three years.

And I actually lived through this. I mean, in one of the big drawdowns of ’08 and ’09 in the Financial Crisis, some of our investors were recommending stocks to the portfolio manager and the portfolio manager was saying, “Hey, this seems like a great idea over three years, but if I put this in my portfolio and it goes down the next three months, I’m going to be out of a job, right? I’m going to be gone.”

Tim Ferriss: Yeah, totally.

Michael Mauboussin: And so this is another aspect of it, this equanimity, this ability to keep your eyes on the horizon, even when you’re feeling the short-term stress, because we all do. And that leads obviously to what you should be doing, all the natural de-stressors and so forth, and there’s obviously a list of things we could talk about there. But to me those are some of the qualities. And I guess the other thing I’d just say overall, I already mentioned it, Tim, but I’ll just emphasize it again, is that almost all the great investors I know are just incredibly intellectually curious. And they want to understand how things work and they’re willing to listen and read and learn.

None of them think they’ve figured it out. None of them think that the game has been mastered. They’re constantly trying to improve their ability, their craft, and so forth. So that’s the other thing. And that’s why they’re so much fun to be around, for the most part, because they’re typically very interesting people because they read a lot and think about how various ideas that are swirling around there might apply to what’s going on in the world today.

Tim Ferriss: I’d love to chat a bit about some of the content of your book Think Twice, which, for people who haven’t read it, I recommend. It also has a great cover quote from Billy Beane, the general manager of the Oakland A’s, or then-general manager, who was profiled and highlighted in Moneyball. One hell of a quote, so congratulations on the cover quote, as well. My question relates to a component of the basic book description, so I’ll just read this. 

“In Think Twice, Michael Mauboussin shows you how to recognize and avoid common mental missteps, including: one, misunderstanding cause-and-effect linkages. Two, aggregating micro-level behavior to predict macro-level behavior. Three,” — this is the one I want to talk about — “not considering enough alternative possibilities in making a decision. And four, relying too much on experts.” Not considering enough alternative possibilities in making a decision. Could you perhaps give some examples or just recommendations for how people can expand the number of alternative possibilities they can generate or consider in making decisions?

Michael Mauboussin: Absolutely. It’s a fabulous question, Tim. And by the way, Think Twice, just the premise of that is that when you’re faced with certain types of situations, those that you described, your mind naturally wants to think about it one way when there is a better way to think about it, often a better way to think about it. So this is encouraging you to say, from time to time, “Slow down, think twice.” So that’s where that idea came from. And you put your finger on something so important, which is again, thinking about a wide range of alternative possibilities, wider than people typically think. So what are the techniques to allow people to do that? I’ll mention maybe three of them, and I’ll try to do this fairly quickly.

The first is what we’ve already talked about, which is base rates. So let’s just pick something specific. You’re trying to figure out the growth rate of a company or something mundane like that. You might say, well, the consensus is it’s going to grow five percent, and if they kill it, they’re going to grow eight percent, and if they do really badly, they’ll grow two percent. So we have this fairly narrow band. Then you might say to yourself, well, okay, let me look at other companies that were sort of in the same situation, same size, roughly the same industries, and look at what their actual growth rates were and see if that’s different. And quite commonly, they’re going to have a much bigger range of outcomes, and that’s going to open up your mind to say, okay, perhaps I should be thinking about something more expansive than what I’m doing now. The second idea is a concept called a pre-mortem. Have you talked about this on the show before, pre-mortems?

Tim Ferriss: I think I discussed pre-mortems, well, pre-mortems and their opposite. I think I discussed this with Roelof Botha of Sequoia. And there are a few other folks, but I think this is worth revisiting.

Michael Mauboussin: Yeah. And I’ll be quick about it. So we all know about post-mortems. So post-mortem, the patient has died and we sit around and we say, given the information we had at the time, could we or should we have done something differently to lead to a better outcome? So we learn from our mistakes, if there are mistakes that were made. And we’re all familiar with forecasting, so we’re standing in the present, peering out to the future. A pre-mortem, as the name implies, is actually a third exercise. And by the way, it was developed by a psychologist named Gary Klein. So if you want to know more about pre-mortems, look up Gary Klein. He wrote a Harvard Business Review article about this specifically that’ll allow you to track this down.

So a pre-mortem is that you’re standing in the future. You pretend like you’ve made the decision. Now you’re in the future, say a year from now, and this decision’s turned out horribly. It’s embarrassing. And then each person individually, and again, going back to our concept of diversity, each of us individually writes down why this decision turned out so poorly. So essentially we’re documenting the ways things can go bad. And so what happens often in organizations is we all coalesce around making a decision, making a particular investment, whatever it is, and we’re kind of optimistic and we’re sort of positive about it.

And the pre-mortem, again, just draws out the possibilities of things not going well. And perhaps there was the junior person in the room who didn’t feel emboldened to talk about this downside scenario that no one seemed to be contemplating. But when he or she has to write it out and we all read them, that opens up our minds to some degree. The last thing, which is related to this, is just red-teaming. So most people are familiar with this concept, but it comes from military strategy, originally. I think cybersecurity is a really good example of it today, but the blue team defends the strategy, the red team attacks. So if you’re a general laying out a strategy, you say, “Okay, here’s our strategy. If we were to attack ourselves, how would we do this?” So you’re red-teaming.

So in cybersecurity, you develop your security and then you hire hackers essentially to hack yourself, to see how vulnerable you are. So again, it’s just opening up your mind to other possibilities in a way that might be difficult to do. And one thing I want to mention that’s related to this, Tim, that I think is so important, and it was part of the conversation you had with Niall Ferguson, is this idea of counterfactuals. And I think he said something, which is correct, which is that people like historians don’t really like counterfactuals. There are some psychologists who talk a bit about counterfactuals. But a counterfactual, again, is what didn’t happen that could have happened?

So we all understand that the future is pulsing with possibilities. And even if we do lay out these ranges of possibilities, we know that there are different paths the world can take. But once things happen, it’s like we forget about all the other possibilities. And there are two things, one’s called the hindsight bias. We start to think we knew what was going to happen with a greater probability than we actually did. And the other’s called creeping determinism. We start to think that what happened was the only thing that could have happened. And those are both very dangerous things to fall into as well. So it’s this idea of always keeping your mind open, saying, okay, this turned out this way, but we understand there was a counterfactual and things could have turned out differently.

Tim Ferriss: So building off of learning from failures and just pre-mortem, post-mortem, all of these wrappers we can put around mistakes and what we can learn or attempt to learn ahead of time in terms of expanding possibilities, but also learning afterwards. That’s a very clumsy way to lead into a very simple question, which is, do you have any favorite failures of your own? So these could be failures that ultimately seeded later successes, things that taught you a disproportionate amount compared to perhaps other mistakes or failures you’ve experienced. Do you have any favorite failures?

Michael Mauboussin: I mentioned Drexel Burnham Lambert and this training program and all the positive things that came out of it. And by the way, David Epstein — well, I think David Epstein talked about it, but I’m sure the idea’s been around, is this idea of skill matching, which is, you figure out what you’re likely to be okay at, and where you can add some value. That benefit happened there. But that job at Drexel Burnham ultimately led to being a stockbroker, or today it’d be a financial advisor. And Tim, I was just an abject failure at this job. Well, I resigned, in quotation marks, but effectively I was fired.

It was pretty bad. In other words, I was doing something I wasn’t good at, I wasn’t happy doing, and I was getting negative feedback on it. So I would call that a failure. And that was the culminating point of the training program. By the way, other people in my training program went on to be quite successful. So it was me, not others. And what I took away from that was I’m probably not doing what I should be doing, and I should think more about what kinds of things I could do where I could be more valuable or more useful and try to pivot my activities in that direction.

Tim Ferriss: Coming back to Think Twice, overreliance on experts. I’d love to hear you expand on that. And maybe just defining experts as compared to, say, people with experience we could do first. But I would love to know how people can counteract or how they should think about overreliance on experts.

Michael Mauboussin: The psychologist Greg Northcraft has this great distinction between experience and expertise. And he says, “An expert is someone who has a predictive model that actually works.” And this is a super big deal in the world of investing. So there are a lot of people who have been around for a long time, they’ve sort of gotten through, maybe with a couple good breaks along the way. And they’re experienced, but they don’t necessarily have predictive models that work. Whereas, really, a true expert has a predictive model that works. I’m increasingly skeptical about experts and I think the pressure on experts is coming from two different directions. The first is algorithms. And this goes back to some famous work by Paul Meehl from the 1950s on clinical psychologists, demonstrating that it’s almost always the case that algorithms, so basically rules that we write down, outperform many experts in many different fields.

By the way, it’s actually even more interesting than that because sometimes you can go to somebody and say, “Tell me how you do this. Tell me how you think about what you’re doing.” And you can actually write out a bunch of rules, and they’re telling you the rules. And then if you actually compare the person with the model that the person created, the model does better than the person themselves. So the slippage is not the way they’re thinking about the world, the slippage is they’re not executing, right? And this is a lot of the work on the checklist and so on and so forth. But anyway, so the first thing is where there can be algorithms. And I think what we’re seeing in our world is more and more of these — now, they’re downsized algorithms, but more and more of these algorithms.

And then the other one, Tim, is what we’ve already talked about, which is in the complex domains. I think in many instances there’s a lot of evidence that the wisdom of crowds is better than “experts.” And this goes back to, we talked about Phil Tetlock and the work on superforecasting a few moments ago. Phil Tetlock, the first book I read of his was in, I think, 2006 or 2007, called Expert Political Judgment. And there he actually took about 400 experts, asked them to make very specific — and by the way, these are master’s, PhD-level people at the top of their field, and asked them to make very specific predictions in economic, political and social domains. And then kept track and found that they were not much better than chance at making their predictions.

And by the way, they’re like you and me. When they get something wrong, they have a litany of excuses. Like, “Oh, you know, you just wait.” Or, “Oh, had this happened, I would’ve been right about this.” One of my favorites is somebody said, “My prediction was so important, it changed the course of world events.” It’s like, fuck it. So that’s an interesting one. So I think that the wisdom of crowds can outperform. And so I think that’s leading to this idea called the expert squeeze. But in certain domains, again, you should listen to experts. One of the ways that you can figure this out is to say, okay, if I appeal to a bunch of different experts, are they going to basically give me the same answer? And so say you want to know, hey, how’s the weather going to be tomorrow? You could turn on various TV channels, probably a couple internet sources, and you’re going to get roughly the same thing.

So those experts are going to converge on the same solution. By contrast, if you say, again, geopolitical stuff or what will the price of oil be and whatever, you can get some very well-qualified and credentialed experts from across domains, with different views, and they’re going to say things that could be potentially wildly different. And so one example where this is a really big deal today is stuff like climate change. So even if you and I agree on the inputs and we’re both experts, we may both build models that are on some level credible, but they may generate very different outcomes. And so what do you do with that? So that’s an area where, even with experts, you have to put in the question marks.

Tim Ferriss: I suppose a lot of folks rely on experts in many, many different domains. Let’s just say if they’re trying to look at some way of analytically chewing on all the data from experts to make a decision or from one or two select experts. On the opposite end of the spectrum, there’s this term that gets used, by many people, I think, very haphazardly, which is intuition. However, one of the questions, I suppose broad questions, I had that I wanted to explore with you was how you think about intuition, question mark, end of question. And broadly, here’s one from Josh, which is, “How can an awareness of cognitive biases be internalized into intuition?” And then he added to that, “and somatically accessed,” but we can potentially hit pause on that. You and I both know Josh. But how do you think about intuition?

Michael Mauboussin: It’s an absolutely fascinating topic. And here again, by the way, I’ll mention Gary Klein, the pre-mortem guy, has been a pretty strong advocate for something called naturalistic decision making, which relies a lot on intuition. Danny Kahneman, who won the Nobel Prize in economics as a psychologist, has been one who’s pointed out the heuristics and biases and things like prospect theory and so forth. So they were sort of on opposite ends of this. And interestingly, they did an adversarial collaboration in 2009, they published a paper called “A Failure to Disagree.” So that’s interesting, Kahneman and Klein. So Tim, I would take one step back and say, using some language from psychology, this is brilliant branding by the way, there are two systems of the mind, system one and system two. It’s just like inside/outside views. Those guys need marketing, they need a marketing department over at those psychology departments.

But system one, as you know, people have talked a lot about this, it’s your experiential system. So it’s fast, it’s automatic. It’s difficult to train, but it basically runs your life for the most part. System two is slow, it’s purposeful, it’s deliberate, it’s costly. You have to recruit it often. It’s lazy, it doesn’t really want to do much work. And I would say that intuition is a situation where you’ve trained your system one in a particular domain to be very effective. For that to work, I would argue that you need to have a system, so this is the system level, that it’s fairly linear and stable. So linear in that sense, I mean really the cause and effect are pretty clear. And stable means the basic rules of the game don’t change all that much. So if you have those conditions, and by the way, even driving your car, you’ve got some intuitions about how that works, you can get around, that’s fine.

If I put you in a stressful driving situation, of course, like bad weather or certain speeds, whatever, what you know to do is not going to be sufficient to get the job done. So I had this interesting conversation with Josh about this, specifically. By the way, The Art of Learning is fabulous. I’m a huge fan. And by the way, just in awe of his accomplishments. All his accomplishments, but certainly when you think about two domains, chess and martial arts, for instance. And he was working with different investors. And he would say to them, he told me, he’s like, “I would tell them, rely on your intuition.” And I was like, “Dude, I don’t know if that’s good advice.” And the reason is, I’m like, “For you, Josh, you happen to excel and become world-class in two domains that were linear and stable.”

Chess is a perfect example. It’s almost like the canonical example where experts exist. They’re grand masters, they chunk. I mean, all that kind of stuff, they’re just operating at a different level than certainly I am. And then you go to the martial arts, and I just remember reading in The Art of Learning, he’s got this interesting section toward the end where I think he was in some sort of world championships for Tai Chi and it was in Taiwan. And so he goes there, and I don’t know enough about how all this works, but I hope I get this roughly right. Apparently, they decided to change the rules, to change the starting position for engagement, and the size of the ring, which seemed like a really big deal. So he’d been training using these other things, and all of a sudden, essentially, his system became unstable. Likewise, by the way, if you said in chess, we’re going to make the board now 10 x 10, and the pieces move differently, all the expertise of the grand masters would go out the window.

So I think that intuition tends to be way — because we all feel it, by the way. We all have that sensation of an automatic answer that comes to us. But I think it tends to be way overweighted. And the other thing we have, another problem with intuition is a big sampling bias problem, which is people say, “How’d you come up with this idea?” And like, “Ah, it just came to me in a flash, intuitively.” And we forget about all the dumb ideas that came to people they don’t talk about, so there’s a sampling problem as well. So that to me would be the thing on intuition that I think is really important. So it’s not that it doesn’t exist, it absolutely does. And it’s through this training. Now, this goes back, I mean, Josh’s question’s a little bit another level, which is how do you eradicate bias from this?

And I do think that’s almost part and parcel of the system. Because if you’re learning in one of these stable linear systems, it’s easier to point out your failures, I think, or your mistakes, or your errors than it is perhaps in other systems. So I think they kind of get weeded out to some degree. Also, I mentioned before, overconfidence and confirmation bias. We have biases and all sorts of other things, but those tend to be more relevant for more open domains per se. And by the way, I should ask you, Tim, so would you say, when you think about your experience as a wrestler, let’s just focus on wrestling, perhaps, or even dancing, but wrestling. Do you feel like you trained yourself in such a way that you — not to say it was perfectly automatic, you may have had a little dialogue in your head, but you sort of knew what you were doing and how to do it fast and hard and accurately and all that. Would you say that?

Tim Ferriss: I would say that probably even more so in Judo. But I certainly think I went from, say, unconscious incompetence to conscious incompetence to conscious competence to unconscious or automatic competence, which is — 

Michael Mauboussin: Yes. And that’s par for the course.

Tim Ferriss: Yeah, it’s part and parcel of competing in probably, I would have to imagine, most sports. Not all, but those where speed is a deciding factor.

Michael Mauboussin: Absolutely.

Tim Ferriss: And so I do feel like I got there.

Michael Mauboussin: Yeah. And again, I don’t know exactly how the rules are in Judo, but again, if I change the rules somehow, or even in any sports you watch, popular sports like basketball or ice hockey or football, whatever. If you change the rules, it takes people time to adapt and “retrain” in order to internalize. And like you said, you go from that awkward phase of I’m thinking about what I’m doing, to being automatic.

Tim Ferriss: And it should provide maybe a little bit more context to you for why I’m asking about intuition. Certainly I’m asking because I wanted to build off of what Josh had put forth, and I know he thinks very deeply about different systems of thinking and is truly a spectacular competitor and is almost just beyond comparison for me in his ability to systematically deconstruct and become world-class in different domains. I mean, he’s doing it with foil boarding right now. And to watch his progress over the last few years has just been astonishing, quite frankly. But I’ve also started to ask other podcast guests and friends in different domains about intuition. Not because I’m looking to justify it in any kind of new age way or over-weight it, but I’m curious how, perhaps put a different way, they utilize different systems of thinking in their own lives.

And so I had, for instance, John Vervaeke, who is a professor of a few different things, cognitive science among them, at the University of Toronto. And he talked about intuition as implicit learning, so a kind of pattern recognition, and how that can be well shaped or poorly shaped. I’m using my words and not his. So I am, and this isn’t a question, I suppose, more a confession of context, which is just to say I am increasingly interested, though, in very fast decision making. Understanding that there is a sampling bias. It’s very easy to remember the handful of times where you’ve come up with something brilliant very quickly, and subconsciously or consciously dismiss all the stupid things that you came up with. But nonetheless, an area of interest, I guess partially because I’ve just focused so much on the analytical side and the more laborious forms of arriving at answers, that that’s something I’m deeply interested in.

Michael Mauboussin: But I do think, I mean, even your point on implicit — I don’t know what your exact phrase was, but implicit. I think certainly would fit with what I would believe about this, which is that even, again, often experts, if you ask them how they’re conceptualizing things, I’m not sure they’re completely aware of how they’re doing it. Even things like chunking, I think chunking was something that was observed by psychologists looking at chess players. I don’t know that the players themselves were saying, “This is how I’m doing it.” Right? So it’s super interesting. Yeah, no. I think it’s super interesting how the mind works. And experts, they definitely work in a different way than the rest of us. But I’ll just say in the world of investing, investors often talk about this idea of pattern recognition. And again, pattern recognition, if your system has certain characteristics, I think works great. The question, again, in investing is how do I parse what I think would lend itself to where pattern recognition will be effective versus where it’s unlikely to be effective? And that’s the interesting question. That’s the line I’m interested in understanding.

Tim Ferriss: Totally. I’m similarly interested. Well, let me bring up maybe just one or two final questions, and we can certainly go anywhere else that you might like to go. But this one is less conceptual, more personal. So this is something that came to mind for me, but also came up with some of your students in your tribute video, and that is time management. So you have jobs. You write books. You read a lot of books. You have five kids. The list goes on and on, and I would love to hear any advice or tenets or rules you have for time management. Maybe that’s the wrong term to use, life management perhaps. You mentioned one of them before we started recording, which is being religious about sleep. I don’t know if that’s one of the pillars, but how on earth do you juggle all of these things and still seem to have a reservoir of energy left over? It’s pretty staggering to me.

Michael Mauboussin: Yeah, it’s certainly not as impressive as it looks. But let me just say that I’ll start with my life partner, my wife, and I really mean a partner. And I’ve always thought, this is at least my own view, I don’t know if people have talked about this or this is how they feel, but it’s almost like an amplifier. And if you have a really good relationship and someone that works well with you, and you’re compatible, and you have the same sort of sets of goals, it just amplifies everything. It makes everything better. And I’ve had that. So we’ve been married 30-plus years, and that is the number one single factor, I would have to say, just being with a partner who’s really supportive and understanding. For years when we’d go off on vacation, she would always allow me a couple hours to go off and read stuff and something like that, so just being thoughtful about that. So that is first and foremost, and dealing with a lot of stuff, including with the kids and so forth.

And then the second thing is, I don’t know, I’ve just been doing this for a very long time, right? So it sounds like you’re reading up all these things. It’s just over a very long period of time. I will just say that I enjoy a lot of this stuff, so I really enjoy learning things. I enjoy reading things. I enjoy trying to understand things at first principles. I enjoy trying to track ideas down to their genesis to understand where they came from and how they got to where they are today. I also enjoy writing. I’m not that good at it, but I need to keep working on that. By the way, was John McPhee your professor, by the way, at Princeton when you — 

Tim Ferriss: He was. And much like some of your students were saying that their friend took your course 15 years before and kept all the notes, I still have all of my notes from John McPhee’s seminar at Princeton.

Michael Mauboussin: Yeah. I mean, I just read — I’m embarrassed to say, I got to it late, but I read Draft No. 4 last year, and I was just like, “On some level it’s brilliant, and on another level, it’s so discouraging.” It’s like this guy is so off the charts, so good. Or you read Michael, another Princeton guy, Michael Lewis, or you read these guys, you’re like, “Oh, my God.”

Tim Ferriss: Yeah. They’re both spectacular.

Michael Mauboussin: Spectacular, right? But I think this idea of organizing your thoughts and trying to communicate them effectively, that’s also very motivating. But I will say this, and this is something I share with my students, and I try to embody this to the best of my ability. But sort of the foundation, and I use the lame term “mental athlete,” is: if I were trying to compete at a high level as an athlete, what kinds of things would I do? And I would practice, and I would make sure I was getting my rest. I would recover. I would make sure I’m eating properly and so forth, right?

So what would that be for cognitive tasks? And by the way, I really appreciate that you spent so much time with Matthew Walker and brought his ideas to the world because I think his stuff is really, really powerful, and in particular, just to zoom in on this idea of cognitive performance. Or, let’s say negatively, the cognitive degradation, if you’re not sleeping properly, is just staggering, right? So to me, the cornerstone is sleep. And I really do try to focus on that, pretty religious about the sleep thing. And I’m an eight-hour guy. That really does make a difference for me.

And then for me, what follows is exercise. I’ve always been a lifelong athlete, so I’ve always enjoyed movement. But for me, that’s really important, to be able to move every day and be very active. And I don’t know that I do everything, Tim, that you’re telling me I should do. I lift, and I could mix it up a little bit more. And then for me actually, diet, and I try to be very careful with diet too. But that is actually — if the first two are in shape, the last one seems to take care of itself, pretty much. It’s like if I’m not sleeping or exercising, everything else goes to hell in a handbasket, for me personally. So those are the things. So that allows me to perform, I think, to operate at a fairly high level, for me.

Now, one quick story on this is that in the late 1990s, I was at Credit Suisse, as you mentioned, and I had a brief stint as the — a job called the product manager, which meant I ran the morning research meeting, which started at 7:30 in the morning, which meant I had to get in there a little bit earlier. We live in Connecticut, so I had to commute in. So I was getting up really early every day. And with five kids, it’s hard to go to bed super early, right? So I was just massively sleep-deprived. And I signed a book contract with Al Rappaport to write the first edition of Expectations Investing.

So I was like, “All right, I’m going to do this.” I got clearance from my boss. I’m like, “I’m going to work from home three days a week.” This is back — whatever, this is 1999, 2000. And so what did I do, right? I spent an hour more of my time sleeping; I slept an hour more. I’m like, “Oh, my God. This is how I’m supposed to feel?” At 3:00 in the afternoon I’m still productive. For years, literally, every afternoon I was a basket case, right? I could talk to people, but I could certainly not do any heavy-lifting cognitive work.

Whereas correcting all that has really made a huge difference. So that to me, that works for me. Now, I would just say the other thing is — because people often comment how much I read and whatever. I actually don’t do a lot of other things too. If you ask me about Game — I’ve never seen Game of Thrones, for instance. I’m not proud of that. So there are some trade-offs, right, just to be clear about all these things. So people focus on one aspect of something without looking at the deficits in other categories. It’s very important to balance all that out. But ask my — 

Tim Ferriss: What are other things on your not-to-do list for you personally?

Michael Mauboussin: Oh, okay. So for me, the things that I try to avoid — so first of all, I just would say that this is by constitution. This is just the keys that were handed to me. I don’t have an addictive behavior, so that really helps a lot in things. So I’m not, for example, a big drinker, a very light social drinker. So I think alcohol itself is a huge thing. And I would not do more of it, so that’s one big one. Yeah, the other one is, it would be time allocation and just be how I’m spending time. So I just don’t — and in some ways it’s probably, again, it’s not good because I’m not up on all the things in pop culture and so forth.

Tim Ferriss: I think you’ll be fine.

Michael Mauboussin: Yeah, yeah. Well, I don’t know. Yeah, I hope so.

Tim Ferriss: Five kids. Any resources you’d recommend or thoughts on parenting that you’d like to share with people who are in earlier innings?

Michael Mauboussin: It’s interesting that people often ask this question, if you change your mind about anything. And the one thing — I don’t know that I completely changed my mind, but the one thing that I found to be really interesting, and to some degree liberating, is the work of Judith Rich Harris. Do you know Judith Rich Harris’ work?

Tim Ferriss: I don’t.

Michael Mauboussin: And her first book is called The Nurture Assumption. And then I think she wrote a second book called No Two Alike. She passed away a few years ago. But first of all, her background’s very interesting because she was sort of shunned from a PhD program in psychology but went on to write textbooks and was basically an independent scholar. One of the premises I think most people operate with is that parents are really important in how their children turn out. And I think what her work shows — and by the way, the fascinating literature in twin studies, twins separated at birth, I think, just demonstrates for the most part that obviously you need to put a roof over a kid’s head and love them and feed them and do all those good things. But for the most part, there’s a big chunk of nature that’s important in how people turn out.

So I think that, to me, was a really interesting one because when you have five kids under one roof, you have a lot of diversity in their own interests and capabilities and so on and so forth. And then the other one — I don’t know. This is probably totally pop, but I always like this book called Parent Effectiveness Training, PET. And what I liked about it was that the guy — I hope I’m presenting this accurately, but the author argues, you should think about — and this is not when they’re babies, but when they grow up and they’re a little older. He argued like, “When are the problems your problems, and when are the problems the kid’s problems?”

By the way, this is a little bit of the Jonathan Haidt work incidentally, sort of free-range kid thing, right? So, “When is it your problem? When is it the kid’s problem?” If it’s your problem, then you say to the kid, “Listen, I need you to help me to get this thing solved.” Right? So that you’re asking for the help from the kid, but it’s your problem. If it’s the kid’s problem, the natural inclination of parents is to solve the problem because you know how to do it rather than let the kids solve the problem for themselves. And you might say something like, “This is your problem. I need you to solve it. I’m here to help you if you’d like help.” But you can do this with a kid, with a five-year-old, right, “I’m here to help you if you need help, but I want you to figure this out on your own.”

So this idea of, again, taking the initiative, thinking about alternatives for yourself as a little kid even, and I just always like that distinction. So I thought that’s another — and there’s a lot of other stuff in the book, but that, to me, is one idea I’ve always thought about. Is this my problem or is this the kid’s problem? If it’s the kid’s problem, let me make sure that I let him or her solve the problem, with me there to help if they need it, but only playing that secondary role.

Tim Ferriss: Well, thank you for answering that. As much as you dislike talking about yourself, I may force you to do it one or two more times. Complex adaptive systems, I’m curious to know, outside of business and investing, how being exposed to the Santa Fe Institute and/or complex adaptive systems and learning more about such systems has just changed how you look at the world or experience the world.

Michael Mauboussin: Unbelievable. Totally changes your point of view on everything, I think. It makes you much more circumspect, right? I think you recently were talking to Jonathan Haidt, and I think he kept talking about this basic concept of when you’re messing with a complex system. I think the big point, and there’s a chapter about this in Think Twice, I think we used the example of Yellowstone National Park, is that it’s very difficult to manage a complex adaptive system, right? So in other words, the outcomes don’t always correspond with the size of the perturbation, which is really hard for people. So classic examples, ecosystems, economies, all the climate issues, these are all complex adaptive systems, and they’re just very difficult to think through and manage and even to model to some degree. So yeah, I think once you have that framework in your mind — and by the way, for me, just professionally, this is, I think, the best way to think about markets specifically and why, again, markets are hard to beat, which they are, but why they periodically go haywire, which they do.

So I think it’s just a lens through which to see a lot of different things in life in a way that’s, I think, more representative. Now I think the downside is that the recognition, I think it makes you feel like you have less control, which is, I think, fundamentally correct. But I think that’s in some ways liberating. And so as you look at these systems, and you try to improve the world, the question is always — and people always talk about unintended consequences, but if I have an intervention, try to engineer an intervention, what is the unintended consequence? Can I think about those things? And understanding if you’re messing with a complex adaptive system that’s a non-linear system, it’s just going to happen.

Tim Ferriss: And so you find that freeing because you’re not grasping too tightly to the mistaken belief that you can control these things.

Michael Mauboussin: Precisely. Yeah, precisely, precisely. And the interesting thing is — I don’t know. It’s interesting if you’re trying to understand a system, is it useful to be able to describe it in terms that seem to be reflective of what’s going on? And I think that’s probably right. So I think this is just a better description of a lot of systems that we deal with, and we don’t necessarily describe things properly. So I think this helps in that regard too.

Tim Ferriss: Well, I’m ready to dig in after being waterboarded with all the value of studying complex adaptive systems from Dr. Bill Gurley and yourself. I’m sold. So I need to — well, I don’t know. If I wanted to read about complex adaptive systems, and this may not be — let’s see, I’m trying to find the title of the book, Complexity, that came up earlier. Is that the place to start, or are there other books you would suggest starting with?

Michael Mauboussin: There’s a book by Melanie Mitchell, who is now a resident faculty member at the Santa Fe Institute, called Complexity: A Guided Tour. And it’s more than just the narrow concept of complex adaptive systems, but there’s tons of stuff in there that’s absolutely fascinating. And I would just say every thinking person, certainly scientifically literate person, if you can grasp the ideas in Melanie Mitchell’s book Complexity, that would be a great start. There are a lot of resources on the Santa Fe Institute’s website as well, so I would check that out. But if you really just go to a bookstore or go to Amazon and type in “complexity” or “complex adaptive systems,” a bunch of stuff will show up. Melanie, by the way, one of the people she worked with was John Holland, who may have coined the term complex adaptive system. John Holland has also passed away, but he was another titan in this whole area as well.

Tim Ferriss: So I wanted to perhaps wrap with the question of the metaphorical billboard. And I’ll put this in context, and if this is a dead end, I’ll take the blame for it. And the question is, I suppose, pretty simple. It’s simple to ask, which is, if you had a billboard on which you could put anything, metaphorically speaking, to get a message, a quote, a question, an image, anything out to billions of people, ideally noncommercial, what might you put on that billboard? Does anything come to mind?

Michael Mauboussin: Yeah. I mean, one thing that comes to mind is there’s a quote from Phil Tetlock and Dan Gardner’s book, Superforecasting. There’s just a quote, which I love, and I find myself repeating it often. And it says, “Beliefs are hypotheses to be tested, not treasures to be protected.”

Tim Ferriss: Oh, I love that. That’s great.

Michael Mauboussin: “Beliefs are hypotheses to be tested, not treasures to be protected.” Now we all have treasured beliefs, right, to some degree. But to the degree to which we can really, like you said, sort of have a light touch on our beliefs, a light hold on our beliefs, I think that’s a great way to try to go through the world. So I love that, and I think that would be great for people. Can I mention another one?

Tim Ferriss: Of course.

Michael Mauboussin: Again, this might be too wordy, but there’s a really interesting book called The Psychology of Intelligence Analysis. Do you know that? You probably do know that book by Richards Heuer.

Tim Ferriss: I don’t.

Michael Mauboussin: Okay, so it’s called The Psychology of — 

Tim Ferriss: I shan’t tell a lie. I do not.

Michael Mauboussin: It’s called The Psychology of Intelligence Analysis, and the author is Richards Heuer. And there’s actually a PDF of it, which you can get through the CIA website. So if you just Google Heuer, H-E-U-E-R, the Psychology of Intelligence Analysis, and then CIA, it’ll pop up somewhere. And I’m going to paraphrase this. It’s toward the beginning of the book, but I always love this too, and it goes along these lines. He says, “Analysts who know the most about a situation have the most to unlearn when the world changes.” It’s kind of a related theme, right? And Phil Tetlock talked a lot about this in Expert Political Judgment. You’re an expert on the Cold War, and you’ve been studying US-Soviet relations for a long time, and all of a sudden the Berlin Wall comes down, and there’s a whole new reality. What are you going to do, right? You have to unlearn all the stuff that you know and start anew, and that’s just really difficult for people to do.

So one of my takeaways from that latter one, and why I think it’s interesting, is that it shows why the beginner’s mind is so important. And as I always like to say in organizations, what’s bad about young people is they don’t know anything. And what’s good about young people is they don’t know anything, right? So this idea, can we have people around us? Can we surround ourselves with people who are willing to ask the naive question, who are willing to have fresh eyes, who are willing to not carry around baggage, perhaps? And that baggage may have been very helpful for us at some juncture. But to not have that baggage, and so that, to me, would be another — this idea of trying to avoid this situation where you fail to unlearn based on your past.

Tim Ferriss: What a great place to begin to wind to a close. So, Michael, people can find you on Twitter, @mjmauboussin. That’s M-A-U-B-O-U-S-S-I-N. Your website as well. Where should people start with respect to your books? If they’re a layperson, not focused on investing, but would like to become better thinkers, where would you suggest they start? And then for professional investors who are new to your work, or people who would like to become better investors, where should they start?

Michael Mauboussin: For decision making, the more fun ones would be Think Twice and The Success Equation, which is specifically about the topic of luck and skill. So there is investing stuff in there, but there’s a lot of sports stuff and business stuff. Now what’s interesting, I’ll tell you, these are all these little backstories. So I wrote Think Twice, and I had a chapter about luck. There’s a chapter about luck and skill toward the end, but I had it as chapter two. And my editor goes, “Oh. Yeah, I don’t know. This seems a little bit boring. No one’s going to really care about this. You could keep it, but put it at the end. Nobody reads a whole book.” Okay, so I put it at the end, and then I sent out the book, and a bunch of friends contacted me, people I like, and of course, they’re going to say nice things, but there’s a bunch of people — a bunch of people said to me, “That was fine. It was all good. But boy, that luck and skill stuff, I wish you had talked more about that.”

So I was like, “Oh, man, I knew that was good,” right, because of the Big Brown story. We did lead the book with the Big Brown story. And so that got me thinking more about what can we do with that luck and skill topic. Now, Nassim Taleb wrote a book called Fooled by Randomness in 2001. And I have to say Michael Lewis’ book Moneyball — I was a lacrosse player. I played lacrosse at college, so I was never a big baseball guy, but I read that book. In the hands of Michael Lewis, that topic was absolutely fascinating and got me very excited about thinking more about sports analytics. So I delved into that community a little bit, and there’s a ton because it’s obviously a very — there’s a lot of data and more constrained systems. There’s a lot those guys can talk about with luck and skill. So that ended up being The Success Equation, which is a book about luck and skill.

So broader decision making, Think Twice. If you’re interested in luck and skill, particularly as a topic, The Success Equation. I already mentioned More Than You Know, which is the greatest hits of the Consilient Observer; that one is going to be — it’s a little bit all over. It was the most commercially successful of my books, but it’s a little bit all over the place. So if you like just picking up a book and reading a random chapter without a lot of structure, because I end up putting it just in sections, that’s the book to read. So these are 1,500-word chapters. You’re not going to get bogged down by anything.

Tim Ferriss: Could you repeat the title one more time?

Michael Mauboussin: Yeah, More Than You Know.

Tim Ferriss: More Than You Know. Got it, yep.

Michael Mauboussin: Yep. And then the final one is Expectations Investing. There are two versions of it. The first was published — by the way, think about this, Tim. This is amazing. We signed the contract for this book in 1999, right? The stock market’s roaring. The economy’s doing great. The book came out September 9th, no, September 10th, 2001.

Tim Ferriss: No. Oh, boy.

Michael Mauboussin: The day before a national tragedy, which happened to be in the middle of a three-year bear market in the stock market, so its timing was horrible. Anyway, it’s Expectations Investing. And so we did another version of it 20 years later, and that came out in the fall of 2021, so Expectations Investing, revised. And so if you’re a serious investor, and by the way, there’s a website that goes with that, which also includes a bunch of downloadable Excel tutorials. So it brings those ideas to life. I already mentioned Al Rappaport, Creating Shareholder Value. He’s an extraordinary guy. He’s now in his 90s. I talk to him very frequently. He’s fabulous. He’s working on multiple projects. His mind is going great. And so it was an absolute — it’s been, throughout my career, knowing him for 30-plus years, a complete delight working with him. But it’s not stopped. It’s been so much fun. So I’ll just say that’s for the serious investor.

Tim Ferriss: Yeah. I mean, what a gift and what a turn of fate to be introduced to his work and have it impact you so deeply when you were just getting started, and then to get to the point where you’re collaborating. That’s just incredible, just wonderful.

Michael Mauboussin: Well, I’ll just say, Tim, it’s interesting that I — you’re absolutely right. And again, as we know, we all have our lucky turns. I met him first in May 1991, and I got to remember — somebody called me up and they’re like, “Professor Rappaport’s going to be in New York. He has 20 minutes to meet you. You could bow.” And I was like, “Oh, my God.” Right? It’s like I’m not worthy kind of scene. And I hit it off with him, and he invited me to join some executive programs at Kellogg in the early 1990s. So I was a young guy. I was in my 20s, and I think it was kind of risky for him to do that.

And that’s where that relationship got going. So yeah, it’s been incredible. And by the way, as someone who’s tried to teach — I’m in my 31st year at Columbia Business School. For someone who tries to teach, it’s extraordinary because if I need to learn how to explain something, all I need to do is call him up, and we talk it through. And he’s just such a brilliant teacher. He sort of helps me understand what I’m talking about and get to the right place. Anyway, these are the kinds of relationships that are so valuable and gratifying.

Tim Ferriss: Yeah. What a beautiful relationship. Michael, this has been a lot of fun. I’ve taken copious notes. I have a lot to read, and I really appreciate you taking the time. Is there anything that you would like to add? Any closing comments, requests of the audience, complaints that you’d like to lodge formally, anything at all that comes to mind that you’d like to say?

Michael Mauboussin: No. I think we covered a lot of terrain, Tim, and I really appreciate — first of all, I want to say how much of a fan of yours I am and how much I’ve learned from your podcast over the years. And there’s so many things I admire about what you do in particular that I sometimes feel I am trying to contribute in a way to help people think better and work better, especially in the domain of investing. But I really appreciate that you’re actually the guy doing stuff all the time. So I’ll just say that as a point of admiration.

And yeah, I just think that the one — yeah, I’d leave this as sort of the final thing I say to my students, but this idea of recognizing that if you’re in a domain that’s largely cognitive, what would you do to try to improve your performance? And we talked about sleep and exercise and diet, but I just think that these are things people should take really seriously to be really good performers. So that’s what I probably would leave with, is just to make sure that people, to the best of their ability — you have kids running around. Things happen in life. I totally get it. But your ability to do those kinds of things, that’s awesome.

Tim Ferriss: Yeah, absolutely. Well, thank you, Michael, for saying that and for being so game to cover so much terrain. And to everybody listening, we’ll have links to everything we discussed in the show notes as usual. And if you can’t spell Mauboussin, then just type in Michael, and chances are you’ll find Michael right away. And until next time, thanks for tuning in. Be just a bit kinder than necessary to other people and to yourself. And remember, if you are doing a lot of work cognitively, you’re a cognitive athlete, and your brain is not separate from the rest of your body. So you need to mind your Ps and Qs when it comes to the basics, the fundamentals, and that includes sleep. So thank you for listening, everyone, and thank you, Michael. And until next time, this is Tim Ferriss signing off.

The Tim Ferriss Show is one of the most popular podcasts in the world with more than 900 million downloads. It has been selected for "Best of Apple Podcasts" three times, it is often the #1 interview podcast across all of Apple Podcasts, and it's been ranked #1 out of 400,000+ podcasts on many occasions. To listen to any of the past episodes for free, check out this page.
