Business Books & Co.
A monthly in-depth discussion of a popular business book.
23 days ago

[S5E2] Thinking, Fast and Slow by Daniel Kahneman

Overcoming Cognitive Biases for Better Decision Making

Transcript
David Kopec

Thinking, Fast and Slow is one of the most influential books ever published in behavioral economics. In this episode we break down its key insights and discuss how they apply to areas as diverse as advertising, risk management, stock trading, and career choices. Join us as we discuss this complex yet relatable work. Welcome to Business Books and Company. Every month we read great business books and explore how they can help us navigate our careers. Read along with us so you can become a stronger leader within your company or a more adept entrepreneur. This month we read the 2011 book Thinking, Fast and Slow by Daniel Kahneman. This book was selected by you, our listeners, based on a listener poll, so thank you for participating. Kahneman was an award-winning psychologist who made significant contributions to the field of behavioral economics, which ultimately won him the Nobel Prize. In Thinking, Fast and Slow, he recounts his research journey and makes digestible some of the key insights that may help readers improve their decision making. Kahneman's book introduced many now-common terms into popular culture and helped illuminate the cognitive biases that we all face, both at work and in our personal lives. Instead of tediously covering every facet of this long and sophisticated work, in this episode we will delve into the ideas that we found most interesting and that we think you can apply to your work and financial lives. But before we get to the book, let's introduce ourselves.

David Short

I'm David Short. I'm a product manager.

Kevin Hudak

I'm Kevin Hudak, Chief Research Officer at a Washington, D.C.-based commercial real estate research and advisory firm.

David Kopec

And I'm David Kopec. I'm an associate professor of computer science at a teaching college. So let's start with the author, like we often do. Who is Daniel Kahneman? Or, we should say, who was he, actually?

David Short

So Kahneman was an Israeli American psychologist who, as you mentioned, won the Nobel Prize in Economics for his work on behavioral economics. He actually shared that prize. He was born in Tel Aviv in 1934 and spent his childhood in Nazi-occupied Paris. His father was rounded up in one of the first phases of the Holocaust in France, but was luckily able to get out of Nazi control through some work from his boss. After that, the family was on the run. Unfortunately, his father did die during the Holocaust, but from illness, not at the hands of the Nazis themselves, although I'm sure the lifestyle they were forced into may have made it more likely that he suffered that illness. The family eventually returned to British Mandatory Palestine, as it was known at the time, shortly before the formation of Israel, and so he then spent much of the rest of his life in Israel. He received a B.S. in psychology with a minor in mathematics from Hebrew University. He served in the IDF before earning his PhD in psychology at Berkeley. He had a prolific academic career across a number of institutions, including the University of Michigan, the University of British Columbia, Harvard, Cambridge, and Princeton, where he spent the bulk of his career. His work crossed fields from cognitive psychology to behavioral economics, expected utility, and judgment and decision making, which makes up much of this book. He was also a founding partner of TGG Group, a consultancy, and he passed away in 2024.

David Kopec

We just missed having him on the show. May he rest in peace. So let's get right into the core of the book. I don't want to beat around the bush; this is the most important topic in the whole book. Kevin, can you tell us a little bit about what System 1 thinking and System 2 thinking are, and how they differ? This is the core concept of the first part of the book, and it's a theme throughout the whole book.

Kevin Hudak

Yeah, sure, Kopec. So Kahneman ascribes the names System 1 and System 2, the labels, to deeper functions or processes of the brain. Essentially, System 1 and System 2 are different ways in which we systematically relate to the world. He even gives them personalities, right? System 1 is the fast thinking: reactionary, intuitive, but not always correct. According to Kahneman, it operates automatically and quickly, with little or no effort and no sense of voluntary control. System 2 is when we bring attention to something, like mental math or complex computations. It's more logical thinking, associated with agency, choice, and concentration. When we're engaging our System 2, he says, our pupils are dilating; we're absorbing and processing information. When we think of our sense of self, that's System 2. But as a result, it can often be lazy and slower moving. Hence, where System 1's reaction is instant, System 2 can be sort of the apologist that gives meaning or rationality to some of the actions of System 1. But that's not all there is. System 1 can often be wrong, but it's our fundamental evolutionary trust of stereotypes and mistrust of novelty. You know, imagine we were trained to see open or aggressive eyes in the dark as a risk and run: System 1 creates that immediate and involuntary reaction. But it also leads us astray when we answer the question, how many animals of each type did Moses take on board the ark? Right? You would typically say two, but it wasn't Moses, it was Noah who had the ark. It helps us complete sentences: hear "war" and we fill in "peace." System 2 steps in when we actually have to do more complex mathematical problems, when we have to do things like deliberately increase the pace at which we're walking, or when we have to consider the meaning of one sentence versus another.
If we look at some examples in the book, right at the beginning there's a picture of a woman who looks like she's on the verge of shouting, right there on the page. System 1 allows us to instantaneously recognize the anger in her face. System 2 would sit back and think: well, why is she angry? And is that really an angry face she's displaying to us? He also gives the example of System 2 recognizing a sound and understanding what the sound is, whereas System 1 can only help us identify the location of the sound. And I'm referencing this again, but I thought an interesting example of System 1 versus System 2 was System 2 helping us sustain a faster-than-normal walking rate. He details that this is actually System 2: even though it sounds fairly simple, there's thought, computation, and intentionality behind it. He actually cites how, when you're trying to walk faster than your average pace, there's a decline in your ability to concentrate on other items or even solve math problems. So really it's all about this contrast, and they're not literally people within our brain. He basically takes these labels and uses them throughout the book to describe the thinking fast and the thinking slow: System 1 is the thinking fast, System 2 is the thinking slow.

David Kopec

Thanks for that, Kevin. I think that was a really clear explanation. So a lot of the book is about the problems with System 1 thinking. And an important component of understanding System 1 is understanding how the thinking that comes out of System 1 gets developed in your own brain. So, David, can you tell us a little bit about that? How is someone's System 1 developed?

David Short

Yeah, so System 1, that fast thinking, is developed primarily from repeated exposure to experiences and patterns. So it really just comes from your life experience. You see certain things and hear them repeatedly, and they become ingrained and somewhat automatic. So the brain can very quickly form these intuitive judgments and responses from the heuristics that it builds over time from real-world experiences. It's kind of learning by doing, and then relying on the subconscious to move forward.

David Kopec

And that actually sounds kind of good. It sounds good to have this automatic way to quickly get a response and be able to answer a question or decide what to do next. So what's the problem with that? What's wrong with relying on System 1?

Kevin Hudak

Well, I love how Short mentions heuristics, because as Kahneman points out in the book, "heuristic" shares its origin with "eureka," right? That eureka moment. And what Kahneman points out in his book is a number of different concepts that allow us to understand how System 1, while not necessarily detrimental overall, or at least not intentionally so, does have some flaws. In particular, if you're relying too much on your System 1 and not deploying that System 2, you'll miss a lot of the facts, a lot of the evidence, that should otherwise inform your decision making. Because System 1 is so fast-acting and so involuntary, Kahneman brings up a number of biases that we should be aware of and that we should deploy our System 2 to take a second look at in everyday life. So, for example, the first one is the availability bias, right? The things that come to mind more easily, we tend to believe are more important. Think about public policymaking today and the shifting zeitgeist. It's very easy to latch onto certain concepts and believe they're more prevalent than they are because they're covered in the news so much. For example, think about the greater attention to airline incidents this year because of some of the tragic losses that we've experienced. I've seen folks on social media posting so much more about their fears of flight, pictures from the tarmac, even though it's still statistically much safer to fly than it is to even drive to the airport. The other biases he brings up include the anchoring bias, when we take irrelevant numbers into account when making estimates or forecasts. As a past fundraiser in college and for different nonprofits I volunteered with, I really liked his suggestion and his anecdote around the suggested donations that you can include in your outreach letters to supporters, funders, and philanthropists.
When you present a range of donations that's artificially low, those donors might anchor to it, and so too will their donations. But when you include a high anchor, even if it seems a little bit unrealistic (he presents $400 in a fundraising letter), the responses then inflate. I think he said that when he included a $400 upper anchor as a suggested donation, the responses averaged something like $143. Another habit he brings up is substitution: replacing a complex question with an easy one. That's a very System 1 thing to do. The idea is that if we're asked a complex question, we instead respond to something that's easier for us to grasp, articulate, and explain. He provides the example of an investor who was asked whether or not to invest in Ford stock. Rather than go into the dynamics of Ford as a company and its fundamentals, in his mind he substituted the question: do I like Ford cars? Or he goes to dolphins, which are some of my favorite animals. He mentions that when folks are asked about saving an endangered species, the question becomes "do I like dolphins?" as opposed to "are they an endangered species worth saving, and are there ways in which we can particularly direct our assistance to help save them?" And then I'll end on just one factor that he brings into the discussion around System 1 versus System 2, and this is the idea of statistical base rates versus causal base rates. He defines statistical base rates as facts about a population to which one case or one problem might belong, but which may not be directly related: the statistical facts and evidence that may be part of a problem statement or part of a conversation. Causal base rates are more based on stereotypes or on the qualitative details around an individual case or problem.
With causal base rates, sometimes our System 1 steps in and concocts stories and illustrations and examples that aren't actually real or proven. What Kahneman says is that we tend to underweight, or even neglect, the statistical base rates, those statistical facts, while we tend to overemphasize the qualitative causal base rates. The example that he gives is the cab problem, in which a hit-and-run accident is caused by a cab of a certain color. 85% of that city's cabs are green and 15% are blue. The statistical base rate there is obvious, right? The green cabs are likely to cause about 85% of the accidents as well. But a witness in this case has testified, with 80% accuracy, that the cab that caused the hit-and-run was blue. Those quizzed tend to lend much higher probability to the witness and the blue assessment, even though there's the statistical fact that green cabs are more prevalent. And then the causal, qualitative, more stereotypical element is presented in the book with another scenario: the two companies operate the same number of cabs, but green cabs are involved in 85% of the accidents. So folks switch their thinking here, stereotyping green cabs as the culprits, as the bad drivers, even when the witness says with 80% certainty that it was a blue cab. So we have to unpack these biases. We have to understand the difference between statistical base rates and causal base rates, the idea of association, the idea of availability, and notice when we are joining things together that shouldn't be joined, as we break apart System 1 versus System 2 thinking and actually solve problems and make decisions.
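As an aside for readers following along, the arithmetic behind the cab problem can be checked with a short Bayes' rule calculation. This is just a sketch using the 85/15 split and 80% witness accuracy quoted above:

```python
# Bayes' rule for the cab problem: how likely is it the cab was really
# blue, given that the witness says it was blue?
prior_blue = 0.15         # 15% of the city's cabs are blue (base rate)
prior_green = 0.85        # 85% are green
witness_accuracy = 0.80   # the witness identifies colors correctly 80% of the time

# The witness says "blue" when either a blue cab is correctly identified
# or a green cab is misidentified as blue.
p_says_blue = prior_blue * witness_accuracy + prior_green * (1 - witness_accuracy)
posterior_blue = (prior_blue * witness_accuracy) / p_says_blue

print(f"P(cab was blue | witness says blue) = {posterior_blue:.2f}")  # about 0.41
```

So even with a credible witness, the cab is still more likely to have been green; the base rate dominates, which is exactly what respondents' System 1 overlooks.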

David Kopec

Thanks, Kevin. And that actually foreshadows a lot of the rest of our discussion, so thanks for getting into all those details. David, tell us a little bit about why we use System 1 so much, even though it has all these problems that Kevin pointed out from Kahneman.

David Short

So I think I'll be a little bit more brief there. It just is easy, right? It's fast, it's automatic, it's somewhat unconscious, so to some degree you aren't making an active decision. It is just the way that the brain works. But as Kevin alluded to earlier, it's a lower cognitive load. It doesn't require as much effort, or energy frankly, to engage in. And when you can do those things in an automated way, it frees up your energy to be focused on other kinds of things. If you're walking quickly, then you're going to need that automated response if someone throws something at you. So, yeah, it really just comes down to being more efficient and quicker. And it's not perfect, but it's good enough that the balance ends up working out.

David Kopec

And, you know, an interesting thing is that some people just have really good System 1s. I'll give the example of my dad. My dad was an international chess master. This was actually the last book he read before he died nine years ago, so I was reading his copy. It was kind of a weird experience, you know. It was our listeners who wanted to do this book, and I had a little bit of trepidation, because I was like, what am I going to find as I'm reading through it? And there was a part of the book where he wrote "David" and highlighted several paragraphs, so it was a little strange experience for me. But anyway, he was an international chess master. And Kahneman points out in the book that chess masters are able to look at a position and, without doing all the calculation, without looking several moves into the future the way they're able to, right away use some powerful heuristics in their System 1 to instantly tell you things about the position and have an idea of which moves they should do the calculation on and which ones they shouldn't. And my dad was incredible at that. He instantly could look at a position and tell you all kinds of things about it without doing all the calculation. He also was a professor for thirty-something years. And a thing he would do is, he would get to know a student over the course of the semester, right? Then towards the end of the semester, when they had to do some big project or paper, he would say, this student is going to get a B minus. And when he then went through their work, they ended up getting a B minus. After all that experience of teaching for so long, he could get to know a student well enough to tell how they would do on some specific kind of project or paper. So his System 1 was very good, and it helped him in both of those endeavors.
I'll admit that he sometimes wouldn't completely read the papers; he would skim some of them when he had a very strong System 1 sense of how it was going to go, which isn't good. I never do that. But it really did help him be efficient in both chess and in grading. I'll just add to our discussion that Kahneman says one of the reasons we rely on System 1 so much is that our brain is naturally lazy and wants to expend as little energy as possible. And obviously System 1 uses less energy than System 2, so we have this natural inclination towards System 1 in almost all activities that we do, and we have to actually push ourselves to use System 2. Okay, moving to a related but slightly different topic: there's this concept in the book of the law of small numbers. Kevin, can you tell us a bit about that?

David Short

Sure.

Kevin Hudak

This is another bias that Kahneman brings up. It's really our exaggerated faith in, or reliance on, small samples. The truth is that smaller samples tend to demonstrate more randomness, and small changes within a small sample will create what looks like large variance. Kahneman demonstrated the propensity of even social scientists and psychologists like himself to trust their judgment too often when setting sample sizes, which results in some extremes. And we'll have a brief discussion later on the validity of, and some of the biases inherent in, Kahneman's approach as well. But basically, without a direct doubt presented to it, System 1 wants to assume that the findings of a small sample size are true. Our System 1 will create stories or illustrations around it that "prove" the result. It's in those instances that we need to deploy our System 2 and think about the biases in the sample, the sources of variance, and really what the difference is between a sample size of 150 and 3,000. And I found this super applicable to the work that I do as a researcher, and to explaining the importance of sample sizes to some of my clients. There's a fact that at about a thousand respondents, you start getting diminishing returns as you increase the number of respondents in a survey or study. But what Kahneman is telling us is that we have to beware of that lazy System 1, as Kopec mentioned earlier, letting us fall into the trap of small samples and then trying to apply those to a much larger population.
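A quick way to see the law of small numbers in action is to simulate repeated surveys at two sample sizes. This is a sketch with made-up parameters (a true 50% yes-rate and 1,000 simulated surveys per size):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def sample_mean(n):
    # Each respondent answers yes (1) or no (0) with a true rate of 50%.
    return sum(random.random() < 0.5 for _ in range(n)) / n

# Run many surveys at each sample size and see how far the observed
# yes-rates stray from the true 50%.
spreads = {}
for n in (150, 3000):
    means = [sample_mean(n) for _ in range(1000)]
    spreads[n] = max(means) - min(means)
    print(f"n={n}: observed yes-rates ranged over {spreads[n]:.3f}")
```

The small samples swing much more widely than the large ones, even though nothing real is varying, which is exactly the randomness System 1 is tempted to build a story around.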

David Kopec

Another interesting concept from the book is that of anchoring. We heard about anchoring before, David, in the book Never Split the Difference; that was anchoring in negotiations. I think it's similar, but not exactly the same as how it's used in this book. So can you tell us a little bit about anchoring from Kahneman's perspective?

David Short

Yeah. So the way Kahneman views anchoring is really around the idea that you will be influenced by even completely irrelevant numbers that you're prompted with. Essentially, if you're shown big numbers, you tend to think in big related numbers. If you're shown smaller numbers, you tend to get anchored by that smaller number and stick to a smaller one. To give a specific example, he ran a test where he asked people whether Gandhi was more than 114 years old when he died. And not surprisingly, but maybe a little bit surprisingly, the estimate people gave of his age at death was dramatically higher when it was asked that way than when they were asked whether Gandhi was more or less than 35 years old. So experiments are able to show that people's behavior really is influenced, much more than they may be aware, by the sort of irrelevant information that's presented to them when they're thinking about a number.

David Kopec

Thanks for that, David. Another term that's common in our language, but that I'm not sure everyone is familiar with, is the idea of regression to the mean. If you've never done any statistics, you might not know what this is. Kevin, can you tell us a little bit about regression to the mean?

Kevin Hudak

Sure. And this one is a little bit depressing when you think about it. Simply put, regression to the mean is Kahneman's take on the fact that extreme results, whether positive or negative, are often outliers, and a lot of times they're even luck, and that on repeated attempts or experiments, the outcome will tend to return closer to the average, closer to the mean. This becomes a problem when structures or systems start to believe that certain interventions are causing changes in performance. He gives the example of a drill instructor thinking that punishing poor performance works better than rewarding improvements in performance, when in reality everyone is just regressing to the mean. Also interesting is that the regression to the mean, or "regression to mediocrity" as Kahneman says, is directly proportional to the original deviation from it. So in a way, we're all sort of doomed to regress to the mean, or regress to mediocrity, as Kahneman puts it, in our personal lives and also in careers, in econometrics, et cetera. And there's obviously a lot more flavor, different results, different experiments, and things like that. I personally love the example he gave of store performance and building revenue forecasts. In this example, a business manager instructed employees to accept the economists' overall forecast that sales will increase by 10% and apply it to a handful of individual stores. Some of them were high-performing revenue, high-performing sales stores; others were lower performing relative to the rest. Our lazy System 1 brain would step in and tell us to just increase each store's numbers equally, by 10%, to build our forecast. But Kahneman suggests, and the data supports, that the outlier high-performing stores will likely regress to the mean, regress to mediocrity, while the low performers will rise.
So in this example, you essentially knock some of those stores down by a certain percent and bring others up, and the result is a bit more uniform. That's how you get to that 10% overall increase in a more reasonable and likely more correct way.
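The store-forecast adjustment Kevin describes can be sketched as a simple "shrinkage" toward the chain-wide trend. The store names, naive growth figures, and the 0.5 blending weight below are all invented for illustration; only the 10% overall growth figure comes from the example above:

```python
# Shrinking individual store forecasts toward the chain-wide trend.
overall_growth = 0.10  # the economists' overall forecast from the example
shrinkage = 0.5        # hypothetical weight on each store's own (noisy) trend

# Growth each store would show if its recent, partly lucky, performance
# simply continued (invented figures that average to 10%).
naive_growth = {"Store A": 0.25, "Store B": 0.10, "Store C": -0.05}

# Blend each store's own trend with the chain-wide forecast.
adjusted = {
    store: shrinkage * g + (1 - shrinkage) * overall_growth
    for store, g in naive_growth.items()
}

for store in naive_growth:
    print(f"{store}: naive {naive_growth[store]:+.0%} -> adjusted {adjusted[store]:+.0%}")
```

Because the naive figures average to 10%, the adjusted forecasts still hit the overall target, but the outlier stores are pulled toward the mean, which is the regression effect Kahneman describes.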

David Kopec

Thanks for that, Kevin. Another classic example of regression to the mean that Kahneman touches on is generational talent. I'll take my dad again; I'll make my dad the theme for me today. My dad was great at chess, right? I'm not. My dad was one of the top 10 players in the United States at one point. I'm barely a little better than average. Sometimes people don't expect that. They'll come to me and be like, oh, your dad was really good, you must be really good too. No. There was a regression to the mean. Genetically, I'm sure there was a higher probability that I would be really good at chess, but on average, I was still going to be much worse than he was. And that is true of any kind of great talent: if you look at somebody's children, it's very rare, actually, that the children are as good as the very talented parents. That doesn't happen much, and when you do hear about it, it's really something extraordinary, like the family where there have been three generations of professional baseball pitchers. Most of the time, you never hear about a father and son who are both professional baseball pitchers, because there was some regression to the mean. The son might have been more likely to be a little better than average, because those genes were pushing him up a little bit, but he wasn't likely to be extraordinary. Anyway, another interesting topic from the book, and it's a theme of several chapters, is the idea of whether we should trust experts or algorithms. And, I hate to make this a little bit about contemporary events, but there's been kind of a big discussion in society about how much we should trust experts. This book, of course, came well before that; it was written in 2011.
But Kahneman makes the point very clearly in the book, with many different kinds of examples, that algorithms usually do better than experts, provided those algorithms are well thought out, proven to work, and based on a good data set. And he even makes the point that simple algorithms often outperform experts. And I have a question for you, Kevin, because you do commercial real estate and a lot of research. If I took a random area that maybe you're familiar with but you're not an expert on, where you don't have a building that you've researched, and I asked you, how much is this apartment likely to go for? Or, sorry, you do commercial: how much is this storefront likely to go for? I bet you wouldn't do as well as if we just did a linear regression, which is probably one of the most classic statistical techniques, one that everyone learns in their first statistics class in college. So what do you think? If you were familiar with an area, but there was a store whose space we were interested in, and I wanted to know what it's likely going to go for on the market, would you do better, or would a linear regression do better?

Kevin Hudak

Yeah, so on the commercial real estate side, we do all manner of multifamily commercial real estate, offices, and retail as well. I would say that on some of those supply-side questions, what is available, what rents they're getting today, which regions or markets are hot, linear regression would serve its purpose well and probably be more accurate than asking renters, office decision makers, et cetera, to self-report and project some of those things. So taking in the secondary data and the analysis of market comps, rates, occupancy, absorption, et cetera, generally serves its purpose. What I tend to do, and what I try to achieve, is helping developers actually get there with more of the qualitative side of demand. So yes, a given submarket is looking prone to pop soon, but how would you actually take advantage of the market factors driving that with the programming of your property? What amenities are we going to be putting in? All this new demand coming in that we're showing in both the surveys and the secondary data, what are they looking for in terms of interior features, interior finishes, and units? And again, just the types of buildings and the types of products that we're offering. So while the linear regression and the data can show you what will be happening, the data that I try to provide will show you how to get there and how to be one of the haves versus the have-nots, the success stories versus the regression-to-the-mean stories of mediocrity. So I think there's a place for both. But I definitely agree that in many of these instances the simple algorithms will get you most of the way in terms of predicting demand and understanding where it's coming from. It's more the qualitative, the survey respondents, the focus groups that we do, that help you understand how to actually achieve that and reach the full potential of some of these assets.

David Kopec

But that's still a data-driven approach. I mean, you're doing qualitative data, but it's extensive qualitative data, right? And my understanding is that's actually one of the unique things about your firm. Not to do an advertisement for your firm, but you collect an incredible amount of data that tells the story better than just a simple linear regression.

Kevin Hudak

Correct. It's the qualitative and the quantitative that we combine pretty well. That's where we get to the idea of the sample sizes. I like to say that good data confirms 80% of what you know and gives you 20% expanded understanding, targeting what you know toward what can be, and really helps you cross that finish line, not to use a common trope there. But really, when it comes down to it, expert commentary is one thing, but having good, reliable data, both qualitative and quantitative, to back up these decisions just puts everyone in a better position to achieve premiums, to achieve faster lease-up and faster purchases. The rising research tide lifts all boats in that case. So appreciate it, Dave.

David Kopec

So let me put this another way, and sorry to dwell on this, but I guess the point I was trying to make is, let's not say you, but let's say a real estate agent, a real estate agent who's coming in to evaluate an apartment with no data going in. They know the neighborhood and they tour the apartment, but they know nothing else. I bet a linear regression would do better than them 80% of the time.

Kevin Hudak

Yeah, I could see that too. And going back to the Sam Zell book that we read, right, he was able to identify some of his macro strategies about student housing: buy up all the student housing in one block and then expand from there. He didn't necessarily have to come in knowing market intricacies. He did interview some of the current occupants and some of the businesses around there, but he was able to succeed without having CoStar or other sophisticated data providers around back then. So in that case, I think that linear regression, combined with some of the available data, can help that real estate agent make a fairly good decision. It's when you have more complex projects, when you're creating new communities ex nihilo, that it becomes more complex.
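For readers curious what the "simple algorithm" the hosts keep referring to might look like, it can be as small as a one-variable least-squares fit. This is a sketch; the comps (square footage and annual rent) are entirely made-up numbers:

```python
# One-variable least squares: predicted_rent = a * sqft + b.
# The comparable storefronts below are invented for illustration.
sqft = [800, 1200, 1500, 2000, 2600]
rent = [30000, 44000, 53000, 72000, 90000]  # hypothetical annual rents

n = len(sqft)
mean_x = sum(sqft) / n
mean_y = sum(rent) / n

# Closed-form slope and intercept of the best-fit line.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(sqft, rent)) / sum(
    (x - mean_x) ** 2 for x in sqft
)
b = mean_y - a * mean_x

# Estimate for a hypothetical 1,800 sq ft storefront.
predicted = a * 1800 + b
print(f"predicted annual rent: ${predicted:,.0f}")
```

Even a fit this crude gives a defensible starting estimate from the comps alone, which is Kahneman's point about simple formulas competing with expert intuition.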

David Kopec

Yeah, that makes sense. And there are a lot of these examples in the book. Another great example, one that I think a lot of people have heard about before, is that of mutual fund managers. There have been these studies that Kahneman cites in the book, which people have, I'm sure, read about in the newspaper, showing that a lot of mutual fund managers don't do better than throwing darts at a dartboard. So if I take a sector, say I'm going to buy a mutual fund in semiconductors, and I just randomly pick five semiconductor stocks, I'm probably going to do about as well on average as a mutual fund manager in semiconductors who spends all their time thinking about which semiconductor stocks to pick and picks the ones they think are best using their intuition, based on some data. They still do research, of course, and look at data. And that's also why index funds have become so popular. Almost nobody beats the market, right? The market, on average, does what it does, and if you randomly pick stocks, you're not that likely to do better than the market on average.

David Short

Obviously, there are some Intel loyalists on this podcast after the last episode.

David Kopec

Yes. And I do hold a lot of Intel stock.

Kevin Hudak

Well, Kopec, when it comes down to it, I think the experts-versus-algorithms concept, you know, Kahneman mentioned even with the mutual fund managers, how he had lectured them. He had analyzed all of their performance metrics over the course of hundreds of trades and many years, going back 10 or 18 years, I can't remember how long it was for some of them. And he presented to them exactly what you were saying, and there was some defensiveness; they somewhat dismissed his lectures in person. So he got some pushback on that. And I think what it comes down to is there are the known knowns, the known unknowns, and the unknown unknowns, and the experts seem to discount the latter two all too often. That contributes to his idea that optimism is good, but sometimes overconfidence is bad: the overconfident CEO, the overconfident professional, and the overconfident expert.

David Kopec

Yeah. And I don't think the theme is "don't listen to experts." I think the theme is that, again, simple algorithms, if there's a good data set, often do as well as or better than the experts. So if you're somebody who understands basic statistics, and I think everybody should, and you're making an important decision, you should look at the basic statistics and some basic algorithms based on those statistics as you make your decision, and not just rely on some expert who gives you advice, if you're capable of understanding. Now, there are some things you're not capable of, right? I can't go and understand all the papers on brain surgery and then give somebody good advice about brain surgery. But I'm certainly capable of understanding, based on my background, something about how the stock market works. And so I'll probably do as well with some data about what I'm selecting as I would if I just talked to one random expert who has some biases and is steering me in some very specific way. Anyway, let's change the topic; we've been on this for a long time. So Kahneman talks about different kinds of personalities and how they can lead to different outcomes. Specifically, let's talk a little bit about optimists and how Kahneman says that optimists drive the economy.

Kevin Hudak

Yeah, so he writes, and I referenced this earlier about optimism versus overconfidence, that optimists tend to take bigger risks. The optimism bias itself leads them to underestimate the odds they face. But they're also a bit more extroverted: they're better at soliciting investment from others, and they sometimes raise the morale of their employees. In the case of the small-business entrepreneurs that Kahneman cites, many fail, but many times they do in fact succeed. So as a result, he calls them the engines of capitalism. He says that the optimistic mindset encourages persistence and sustainability even in the face of obstacles. And again, it's when overconfidence rather than optimism drives someone that some of these organizations start to fail. So yes, in part, it is the optimists that drive the economy, according to Kahneman.

David Kopec

Yeah, that was interesting. And by the way, that was the part of the book that my dad highlighted. He wrote "David" next to the optimist section and highlighted it all. So another interesting thing in the book is about how to conduct an interview. And Kahneman actually has some very specific advice. Now, some people won't like this, but it comes from his experience in the Israeli military, where his job was to help figure out whether certain recruits were appropriate for certain roles. He came up with a system that they kept using for decades in the Israeli military, and then he turned that into a wider kind of, we could say, algorithm for hiring that can apply anywhere. And I actually want to read a little quote from that chapter. Bear with me; I think it will succinctly explain what he means. So here it goes. Suppose you need to hire a sales representative for your firm. If you are serious about hiring the best possible person for the job, this is what you should do. First, select a few traits that are prerequisites for success in this position, like technical proficiency, engaging personality, reliability, and so on. Don't overdo it; six dimensions is a good number. The traits you choose should be as independent as possible from each other, and you should feel that you can assess them reliably by asking a few factual questions. Next, make a list of those questions for each trait and think about how you will score it, say, on a one to five scale. You should have an idea of what you will call very weak or very strong. These preparations should take you half an hour or so, a small investment that can make a big difference in the quality of the people you hire. To avoid halo effects, you must collect the information on one trait at a time, scoring each before you move on to the next one. Do not skip around. 
To evaluate each candidate, add up the six scores. Because you are in charge of the final decision, you should not do a "close your eyes." Firmly resolve that you will hire the candidate whose final score is the highest, even if there is another whom you like better. Try to resist your wish to invent broken legs to change the ranking. A vast amount of research offers a promise: you are much more likely to find the best candidate if you use this procedure than if you do what people normally do in such situations, which is to go into the interview unprepared and make choices by an overall intuitive judgment. So he's saying that when you go into a hiring process, you should have some simple criteria that you rate every candidate on, and then just take the candidate whose total score, the sum of their scores on each of these criteria, is the highest, and that's it. And I will say, I've been through some hiring processes on the interviewing side, on a committee coming up with criteria, where this has actually worked very effectively. We basically listed four or five criteria that were essentially check marks; we didn't even do rating scales on all of them. We checked: does this person meet this criterion? We used that, though, for the first round, to decide who we were going to give phone interviews to. We didn't use it later in the process. And he's talking about using this even for the final judgment.
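Kahneman's procedure is mechanical enough to sketch in a few lines of code. This is just an illustration of the sum-the-scores rule he describes; the trait names and candidate ratings below are invented for the example:

```python
# Kahneman's structured-interview rule: score each trait independently
# on a 1-5 scale, then hire the candidate with the highest total score.
TRAITS = ["technical proficiency", "engaging personality", "reliability",
          "work ethic", "communication", "judgment"]  # six dimensions

def total_score(ratings: dict) -> int:
    """Sum the six per-trait scores; no intuitive override at the end."""
    assert set(ratings) == set(TRAITS), "score every trait, one at a time"
    assert all(1 <= r <= 5 for r in ratings.values())
    return sum(ratings.values())

candidates = {
    "A": dict(zip(TRAITS, [4, 3, 5, 4, 3, 4])),  # hypothetical ratings
    "B": dict(zip(TRAITS, [5, 5, 2, 3, 4, 3])),
}
best = max(candidates, key=lambda c: total_score(candidates[c]))
print(best, {c: total_score(r) for c, r in candidates.items()})
```

The `assert` lines encode the discipline Kahneman describes: every trait gets scored, one at a time, and the final decision is arithmetic rather than a last-minute intuitive override.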

David Short

Yeah, it was interesting as I was reading this, because my company has a very numeric system for candidate evaluation. I wonder if the person who implemented it read Kahneman, or one of the other authors who may have proposed similar things. But yeah, we do have a very formal process: you have to rate candidates 1 to 5 on different criteria, and those criteria depend on the particular role you're hiring for. Then you have to defend those ratings in a panel with the other people who evaluated the candidate, and ultimately come to a consensus.

David Kopec

But that is so much not how I've seen it happen, having been on, unfortunately, many interviewing committees. So often it's "I really like this person": people's feelings, people's intuition. But Kahneman says there are a lot of studies and a lot of data showing that that doesn't work out, that "I really like this person" does worse than "this person met some kind of cold criteria." There's no warmth in just giving everybody a rating on a scale, but apparently it works a lot better.

Kevin Hudak

Well, I think the main thing here is that if you were to close your eyes and think about this person working at your firm, applying your gut and your intuition, that's more of the System 1 thinking, right? That's the thinking fast. So what he's encouraging us to do here is set forth that rubric. It also actually makes you be introspective and think: what is my rubric for this position, and what does success look like? So I thought that was important too. The other thing I'll add to this point, Kopec, is that if you're listening to this and you're a young executive who's responsible for a hiring process, looking to hire an associate or a coworker, and you're setting forth this rubric for your colleagues: Kahneman actually gave, if you remember, his interns or research associates the chance to write down a little bit of that gut feeling at the end. I believe the first few times he tried this, he allowed a kind of fill-in for "close your eyes and think about what this person would be like at the firm; are they a good fit; what does your gut say?" So if you're thinking of change management, if you're going to implement this Kahneman style of interviewing and hiring at your own firm, make sure to make it a bit slower, more System 2. Give people an out where they can still express their overall thoughts on a candidate, so that it's not just the algorithm right away.

David Kopec

Yeah, that makes sense. Thanks for that, Kevin. I'd like to take a brief break from our book discussion to discuss how the three of us actually read the books we cover on this show. David, Kevin and I usually read on our Kindles or hard copies of the books, but sometimes that's just not convenient. I know in the past, all three of us have used the service from audible.com to listen to audiobooks. Sometimes you're on a business trip and you just don't have time to open up that hardcover. It's much more convenient to play an audiobook from audible.com in your car, or listen on headphones while you're on a plane or a train. And all of us have had a really convenient experience downloading and using books from Audible over the years. If you're interested in trying out audiobooks yourself, you can visit a special URL, audibletrial.com/biz. That's audibletrial.com/biz. If you go there, you'll have access to a 30-day free trial of Audible that includes credits to get a free audiobook of your choice. What can be better than that? Thank you to our friends at Audible for giving our listeners this wonderful offer. David, tell us a bit about loss aversion and how Kahneman says it should affect the way we take risks.

David Short

So loss aversion is a term that Kahneman and his frequent collaborator Amos Tversky coined in a 1979 paper that many of us have probably heard of. It refers to the cognitive bias that causes people to evaluate a loss more negatively than an equal gain. Given the exact same parameters, winning $10 does not feel as good as losing $10 hurts. And so that causes people, in general, not to take certain actions that would perhaps be net-present-value positive. He gives some examples involving specific financial gambles offered to people. I might not remember the details exactly, but I think it was something like: you can choose between a gamble, a 33% chance of $1,500, a 66% chance of $1,400, and a 1% chance of zero dollars, or a guaranteed $920. If you run the math, the gamble is worth about $1,419 in expectation. But the fear of that 1% chance of ending up with zero causes people to say, I'm going to take the $920, the much lower overall value, just to avoid the risk of being that unlucky 1% and ending up with nothing.
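Running the numbers on that gamble as David recalls it (the exact figures in the book may differ slightly):

```python
# Expected value of the risky option versus the sure thing
gamble = [(0.33, 1500), (0.66, 1400), (0.01, 0)]  # (probability, payoff)
ev = sum(p * v for p, v in gamble)
sure_thing = 920

print(ev)               # about 1419.0
print(ev - sure_thing)  # about 499.0: expected dollars given up for certainty
```

In expectation the gamble is worth roughly $500 more than the guaranteed $920, yet the 1% chance of walking away with nothing is enough to push most people toward the sure thing.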

David Kopec

Yeah, what I didn't like about those scenarios in the book is that sometimes the people going into them maybe don't have a lot of money, and to them, the thinking is: I don't want any risk at all that I end up with none of the money. And it actually makes a lot of sense, even with System 2 thinking, that you would take the sure thing. So I think there's a lot of bias here depending on the perspective of the person coming into it: how wealthy they are, and how their wealth might affect their response to these kinds of questions. And I don't know that that was accounted for, at least in his description.

David Short

I actually had a psychology professor demonstrate this to us in college, and he offered the exact same sort of bet to demonstrate it: he'd flip a coin, and you'd have to give him $10 if it landed on whatever he picked, but you'd get $15 if it didn't. And very few people in the class raised their hands as willing to take the bet. I was one of the ones who did, because I know the probability, and I was just like, okay, I'll risk $10 to potentially win $15 on even odds. But most of the students in the class were not interested. There are other examples that I don't know fit it exactly as well, but in a corporate training session where they were trying to demonstrate this a little bit, the guy just took $20 out and held it up. And he said, whoever grabs this gets it. And me and one other person were the only people who got up to get the money. It was literally free money, but people were thinking maybe he's not really going to do it, or it's just going to be embarrassing, or whatever. There were 20 people in the room, and only two of us actually went for the $20. The guy in the back actually ran; I just kind of walked, and I still got it because I was closer.

Kevin Hudak

Well, remember, there are two points to that too, Short. One, on the coin flip example, Kahneman has the example in the book of someone then offering to do that 200 more times, right? The $10 loss versus the $15 gain on the coin toss. And most humans will reject that 200-time offer, even though statistics show that doing it 200 times will most often leave you far, far ahead. And the second point, about getting up to grab the $20: there was a really cool experiment they conducted about helping someone who starts having a staged cardiac emergency while 20 folks see it happening. Looking from the outside, we all think that more people will jump up and try to grab the $20, or try to help that person, than actually do. And that's partly our System 1 thinking that others will take care of the problem, or that others will seize the opportunity. So there are some interesting examples in the book. I highly encourage everyone to read it to see some of those thought experiments.
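A quick simulation of that repeated bet (lose $10 on the wrong side of a fair coin, win $15 on the right side, played 200 times; the framing is from the book, but the code is just an illustrative sketch) shows why the broad frame says to take it:

```python
import random

random.seed(1)

def play_200() -> int:
    """Net result of taking the +$15 / -$10 fair coin flip 200 times."""
    return sum(15 if random.random() < 0.5 else -10 for _ in range(200))

results = [play_200() for _ in range(10_000)]
ahead = sum(r > 0 for r in results) / len(results)
print(f"mean outcome: ${sum(results) / len(results):.0f}")  # EV = 200 * 2.50 = $500
print(f"fraction of players who end up ahead: {ahead:.1%}")
```

Each flip is worth $2.50 in expectation, so 200 flips are worth about $500, and the chance of finishing behind is tiny. It's the narrow, one-flip-at-a-time frame that makes the bet feel unattractive.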

David Kopec

Yeah, that is interesting. Another interesting cognitive phenomenon is framing. Kevin, can you tell us a bit about framing and how it might matter in some business scenarios?

Kevin Hudak

So framing is the idea that the choices we make are frequently, and often indelibly, altered by the manner in which the choice and the supporting data are presented to us. This is a key component of Kahneman's prospect theory, in which we as humans are more sensitive to potential losses than to potential gains; it ties into that loss aversion we were just talking about. An example he provides for framing is a thought experiment where respondents were asked about lives saved as opposed to lives lost. We are acutely more reactive to the lives-lost framing than to the lives-saved one, even though the choice that mentions lives saved would likely result in fewer lives lost. Within framing, there's also this concept he brings up of narrow framing and broad framing. So, for example, an investment banker or trader focused on narrow framing might look at the daily or transaction-specific results of their work, and they might be dismayed, move their money, and trade more if they're down for the day. Whereas an investor looking at a broad frame of an otherwise stable account might not alter their strategy; they're not going to go out and trade even more and potentially sell at a loss, and therefore they're going to have longer-term success. And an even deeper core of prospect theory, Short mentioned this earlier, is the idea that we all prefer the sure-thing gain. But when it comes to losses, a dimension Short didn't mention before, we're more likely to gamble. For example, in a bet where we will either lose $750 for sure, or have a 90% likelihood of losing $1,250 and a 10% likelihood of losing nothing, many of us will take that gamble and, more than likely, end up losing even more than $750. And that's where we start getting into the sunk cost fallacy, which is partly a result of this framing.
It's important to keep in mind for businesses, or if you're on a project where you're allocating investments between a consistently losing project and one where you can cut your losses and take that investment into something with a greater chance of return. So always keep framing in mind. Personally, it's what we see in the political spectrum as well, when we talk about things like public assistance versus help for the poor and indigent, or pro-life versus pro-choice. Framing really gets to System 1 versus System 2 and the associated biases.
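The arithmetic behind that losses example, as Kevin describes it:

```python
# The sure loss versus the gamble people tend to prefer
sure_loss = -750
gamble_ev = 0.90 * (-1250) + 0.10 * 0  # 90% lose $1,250, 10% lose nothing

print(gamble_ev)              # -1125.0
print(gamble_ev - sure_loss)  # -375.0: the gamble is $375 worse in expectation
```

Taking the gamble costs $375 more in expectation than accepting the sure loss, which is exactly the mirror image of the sure-thing preference on the gains side.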

David Kopec

Thanks for that, Kevin. Okay. It's a long book, over 400 pages of dense studies and insights, and we did not promise you a complete summary of the entire book. We gave you some interesting facets, some of the main themes, and now it's time for us to talk about the book as a whole. And I want to start with its credibility. Did the two of you find this book credible? Did you feel the author backed up his assertions with research and evidence? Did you come out of the book feeling like this was somebody whose own cognitive biases came into the book? Or was this a fair arbiter who reported the science correctly to the lay reader?

Kevin Hudak

Yeah, I found it pretty credible. He extensively cited his own and others' papers. One thing I liked is that he often collaborated with folks who disagreed with him and showcased their work together in the book. He cited those who could be seen as competitors, if we were to simplify it that way, and that lent credibility and offered some alternative viewpoints, though obviously, as the author, Kahneman had the sole authority to support those views or debunk them in the narrative. I also thought he sometimes fell into this idea of a small sample size, but in many of those instances he recognized that and spoke about it in the book. One of my favorite anecdotes, in terms of his own self-reflection, was when he described the committee meetings surrounding the drafting of a textbook based on his and Amos's findings. It was a committee of, say, eight or ten scholars like themselves, and one of the things they did was put together some projections. He encouraged everyone to write down on a slip of paper their projection of how long it would take to complete the textbook, and then wrote them all on a chalkboard. And let's say they all coalesced around two years to write this textbook. Then he asked one of his colleagues, who was aware of other groups that had tried to write a textbook like this, what those groups had experienced. And even though it was hearsay, one scholar talking about the activities of another group, that person said, well, it's the outside view that we'll talk about, but it actually took eight years. And despite knowing that outside view, that it took other scholars eight years to write a similar textbook, the experts, Kahneman and his peers, still went forward thinking it was going to take two years. And I believe in the end it took seven years.
And he gladly shared that with the reader, it seemed, even though it was one of his professional failings, as an example of how even an expert can learn from this understanding of System 1 versus System 2 and all these biases that we deploy without even realizing it in project planning, program management, timelines, et cetera. So I thought he was very self-reflective. He collaborated with his competitors, scholars of opposing schools of thought, and I thought that lent credibility, in addition to the dozens of pages of notes and appendices in the book as well.

David Kopec

David, how did you feel about the credibility?

David Short

I thought it was generally credible, but to be honest, I feel like that's more of a System 1 opinion than a System 2 one. I did not actually read any of his academic papers. I didn't look at the citations to see the actual data. I didn't do any assessment of the statistics involved. It just kind of feels right. With the System 1, System 2 concept, it intuitively seems right that, yes, I do have these two different ways of dealing with things: I sometimes make very quick decisions, and I sometimes go through the effort of fully evaluating. But then there are the pieces on, for instance, how these different systems are formed, and whether there truly are these two systems. Maybe there are 17 systems and not just two. To be honest, I'm not enough of an expert to question the credibility, but some of it does feel like a concept that was developed in order to explain certain things, and then he did certain research that backs it up to some degree; there definitely seems to have been some expectation going into the design of many of the experiments. So not to fundamentally question the credibility, but psychological studies in general I have to take with a grain of salt.

David Kopec

Yeah.

Kevin Hudak

Short, now that you say it, I kind of agree with you on the idea that reading and digesting this book is a System 1 exercise. We're sort of fast-thinking, scrolling through it, reading it, and I like that idea. I also thought it was interesting, and I often on this podcast go back and think about the overall structure of a book relative to what it's trying to accomplish: one thing I did enjoy is that he presents a lot of different thought experiments for us, the readers, to answer. In that way, I thought he was encouraging us to dig a bit more into that System 2 as we were reading. He didn't just present a number of examples, in a Malcolm Gladwell-esque way, of others doing something. He actually had us going through some of the same tests and computations he was trying to outline in the book.

David Kopec

I did go and look at some critiques of the book that were published after it came out. And some of the more recent critiques alluded to the reproducibility crisis that's been affecting many sciences, but especially psychology, since the 2010s, so since a little after this book came out. Basically, it's been found that many psychological studies that were thought to be anchors of the field are not actually reproducible. In fact, more than a third of classic psychology studies, according to some papers that have re-run replication trials of those old studies, are not reproducible. And according to some of these critiques, a lot of the studies cited in the book turned out later on not to be reproducible, so many that it would be too many for me to list now. So you can't go into this book and hang your hat on any one chapter, because the chapters are all based on findings from research studies. It's very possible that you'll pick some specific chapter in this book whose underlying studies were later either not replicated or, in some cases, even shown to have the opposite effect, according to these critiques I read. So I think the overall ideas from the book are very interesting. I think System 1 versus System 2 is a really powerful concept that everyone can apply to their lives, and he shows so much evidence for it that it's hard to deny. But if you want to rely on something very specific, which we have done some of in this episode, I would not say it's for sure true without doing further research, given the reproducibility issues that have affected so many of the studies cited in the book.

Kevin Hudak

Kopec, you have experience with younger students at Champlain. I'm wondering, do you think it's a sign of the times that we have this replication crisis right now? That when we try to reproduce some of these thought experiments and social experiments, everyone is just too overstimulated by social media, that we've become impossible to predict or forecast, impossible to run thought experiments on, that there's just too much variability regardless of the sample size we're looking at?

David Kopec

That's a really interesting hypothesis. I'm not a psychologist, so I won't comment on that specifically, but I will comment on what I've read about the reproducibility crisis. I think a big part of it is actually that stringent standards weren't being used in the original studies. Some of the studies may have involved p-hacking, for those of you who know what that is, or had too small a sample size. There's a variety of reasons why the original studies weren't conducted properly in the first place, and I think that plays a big part in it. And I think the atmosphere in academia around research, where it's publish or perish, drives people to sometimes do p-hacking or to do things that are a little bit unethical with their data. I think that's much more common than the layperson might think. So I think that's a huge part of it. Whether the psychology of individual people has changed enough over the 70 or so years that there have been formal psychological studies, I can't really speak to.

David Short

Do you want to describe p-hacking just briefly for the audience, Kopec?

David Kopec

Sure. To put it in simple terms: you have a result and it's not statistically significant. So you feel like, you know what, this is not going to be notable for my career. Maybe I can get it published, because sometimes you can; it's still valuable to publish results showing that something is actually not significant. But I'm going to make a much bigger splash if I can show that this is significant, that I found something really exciting. And so when researchers don't find something initially, they'll try wrangling the data in different ways: finding some way of angling it, maybe filtering it, maybe purposely not using some of the results and having some way of explaining away why they didn't use that part of the data set, in order to find something significant. So you try squeezing the data in many different ways, maybe to the point where it gets unethical, where your bias toward wanting a significant result leads you to do things you know you shouldn't be doing. And you end up with something significant that you then write your paper around. That's basically what it is in a nutshell. Unfortunately, it's all too common, and I think it's a huge part, at least from my understanding, of the replication crisis.
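Here's a small simulation of the mechanism just described (a hypothetical illustration, not from the book). Every "study" below compares two groups drawn from the same distribution, so any significant difference is pure noise; but a researcher who tries 20 such analyses and reports only the hits will find a "significant" result most of the time:

```python
import random

random.seed(42)

def noise_study(n: int = 30) -> bool:
    """Compare two groups drawn from the SAME distribution.

    Returns True if the difference in means looks 'significant' at
    p < 0.05 (two-sided z-test with known variance 1), which by
    construction can only be a false positive.
    """
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    z = diff / (2 / n) ** 0.5  # diff ~ Normal(0, 2/n) under the null
    return abs(z) > 1.96

trials, k = 2000, 20  # 2,000 "papers", 20 analyses tried in each
hacked = sum(any(noise_study() for _ in range(k)) for _ in range(trials))
rate = hacked / trials

print(f"papers with at least one 'finding': {rate:.0%}")
print(f"analytic expectation 1 - 0.95^{k}: {1 - 0.95**k:.0%}")  # about 64%
```

Each individual analysis is honest, with a 5% false-positive rate; it's the selective search across many analyses that manufactures significance.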

Kevin Hudak

In my own research practice, I try to set appropriate expectations. I mentioned this before, but the one thing I repeat often is that good data confirms 80% of what you already know and offers you 20% of new targeting and varied feedback to adjust your program and your plans. It's when I start seeing, in some of these research studies and private research products, blockbuster findings that probably overreach that I become a bit suspicious. So great points, Dave Kopec.

David Kopec

So again, it's not to say that Kahneman is not credible. Of course, he's a Nobel Prize winner who has done amazing research. And he's not just citing his own studies, though some of even his own studies have not been replicated; he's also citing a lot of other people's studies, and unfortunately a significant number of those have not been replicated either. So I wouldn't take any individual finding in the book and say, wow, this is definitely true, without doing further research. Okay, let's also talk about how this book has affected the three of us. Is there anything you're going to change in your work life as a result of reading this book?

David Short

I don't know that there's anything I'm specifically planning on changing, but I do think that interview process I just talked about, I will plan on bringing to any future companies I may be at. I really do think having that metric-focused interview process strips away a lot of the bias toward someone who's just like you that can otherwise come through. So I think needing to rate candidates, having decided beforehand what areas you're going to rate them against, and then assigning people questions to evaluate those different things, is a very useful way to make for a fairer process, and probably better outcomes. I buy that the outcomes are better than just trusting your gut.

Kevin Hudak

And echoing Short: we had obviously picked up a lot of these concepts in school, through other books, including the Chris Voss book, and in training for our professions. However, some of the examples Kahneman brings up, the talk track, I find pretty useful. Reading this will certainly impact the language and the anecdotes I pull when talking to my clients about research, particularly when we're talking about sample sizes and the biases inherent in that System 1 thinking. I conduct dozens of in-depth interviews with customers of my clients, and with my clients' executive teams as well. And just like Short, I definitely intend on going back to some of Kahneman's chapters about effective modeling in qualitative interviews, to maybe even try modifying my approach: simplifying some of the questions and setting forth that rubric of six dimensions that I'm really looking to find good qualitative insight on, to make it even more actionable.

David Kopec

I was thinking about grading a lot while I was reading this book. And the truth is that, just like my dad, I have a biased System 1, where I sometimes think, you know what, this student, based on their prior work, is probably not going to do that well on this next thing. But I actually get surprised on a regular basis. So the book was a good reminder of something I think I'm already pretty good at, but it's important to have these reminders in life: not to be biased by students' prior work. Students do surprise you. Sometimes somebody does three bad projects and then does an incredible next project; that happens regularly. So this book was an important reminder to me to never become too biased by past results. Past results may give you a more likely indication of what the next results will be, but our job, as somebody who evaluates somebody else, is to give them the benefit of the doubt and always give their new work our full attention.

David Short

I actually had a professor in college, in a philosophy of law class, who was very focused on avoiding this bias. Every assignment, he would have us submit anonymously using our student IDs, so that he didn't know whose papers or tests he was evaluating. His assistants would print everything out identified only by the number, so in theory he didn't know. Maybe he did come to remember some of the numbers over time, but it's obviously much better than seeing the name, where you definitively know who it is you're evaluating.

Kevin Hudak

And an important point, too, just to echo both Short and Kopec, is this idea that when you are grading those papers, if there are three essay questions in a row, you should read the essay questions separately: grade everyone's first essay anonymously, then move on to the second essay. Don't take one exam of three essay questions and read them all together, because as Kahneman points out, in some exercises that bias will carry from the first question to the second and to the third. I thought that was an interesting way of looking at the world. And Kopec, I appreciate your character arc throughout this podcast and all that you've learned about grading.

David Kopec

Well, I said I was already unbiased and that this was just reminding me to stay unbiased. Anyway, anything else that we missed from the book that you want to talk about?

Kevin Hudak

I just wanted to go back to framing and the idea of the outside view. I thought it was a great phrase, a great concept, for something that we all recognize is important. I often bring this book up and look for business applications of it, but one of my favorite books is Max Brooks's World War Z. That's Mel Brooks's son, who wrote a book about a zombie outbreak throughout the world. It's very different from the Brad Pitt movie, so I highly recommend the book; I don't recommend the movie. In that book there's a mention of what they call the 10th man rule, which is an Israeli practice. In the book, and I'm not sure if this actually mirrors reality, after the surprise of the 1973 Yom Kippur War, Israel instituted this 10th man rule, which encourages dissent: when nine members of a group agree on something, the 10th man has to come in and play devil's advocate, asking, what if our groupthink is wrong? Because nobody thought the 1973 Yom Kippur War would happen; they could have used that 10th man then. And as a result of adopting this, in the book Israel is actually one of the leaders in responding to the zombie outbreak, because that 10th man came in and said: what if all this evidence pointing to an upcoming zombie pandemic is true? Let's prepare. That was the outside view in action. And given Kahneman's origins in Israel, I thought it was salient to bring up. One other thing we may have missed is that this book and its findings are very applicable to sales. The first time I ran into this book, read most of it, and learned about System 1 and System 2 was from sales consultants. It's the idea of the power of first impressions, which is System 1; framing effects; the aversion to loss; and the power of stories. Those resonate a lot in the sales ecosystem.
Drawing on cognitive biases and associations to make sales. I found Kahneman's focus on simple, digestible language incredibly useful for sales: not coming off as too confident and cocky, and instead asking more questions. The idea of anchoring in pricing, which we spoke about through Chris Voss as well. All of those are pillars of the consultative selling process. And while Kahneman mentions branding and marketing to some extent, I think this is also a great book for sales leaders.

David Kopec

I have an anecdote that relates to the idea of System 1 versus System 2 in computer programming. Back in the 1960s and 1970s, research in computer chess was going in two directions. The programs of that era, especially the 60s, were barely amateur level, and researchers asked: how are we going to get them to master level? How are we going to get them as good as the best humans? Some people thought we needed really deep search: fast computers with algorithms that can go as deep as possible. What is this player going to play, then what will the opponent play in reply, then the original player in reply, back and forth. Can we go 10 moves into the future? And then there were other people who thought we needed to make computer chess programs that think like human beings. We should use the heuristics a grandmaster would use and program as many of them as possible into the position evaluator. That might take a long time per position and mean the program can't look that many positions into the future, but because we're getting these great heuristics from grandmasters, we'll have a really good program, because it will think like a grandmaster thinks. These competing schools of thought faced off at computer chess tournaments over the 60s and 70s, where computers would play against each other. My dad was actually on the team for Dartmouth's computer chess program, I think it was called the Dartmouth Chess Program, in the early 1970s. They were really going all in on making it think like masters think. My dad was a chess master at the time, and so he was giving input.
He worked on the programming a little too, I think, but mostly he was giving input like: this is a heuristic we should use. They consulted a few masters, who gave their input on the type of evaluation the program should do. The best program at the time, the best in the world, was from Northwestern University. It was just known as Chess, actually, and it would win every tournament. And what it did was the deep search: it didn't use a complex evaluation function. It tried to look at the position as quickly as possible, look at what the opponent could reply, go back and forth, and go as deep as possible. It was winning doing that. Then it played the Dartmouth program, which was actually really good, and the Dartmouth program tied it. So this approach of using System 1 style heuristics, like a grandmaster would use, worked once. Then the Northwestern program got a little faster, searched a little deeper, and the Dartmouth program never beat it again. And that's actually how chess programs became master level, then grandmaster level, then superhuman: by getting deeper and deeper search, not by spending a ton of time on the evaluation of any individual position. Interestingly, with neural networks we're now going back a little the other way, making the evaluation function more sophisticated, and that's led to some further improvements. But the way we originally got to master, grandmaster, and then superhuman levels was really deep search, which you could think of as System 2, doing as much calculation as deep as you can, instead of quickly applying many different heuristics, like the System 1 of a human grandmaster.
So I thought it was an interesting anecdote, because it shows that System 1 versus System 2 doesn't just apply to humans, but also to the development of algorithms themselves.
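As a side note for programmer listeners, the two schools of thought in that anecdote can be sketched in a few lines of code. This is purely a hypothetical illustration, not the actual Dartmouth or Northwestern programs: a depth-limited negamax search over the simple game of Nim (take 1 to 3 stones; whoever takes the last stone wins), run either deep with a know-nothing evaluation (the "System 2" style) or shallow with an expert heuristic baked into the evaluation (the "System 1" style). All function names and the heuristic are our own invention for the sketch.

```python
# Toy sketch: deep search with a cheap evaluation vs. shallow search
# with an expert heuristic, in the game of Nim.

def moves(pile):
    """Legal moves: take 1, 2, or 3 stones (never more than remain)."""
    return [m for m in (1, 2, 3) if m <= pile]

def cheap_eval(pile):
    """Knows nothing about the game: every unfinished position looks equal."""
    return 0

def expert_eval(pile):
    """Expert-style heuristic: a pile that is a multiple of 4 is lost
    for the player to move (the known winning strategy for this game)."""
    return -1 if pile % 4 == 0 else 1

def negamax(pile, depth, evaluate):
    """Value of the position for the player to move: +1 win, -1 loss."""
    if pile == 0:
        return -1  # the previous player took the last stone and won
    if depth == 0:
        return evaluate(pile)  # ran out of lookahead: fall back on the heuristic
    return max(-negamax(pile - m, depth - 1, evaluate) for m in moves(pile))

def best_move(pile, depth, evaluate):
    """Pick the move leading to the position worst for the opponent."""
    return max(moves(pile), key=lambda m: -negamax(pile - m, depth - 1, evaluate))
```

From a pile of 10, deep search with the know-nothing evaluation (`best_move(10, 10, cheap_eval)`) and a one-ply search with the expert heuristic (`best_move(10, 1, expert_eval)`) both find the winning move of taking 2, leaving a multiple of 4. In this tiny game the two approaches tie, much as Dartmouth once tied Northwestern; historically, it was the deep-search side that kept scaling as hardware got faster.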

David Short

It's funny that you tell that story. Not that they need the promotion, but Magnus Carlsen went on Joe Rogan last week, actually, and had a pretty interesting conversation with him about the different kinds of computer programs, the way that he's used them, and the way he thinks they've changed young chess players as well. So if people are interested in learning more about that from the greatest chess player of our generation, you can hear him on the number one podcast.

David Kopec

Yes. And hopefully Joe Rogan will link to us as well. So what were your biggest takeaways from the book? What was the number one thing where, after reading it, you thought: wow, this is a big deal?

David Short

Yeah.

Kevin Hudak

I think the biggest thing for me was being able to put a name and an acronym to the "what you see is all there is" bias, the WYSIATI that Kahneman mentions in almost every chapter. The idea is that we latch onto available evidence and don't consider the absence of other evidence in our decision making. What we need to do is pause our System 1 and bring that System 2 thinking to bear. So if you read this book and take away one thing, leaving aside the replication crisis that we talked about, the anecdotes, the talk track: if you leave this book thinking, I need to look outside of what's provided and look for the evidence that's not there, or at least consider the unknown unknowns and the known unknowns, then I think you'll be a better statistician, a better career professional, a better executive, a better person. That was really my big takeaway. In addition to all the great stuff, I really enjoyed the chapter about the two selves that we didn't get into, the remembering self and the experiencing self, particularly when we're talking about mental health and wellness and life satisfaction. There's so much more in this book that we didn't even cover here, and so many takeaways.

David Short

For me, it was really the exercises themselves that were interesting. I sort of understood a lot of these concepts already, but actively going through them was the most valuable part of the book for me. So I really appreciated that inclusion.

David Kopec

For me, it was actually just the basic idea of System 1 versus System 2. Now I go through a lot of decisions in my life and think: am I doing this just based on my biases, my automatic response, or am I really thinking the decision through? I think that's already helped me make some better decisions, both at work and in my personal life, to always ask where an instantaneous reaction is coming from: did I do any calculation, or was it just System 1? Okay, the big question we always ask: do you recommend this book? And if you do recommend it, who should read it?

David Short

I'm going to be honest, I really don't recommend the book. I don't know if I've ever done that before, actually. I just felt like it did not need to be as long as it was. It felt very repetitive, to be honest. I also feel like I'd learned a lot of this from other places before reading it, which is a testament to what an important book it was and how wide-ranging its impact on other work has been. But I guess the point I'm making is not that it doesn't have a lot of great insights, just that they really were beaten to death. I feel like it was just...

Kevin Hudak

You know, a lot of the same concepts.

David Short

Over and over and over again. I really don't think it needed to be as long as it was for what I got out of it. It's probably one of those cases where there are two or three articles that he and Amos Tversky wrote, maybe 60 pages in total, that get you 99% of the value.

Kevin Hudak

And I'd agree with Short on the underpinnings of his non-recommendation, but I would say that I definitely recommend the book. I asked Kopec earlier whether it's a sign of the times that we're having this replication crisis around the reproducibility of some of these social experiments. And I think that when this book came out in 2011, there was probably a need to make it a bit longer, a bit more redundant. Whereas today, and I'm not saying, Short, that you're expecting shorter, more concise things, we're just used to a different kind of business book. I think this is almost the last of the business book epics, the long volumes. And it's important, right, because it's important to understand the underpinnings of concepts that we now almost take for granted from hearing them repeated so often. When I'm reading a Malcolm Gladwell book, like I did earlier in my career, I feel like I'm in an airport or a travelogue with him. In Kahneman's book, I felt more like I was in a faculty lounge or a classroom, and there's certainly a place for that in our lives. So I definitely would recommend this for all manner of young professionals and leaders, to give them a sound understanding of ways to approach the world with maybe some more System 2, statistical thinking. As a funny aside, this is actually one of my fiancée's father's favorite books, and he quotes from it a lot. He uses it in his everyday teaching and everyday life. And I really did enjoy it, and that's not just pressure from my future father-in-law.
I thought it was interesting that when we got engaged, he pulled together our family and had them write down on slips of paper when they each thought the wedding would actually happen. It was interesting to then read in the Kahneman book how, with forecasters putting their heads together like that, estimating independently and without any further talk biasing one another, you can often get a more accurate estimate of when something will occur or what the outcome of an experiment will be. At the time I thought it was out of character for him to gather everyone and do that, but it actually came from some learnings in the Kahneman book. So I'm looking forward to making all those estimates come true.
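For the statistically inclined, the aggregation trick behind that anecdote is simple enough to sketch in a few lines. The numbers below are entirely hypothetical, just to show the mechanics: collect independent estimates before any discussion, then combine them so that individual biases partly cancel.

```python
import statistics

# Hypothetical, illustrative guesses (months until the wedding),
# each written down privately before anyone talked.
guesses = [10, 14, 12, 18, 11, 13]

# The mean is the classic "wisdom of crowds" aggregate;
# the median is more robust if one guess is wild.
average = statistics.mean(guesses)   # 13.0
robust = statistics.median(guesses)  # 12.5
```

The key requirement, as the discussion above notes, is independence: if the guessers talk first, anchoring pulls the estimates together and the errors stop canceling.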

David Kopec

I agree with both of you. I think it's a little longer than it needs to be. I was actually talking to my physical therapist, and he said, oh, you know, I read half that book, and we still had a pretty full discussion about the parts of it that I found interesting. The book is split up into five parts, and I would recommend to our listeners that you just skip parts four and five unless you're particularly interested in those topics. You can read the beginning of each of them if they really capture your imagination; they're not badly written, they're fine. But the main things people talk about from this book are all covered in parts one, two, and three, which are roughly the first 250 pages. And you might say, you guys are lazy, it's only a 400-something-page book. But you know what we read? Titan, two seasons ago, and that was a 700-page book about John D. Rockefeller. It took me longer to read this 400-page book than it took me to read that 700-page book, and the reason is that this is just dense, super dense. Not only are the words literally small on the page in the physical edition, but there's so much you have to think about as you're reading. You're using System 2 the whole time, whereas with a lot of books you're using System 1. When you're reading about Rockefeller, you think, oh, that's interesting about his life, and you keep going. But with this book, you have to stop and really process what he just said. Sometimes I found myself rereading a page a couple of times. So it's a dense, heavy book, and it will take you much longer to read these 400 pages than the average 400-page book. For most people, I think you can get all the value out of the first 250.

David Short

If you want a somewhat more approachable version of this book, Michael Lewis wrote The Undoing Project, which is a breakdown of Amos Tversky and Daniel Kahneman's work. I looked it up, and it actually has slightly lower ratings, so maybe I'm wrong in feeling this way, but I found that it has more of a story, about their relationship and so on, that draws you through it in a way that's different from the pure academic kind of work that this is.

David Kopec

It's funny you mention that. My physical therapist read that book too, and mentioned it when we were talking about this one. He said it was better, that he liked it, and he read all of that one but only half of this.

Kevin Hudak

So anyway, Kopec, I hope that you're focusing on physical therapy as well. It sounds like you're holding a salon, or the agora, in your physical therapy appointments.

David Kopec

He's a great guy, and the reason I don't get better is not his fault. I just have trouble healing tendons. I have a sprained ankle right now, and I've had all these other sprains, and I just don't get better. So we have plenty of time to talk, because my body's not responding.

Kevin Hudak

I'm sorry to hear that, but I love the shout outs. We have my future father in law. We have your physical therapist. This is great.

David Kopec

And don't forget, this was the last book my father read, so this was a special episode. And our listeners picked this book; we've never done that before. So thank you, listeners, for picking a book that all of us thought was too long. Okay, speaking of books, let's get into our next one. Another bit of listener feedback is shaping the show: we're going to be doing a book related to the video game industry, because our listeners said in a survey that they want us to cover one. We're not ready to announce what that book will be, but we'll be back next month with it; we're making some final decisions amongst the three of us. Okay, before we say goodbye, is there anything any of you want to plug, and how can our listeners get in touch with you?

David Short

You can follow me on X @davidgshort.

Kevin Hudak

You can find me on X @hudaksbasement. That's H-U-D-A-K-S basement. And I highly encourage everyone to follow Short; I've been liking a lot of your posts recently, Short, and it's a super interesting feed you have there.

David Kopec

Yeah, David is great on X. I'm on X too; I'm @davekopec. That's D-A-V-E-K-O-P-E-C. We'd like to thank our friends at Audible for sponsoring this episode. Don't forget to check out the offer at AudibleTrial.com/biz. That's AudibleTrial.com/biz, which we've also linked to in the show notes. I want to remind everybody, this is very important: you've listened to us now for about an hour and a half, so go rate the podcast. Go on Spotify, go on Apple Podcasts, and leave us a review. It really helps other people become aware of the show; the more reviews we have, the more popular the show looks. Not that we're not popular enough, we're doing pretty well. And with that in mind, don't forget to subscribe on your podcast player of choice. Follow us on Spotify, follow us on Apple Podcasts, and you'll never miss an episode. Our episodes come out on a monthly basis, and we would love to have you along for the ride as we continue season five. Talk to you next month.
Thank you to our friends at Audible for sponsoring this episode. Check out AudibleTrial.com/biz for a 30-day free trial of Audible and credits towards a free audiobook.

Show Notes

Follow us on X @BusinessBooksCo and join our Amazon book club.

Edited by Giacomo Guatteri

Find out more at http://businessbooksandco.com