[S5E8] What You Need to Know About AI with James Wang
A Primer on Being Human in an Artificially Intelligent World
Transcript
Artificial intelligence is both a threat and an opportunity for everyone's career. In this episode, James Wang, the author of What You Need to Know About AI, will help our listeners navigate this complex labor landscape. Welcome to Business Books and Company. Every month we read great business books and explore how they can help us navigate our careers. Read along with us so you can become a stronger leader within your company or a more adept entrepreneur. But before we get to the book, let's introduce ourselves.
Speaker B:I'm David Short. I'm a product manager.
Speaker C:I'm Kevin Hudak, a Chief Research Officer at a Washington, D.C.-based commercial real estate research and advisory firm.
Speaker A:And I'm David Kopeck. I'm an associate professor of computer science at a teaching college. Although the field of artificial intelligence has been around since the 1950s, it's only in the last few years, with the rise of large language models, that the general public has truly taken notice. Apps based on LLMs like ChatGPT have already upended the labor market, creating productivity gains in some roles while eliminating others. In James Wang's new book, What You Need to Know About AI, he walks us through not only how modern machine learning works, but also what one needs to know to stay competitive in an AI-inclusive world. The book provides a wide perspective, including historical context, practical advice, recent developments, and enough technical explanation to give everyone a sense of the possibilities and limits of this exciting technology. James, thanks for coming on the show to discuss your book.
Speaker D:Super excited to be here. And I have to say I'm really excited about this podcast in particular because you guys go pretty deep in questions. So I'm excited to jump into different territory that doesn't usually get covered on this sort of thing.
Speaker C:And not only that, James, but I would add for our listeners who are listening to this episode, this was an episode that was actually live-streamed on YouTube as well. So we're saying hi to any folks who are viewing us live on YouTube, and you may experience a little bit of a different experience here, including us tripping over our words and not being able to edit this. I've had a couple of experiences there. But it's going to be an interactive audiovisual exercise that we're doing here. So thank you for joining us.
Speaker A:So James, before we dive into the book, we'd love for our audience to learn more about you. So if you could tell us a little bit about your background, how you first got interested in artificial intelligence, and how your career led you to writing What You Need to Know About AI.
Speaker D:Yeah, sounds great. So in terms of what led me in this direction, or maybe just talking about why I ended up writing the book to begin with: I really found that, especially as people were getting both very worried and excited about AI, there weren't a lot of great resources for people to jump into. There were plenty of, like, great newsletters. A lot of the technical publications were quite good. But in terms of a regular person trying to make sense of what's going on, I just really didn't find any good resources out there. So when a publisher came to me and said, like, hey, let's publish a book, I thought, well, this actually seems like a really interesting area. Their initial reaction was like, okay, your pitch sounds good, it's timely. It does sound a little bit like a textbook, so we're not sure how it'll do, but let's see where we can go with this. The reason why I felt like I was able to bring something to this area is I tend to intersect a lot of different kinds of disciplines. I have a technical background myself; I have a master's in computer science and machine learning. I was part of the investment team at Bridgewater Associates some years back. Afterwards I did a stint at Google X, helping do the pilot launch and announcement work for one of their energy projects out there. I did a lot of stuff in startups around Berkeley and Stanford, especially during the maker era, so I got pretty enmeshed in both the hardware ecosystem and, you know, the software startup ecosystem. And at this point I'm also an investor at Creative Ventures, which is an early-stage deep tech fund. We've actually been investing in AI since 2016, so before it was cool, and also before a lot of the current hype and everything started to take over the market. So I think having all those different pieces of it, the business side, the technical side, and just understanding the societal side, I thought that this would be a great opportunity to help people understand, hey, what the heck is going on?
Speaker C:And don't forget, James, I saw a mention in the book, and you can tell me this is true or not, that your undergraduate studies at Dartmouth were actually in philosophy. Now would you say that that philosophy degree sort of helped you at Bridgewater and then in your career beyond?
Speaker D:I would say so. And I don't know if the listeners know, but this is a pretty Dartmouth-heavy crew here in terms of the group, and Dave and Hudak and I, like, we actually knew each other in college and stuff. But yeah, in terms of philosophy, so I was actually philosophy and economics. I'd say it definitely helped being able to think about a lot of these things within an interesting historical context. Bridgewater is a kind of weird place too. We actually had quite a few investment folks who did pretty well who had PhDs in metaphysics. So it's kind of an odd place in terms of that. It takes all types, I would say.
Speaker C:Yeah, that's interesting. Whenever I do alumni interviews, I always kind of recommend to the students, the candidates, that they, you know, sort of explore some of those other majors and classes. And I think for our listeners who have kids, or for our listeners who might be thinking of applying to college or going to master's programs, sometimes opening up beyond the typical STEM field is important. But I do have a substance question for you, James, just to kick off right into the book discussion. You know, your book provides some great historical context for the layperson. And Dave Kopeck and Dave Short here, I would say, are very technical; they understand, they have computer science backgrounds. I am one of those laypeople. And I was actually very surprised to learn that the first kind of AI bubble, where AI gained prominence, was in 1988, and that's actually when the AI conference hit some of its peaks up until recently. You really did a great job kind of sketching out the different eras of AI, and particularly these dueling schools of thought within the AI research community over the decades. There's that symbolic school versus the connectionist school, and then there comes the deep learning revolution. Can you provide a brief sketch of those dueling schools of thought around AI?
Speaker D:Yeah, totally. And this is one of those things where, especially if you're pretty deep in the field, it's hard to know actually where to start, because technically you could rewind all the way back to the 1800s, 1900s with some of the theory. You could go back to the 1950s and the first AI conferences. The first conference on AI came together at Dartmouth, might I add. And you can basically pick any sort of starting point. I thought the 1980s was interesting in part because that was actually when it went out to become a commercial, I wouldn't necessarily say success, because ultimately that bubble inflated and then burst. But it definitely gained a lot of prominence within the commercial sector, and even within the national security sector, where I point out the US and Japan at the time were actually competing in terms of AI, doing national initiatives to try to see who would be able to get further. So the interesting thing there is, during that particular time period, it was actually a fairly different school of AI. As you talked about, there are two different schools of thought in terms of how AI can go. One is symbolic, which has more structure, is built around knowledge, and doesn't have a lot of the challenges around hallucinations that our current AI does with deep learning. The other side is connectionist, which was originally biologically derived. But the idea there is essentially you can take something somewhat more generic, put in a lot of information, create connections almost like brains, so like neural networks and whatnot, and then build something from that as a whole. With deep learning, at this point, that school is ascendant, essentially. But we're actually seeing a lot of these techniques get hybridized. Listeners may know about AlphaGo; that's actually one example of a system that's fairly hybridized. Deep learning has been able to scale far better than the symbolic school, because it's hard to hand-build these kinds of systems and figure out how to make them robust. But even so, it's interesting to see that it didn't have to go in this direction, it didn't go this direction in the 1980s, and this entire hype and boom around AI isn't totally new.
Speaker B:Yeah, speaking of hype cycles, we certainly are seeing a lot of hype around AI right now. Every other month we hear about another $100 billion investment or some kind of partnership, and with recent advances in LLMs we're actually seeing it in a way that's very different than maybe what had happened in the '80s, in that there actually are hundreds of millions of users and real, real-world use cases that are happening, and people are seeing the impact in their daily lives. In the book you do talk about those previous hype cycles and winters. Could you go into those a little bit more in terms of what led to the hype, what led to the fizzling, and maybe a little bit about what you think about right now: are we in a bubble, or is it justified?
Speaker D:Totally. I mean, each cycle that comes along, and again, the main one being the 1980s, essentially the big push there was, hey, we can actually now automate some level of knowledge work, just like right now we can automate some level of intelligence. And because of that, you can have huge productivity gains in areas that we've never been able to automate in the past. That's kind of the promise of it. The problem with the one in the 1980s, with expert systems, which were the encapsulation of a lot of these symbolic systems, is the way that they generally worked: you put a lot of an expert's knowledge into it, you put rules around it, and then it would output for you the result that it's supposed to have. So one great example is a system out of Stanford called MYCIN. It was meant for infectious disease diagnostics. The challenge with systems like that, which MYCIN also showed, is that a lot of times the real world is kind of messy, right? So you don't necessarily get a perfect match in terms of here's the knowledge that we have and here's some of the rules that we have. You don't always get something that you can actually output. So in those cases the systems would actually not give you an output. It would say, I don't know, which in the case of certain things like diagnostics or, say, legal cases may be a good thing. I think certain folks who use current LLMs might have actually appreciated the fact that, okay, it stopped and didn't give you something. The problem is, in real-world circumstances, having that happen a lot and then needing to go back and go, hey, let's fix this case or fix this thing where it isn't giving us an output when it should, gets really pricey, it gets really expensive. And if you have that bog you down enough, suddenly the automation is not so useful. It becomes actually a bigger overhead, and you'd rather just hire people to fill in that gap. That's what ultimately happened to expert systems in the '80s. And the really interesting unlock with the current deep learning systems is they work much more loosely. They make loose connections, loose associations; things that rhyme and look okay, it will work with and output. The downside, of course, is it has hallucinations. But as we've seen with a lot of the different use cases and everything, it is actually better in most cases to have something that is able to output something roughly right, and then ideally get checked by a human, rather than not being able to output something at all. So one analogy that I make in the book is it's like two different kinds of students. One kind of student is the one who memorizes facts and can always give you the exact right fact, the exact right year, the exact right people. Then you have a second student who doesn't really memorize facts but kind of gets the rough sense of what the content is, right? If it's a history student, they know roughly the contours of history and kind of where things went. In a test, for example, or some case where you really, really need to get the precise right answer, the first student who memorized the facts is going to be better. But in most real-life circumstances you don't need the precise facts, right? In most real-life circumstances, a lot of times it's okay to actually just have a rough sense and rough idea and be able to go from there.
So in a lot of real-world cases, we know that the second student, who has a good intuition around what the right answer is, is a better real-world employee than the first one. So that's kind of the analogy of what deep learning is like. And again, it's great at a lot of different things, but it does also get you into trouble when you do have a high-stakes kind of circumstance that does require a precise answer. And some of that intuition is what I try to build in the book. Because for a lot of these systems, they're not magic, but if you read about it in the press, you don't really get a good sense of, like, hey, this is what it is or what it isn't.
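To make the failure mode James describes concrete, here is a minimal, purely illustrative Python sketch of a 1980s-style rule-based system. The rules and symptom names are invented for illustration; this is not MYCIN's actual knowledge base.

```python
# Toy expert system: hand-written rules over symbolic facts.
# Rules and symptom names are invented; not MYCIN's actual rule base.

RULES = [
    ({"fever", "stiff_neck"}, "possible bacterial meningitis"),
    ({"fever", "cough", "chest_pain"}, "possible pneumonia"),
]

def diagnose(symptoms):
    for required, conclusion in RULES:
        if required <= symptoms:      # a rule fires only on an exact subset match
            return conclusion
    return "I don't know"             # messy real-world input: no output at all

print(diagnose({"fever", "stiff_neck"}))            # matches a rule cleanly
print(diagnose({"fever", "fatigue", "headache"}))   # no rule fires -> "I don't know"
```

Every "I don't know" meant an engineer going back to hand-write another rule, which is the maintenance cost that made these systems so expensive. A deep learning model, by contrast, always emits its most plausible guess.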
Speaker C:Yeah. I love how you said they're all statistical models and they emphasize plausibility, not necessarily accuracy.
Speaker D:Right.
Speaker A:I love how you covered all of that historical context in the book. I think it's so important for readers. My dad did his PhD in artificial intelligence in the 1980s, so he was all about expert systems. And he wrote a textbook about AI in the early 2010s, and you'd be amazed how little machine learning content was in that textbook. So what I want to go forward to now is what you cover so well in the book, which is the rise of machine learning, deep neural networks, and how that led to LLMs eventually. So can we kind of fast-forward in the history to the 2000s, maybe the '90s, Yann LeCun, and how that led to LLMs?
Speaker D:Yeah. The way that I lay out the framework is around why deep learning took off at that particular time, because the theories aren't new. Like, you know, a lot of this stuff for machine learning we knew back in the '80s as well. In fact, backpropagation, which is one way of training our current networks, also emerged out of the '80s. So we had the techniques. So what was the thing that was missing? There are three elements that I've generally called out in terms of modern AI, i.e., models, data, and compute. We had the models back then. I mean, we've improved, but we had the rough idea of what the models were back then. But we had neither the data nor the compute. The data came especially as we got the Internet, in terms of web-scale kind of data. So I talked to, like, early employees at Google and whatnot, and as comes as no surprise to anyone, Google obviously had utilized a lot of these techniques within ML, especially within the 2000s kind of period. So obviously PageRank, some of these things. You can argue whether that's machine learning or just clever algorithms, but they utilized machine learning more and more and more over time, until it became something that you could probably reasonably call AI in the current landscape, even within the 2000s and 2010s. So with all of that happening, it's like, okay, we have data at Internet scale, and then you have compute. As everyone knows, computers have gotten faster and faster and faster over time. It's not an exaggeration to say that back in the '80s it kind of looked dumb to say, hey, we're going to dump this much data into a neural network that's this many layers deep, because you could never make that work. If you tried to calculate it out, how many hundreds of years would you actually need in terms of compute to make this thing actually go through? We had the data with the Internet, and we ended up with compute such that you could actually now build pretty huge models and actually have it work. One particular milestone I really like calling out, just generally speaking, is the ImageNet challenge, which was a benchmark competition where you essentially take a bunch of different images and then see if you can label them with, like, a thousand or so labels: what kind of image is this? That's actually a pretty hard problem if you think about it. For a machine, it's a bunch of these pixels; how am I going to tell that this is a cat, not a cat, kind of thing. And it's interesting, because even in the early 2000s we were actually pretty bad at that. If everyone remembers the CAPTCHAs and everything, and trying to point out, is this a this thing or is that a that thing, we were trying to train up these models, because a normal human could do great, whereas AI even back then couldn't really do anything particularly well. That changed around 2015, where I like to call that out as a milestone, which neatly actually gives us 10 years of AI as well. 2015 is when AI computer vision started to hit human levels of performance. So one of our core senses, perhaps, you can argue, our most important sense: suddenly AI is actually able to match or actually beat human levels of performance, which had never been done before. So it's pretty exciting to actually see. At that point it's like, oh, all of a sudden now AI is really becoming a thing.
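For readers who want to see what that ImageNet-era capability looks like from the keyboard, here is a minimal sketch of running a pretrained ImageNet classifier. It assumes Python with torchvision 0.13 or later and a local photo named cat.jpg; neither the library choice nor the file name comes from the book.

```python
# Minimal sketch: classify one image with a model pretrained on ImageNet.
# Assumes torchvision >= 0.13 and a local file "cat.jpg".
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT   # pretrained ImageNet weights
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()           # resize/crop/normalize pipeline the model expects
batch = preprocess(Image.open("cat.jpg")).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top5 = probs.topk(5)
labels = weights.meta["categories"]         # the 1,000 ImageNet class names
for p, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{labels[idx]}: {p.item():.1%}")
```

The thousand-way labeling task James describes is exactly what this model family was trained on; by around 2015, architectures like this had passed typical human error rates on the benchmark.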
Speaker C:Well, it's funny, in the book you mention that you were that annoying person at parties who corrects people when they say that, you know, AI really arrived when ChatGPT hit the scene in November 2022. And you bring up what you just mentioned, that it really arrived more around 2012 and then started replicating human-level performance in 2015. But still, ChatGPT in November 2022, that's when it would start its trajectory to become sort of the Kleenex of AI, right? Folks would just associate LLMs and AI with ChatGPT. It seems like that was a big inflection point. And at that point I think you were already on the board at Uncoustics, which I'm going to ask you about later, a really cool AI-driven company. I'm wondering, how did you feel personally in that November 2022 moment when ChatGPT was unleashed on the world? Now that this is not necessarily a trade secret and it's coming out for mass consumption, what were your thoughts?
Speaker D:Yeah, I mean, at that time OpenAI had already shown what GPT-3 could do. ChatGPT's capabilities weren't really different; it's still actually the same kind of capabilities they'd shown, like, a couple of years before. The big interesting thing was putting it in an interface that people could use super well. Those kinds of moments in history are always kind of unpredictable, right? When does a particular product or a particular service end up taking off and really capturing public imagination? That's always been kind of unpredictable within the startup landscape. Some of it is, yeah, you need to be in the right place at the right time. Some of it's serendipity. My thought when I originally saw it was that it was interesting that people found it, again, so interesting. But I didn't quite grasp it, even at the time, for myself, because I knew what GPT-3 could do. I'd seen a lot of the things happening; there were other interesting models coming up the pipeline. I didn't quite grasp it until, like, you know, random people on the street started going, oh, ChatGPT, ChatGPT, these other things. And it started spreading so rapidly after that. Again, it's a little bit of serendipity in terms of putting it out there, because you can argue, and I do, that Google really should have taken the crown on this. I don't know if a lot of folks know, but the original paper, the original techniques that made GPT-3 and ChatGPT what they were, was actually invented by Google back in 2017. I've talked with a number of people within DeepMind and actually a lot of divisions of Google who are kind of like, yeah, it is kind of interesting. We just published it. And not only published it and kind of gave away the game; we also didn't do anything with it and essentially just let it sit there until, you know, OpenAI just sort of took away the public imagination and went off to the races.
Speaker B:Pushing along that path a little bit, I guess. What do you think people really should get excited about with regards to AI? And I know you spend some time in the book kind of dispelling some of the fears, but what are the legitimate concerns that they should be worried about? And if you want to spend a little bit of time on the dispelling some of the apocalyptic fears as well, I think that some of our listeners would probably benefit from that as well.
Speaker D:Yeah, totally. I mean, this goes back to the motivation in writing the book to begin with. You know, technical folks, actually a lot of people who worked in deep learning, have been shocked by how far AI has come. Right. I don't want to downplay that side of it in trying to, like, dispel myths or whatnot. It is actually super impressive that something that effectively is a super fancy autocomplete can do things like follow instructions, you know, connect with different interfaces, and essentially do interesting tasks and things. But at the same time, if you think about it, imagine, you know, your iPhone autocomplete or whatnot, and you just keep pressing the accept button for, like, the next word, next word. You could imagine that a super, super good one would theoretically be able to have really interesting conversations going back and forth that emulate something that you probably would have with a person. And if you think about that, okay, being able to do tool use, being able to connect to these things, that is surprising, but it's not totally outside the realm of possibility. A lot of normal people's sense of what this is or isn't is actually much more grounded in science fiction. I think science fiction is cool. It's actually what brought a lot of people, myself included, but a lot of folks at DeepMind, a lot of folks at OpenAI, to the table. It's the imagination, the excitement of it. But the reality is this stuff isn't like Skynet. It isn't like the different AIs within these different books or whatnot. What it ultimately is, as said, is an association machine, it's a statistical model, it's a super fancy autocomplete, and if you keep that in mind, you get a sense of what it can or can't do. So what can it do well? Actually, if you think about it, there are a lot of different tasks, and I actually interviewed some translators in the book, like some of those tasks that are not super high stakes and are just things especially around language; especially for LLMs, that's pretty easy to fill in. Like, if you think about the typical translator on Fiverr, what kind of jobs do they generally get? Well, it's a lot of jobs like, for example, hey, I'm in the EU, I now need this in, like, German and French as well. I don't really care if my website reads that well or whatnot; I just, you know, need to hit the regulatory requirement or something like that. Not super high stakes. You could hire someone on Fiverr, or you could throw the entire thing into an LLM and see what it spits out. Similar kind of thing in a lot of other tasks. Now, moving away from language, the big one that I talk about in the book that I think is notable is, say, driving. As far as driving, is it about someone who's super productive or super whatever it is? Well, no, I just want to get from point A to point B. There's not much that is added from a human aspect in terms of that. But that being said, these things are still statistical models. They're still fancy autocomplete machines. If you're thinking that they're suddenly going to come and replace all humans: if you actually look at most tasks that are the most interesting right now, it's usually in complement with humans. So they will hallucinate, and that's pretty much a thing that'll happen no matter what.
They will come up with interesting ideas, but at the end of the day they're not able to actually truly be creative, because, again, fancy autocomplete, statistical model: they can emulate having a conversation really well, but they can never actually wander outside of the data set that they actually have. So that's one of the big things where, well, people and especially experts can actually make use of these systems to be super productive and be more generalist and be able to see these different things, without it actually replacing them. But I'll stop there, because I sort of went on for a while.
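The "keep pressing accept" picture maps directly onto how LLMs actually generate text. Here is a minimal sketch of that loop, using the small open GPT-2 model via the Hugging Face transformers library; the model choice and prompt are our illustration, not anything from the book, and real chat systems add sampling, instruction tuning, and far larger models on top of this same core.

```python
# Minimal sketch of "super fancy autocomplete": repeatedly append the single
# most plausible next token. Assumes the transformers and torch packages;
# GPT-2 is a small stand-in for the much larger models discussed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("The report was translated into German and", return_tensors="pt").input_ids
for _ in range(20):                      # press "accept" twenty times
    with torch.no_grad():
        logits = model(ids).logits       # scores for every possible next token
    next_id = logits[0, -1].argmax()     # greedy: take the most plausible one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tok.decode(ids[0]))
```

Nothing in the loop checks truth; each step only asks which token is most statistically plausible given what came before, which is exactly why hallucination is built-in behavior rather than a bug to be patched out.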
Speaker A:No, that's great. And you know, one thing that I think is great about you as an author, and your perspective in the book, is that you're not just a technologist. You also have a background in economics and finance, and you have an understanding of history and philosophy. That really comes through in the book. And in the book you draw some parallels between the revolution we're going through right now and previous technological revolutions. A lot of people fear not just the wholesale replacement of humans, but of course their individual job being replaced. You spoke about that a bit just a moment ago, but I was wondering if you could go into it in more depth: how, in past technological revolutions, as you point out in the book, ultimately here we are 200 years after the Luddites and still we're at 4% unemployment. So how does that pattern keep repeating itself in history?
Speaker D:Yeah, we have a tendency to be really good at figuring out jobs for people to do. The economy retools itself as a whole. Humans are still super useful in these different things. I mean, as said before, even before some of this deep learning stuff, there are a lot of things that you just need humans to do. I don't know if people remember, but back in the day, even in terms of computer vision stuff, you had a lot of interesting projects where you had human beings basically doing CAPTCHA kind of things, but it was meant to help identify kinds of star systems, like, is this a star, is this a planet, whatever. Human beings, with just the abilities that we have, were able to do that far better than a machine was. But even at this point, being able to check some of these LLMs at a layer, again, that human addition is potentially super useful in a lot of these different tasks. Again, taking that historical perspective, a lot of these technologies do create a lot of short-term disruption. Originally, when I was starting the book, I, well, I wouldn't say poo-pooed it, but the big thing that we all lived through was actually the Internet. And that transition, even though it was disruptive in a lot of different ways, was actually in a way less disruptive than a lot of other things, because to some degree you're taking things and putting them online. There was a lot of growth at the time, a lot of potential economic stuff. Jobs ended up pretty strong. You had some bumps along the way, eventually the Great Recession and everything, but you had a lot of interesting job growth with that. With some of the past technological revolutions, you did have the Luddites end up, you know, out of jobs, because suddenly they were skilled weavers and skilled textile workers where machines were suddenly able to do the jobs they could do, and do them much cheaper. The point that I make in the book, though, is that it's actually a little bit of a moral dilemma if you think about it. To some degree you feel sorry for the Luddites. You feel sorry that, you know, these hard-earned skills were made obsolete. At the same time, suddenly human beings as a society and as a whole are much better off. Back then you needed an entire family's income to maintain, like, having one set of clothes. Now obviously people can buy tons of different clothes and whatnot. You see each of these technological revolutions creating disruption but ultimately making everyone better off in sort of the medium and longer term. And I see a similar kind of thing with AI here, because that's pretty much how every single technological revolution has worked. Again, there are certain jobs where I don't think you can argue that, say, translators on Fiverr, or gig work like Uber driving and whatnot, are not going to be affected by a lot of the current AI revolution. I think they will. But the interesting thing here, and again, I can throw out some frameworks, but it's actually kind of hard to super quickly summarize, which is why this thing is an entire book length: there are certain kinds of jobs that will probably actually get taken over, but there are a lot that won't. And there are probably a lot of different jobs that will actually end up having more interesting things that people can do, and probably end up having people get paid more.
Speaker A:You brought up the dot-com era, and of course that was associated with a big bubble that eventually popped. And we pressed you a little bit earlier, but I do want to press you again: do you feel like we're in a bubble right now? Of course, with the dot-com bubble, it wasn't that the technology wasn't real; the Internet changed everything, but there was still a bubble nonetheless. So of course it's not that LLMs aren't real. But are we in an economic bubble right now?
Speaker D:So my stance is, if we aren't in one, we will be, because inevitably, with every single major technological shift, there's always a bubble. And if you think about it, and I'll get to right now in a second, but just laying out the framework, it is actually kind of rational that it becomes a bubble, right? As stupid as that sounds. Because it's a new technology, how are you going to look back in history and use data to extrapolate how big this will get? You have no idea. It could be a small technological boom or it could be a massive one. You have no idea what it is a priori. And to some degree, if you end up missing the train, that's pretty bad, especially if you're a major incumbent. So a lot of investment, a lot of stuff goes into a lot of these technologies. That happened with the Internet, sure. That happened with railroads. That happened actually with some of the factories and things during the Industrial Revolution. We very much have a bubble every single time. Now, there's a question here of, are we in a bubble right now? There are certainly bubble-looking things. Pretty much every single one of OpenAI's deals is a little bit more announcement and hype and magical value creation by press release than it is an actual thing. But if you actually look at the underlying economic stats, we potentially are actually somewhat early in the cycle still. In terms of total debt spending versus what's supported by revenue, the ratio is actually fairly low right now, even though stocks are very, very, very historically high, especially driven by the Magnificent Seven, which are all US companies and mostly AI-related. So yeah, you can argue stuff in terms of valuations, but in terms of the kinds of things that create a big bubble, in the sense of tons of money going in way, way, way past expectations-land and whatnot, it's hard to say where AI will actually go. It's still diffusing. If you look at quarter three of 2025 US productivity stats, one data point, but we're actually seeing a little bit of a breakout in the trend, where we're seeing a lot of productivity increase for the first time in a long time in the US. Hard to say; we may still be kind of early in the cycle. That would be my argument, especially since AI has still yet to go where it's probably most useful, which is a lot of the more real-world applications, like in healthcare or in materials design, in a lot of these other areas.
Speaker A:Let's take a brief moment to thank our friends at Audible.com for sponsoring today's episode. You can check out great books over at audibletrial.com/biz. That's audibletrial.com/biz, to get a 30-day free trial of Audible.com as well as credits towards your first purchase. Check out the link; we've also put it in the show notes. It's a great opportunity to get those reads in as you start your new year with a New Year's resolution to do some more reading. What's better than listening to a great book from Audible.com while you're on the train or in the car? Thanks to our friends at Audible.
Speaker C:So going back to the technology revolutions and impact on jobs, you know, this is kind of two questions, and I'll start with the first one. One thing I liked about the book was how you bring up the idea of oracles, and that oracles are the human experts who have the talent and experience to sort of supervise the outputs from AI and then catch any mistakes, catch where, you know, we need accuracy, not necessarily just plausibility. You bring up the idea of the senior coders versus the junior coders. We have a lot of listeners who, you know, are early in their careers and aren't yet those oracles. So I'm wondering, you know, how do they protect their roles? I thought it was fascinating, your journey after graduating from college and then jumping around from Bridgewater to different graduate degree programs, your MBA, you know, sitting side by side with these technical experts and gleaning some of that information off them as well. What would you recommend for a junior, not necessarily a junior coder, but someone junior in their company whose role AI might sort of knock out of the window, and who won't have those experiences to become that oracle? How do you bypass that, and how do you survive in an organization as it's shifting like this?
Speaker D:Totally. I think that's a great question. It's actually one that I'm a little bit worried about, and it has become a little bit more of a mainstream worry, especially with software engineering. Like, if you don't have...
Speaker C:A chance to become an oracle, how do you survive?
Speaker D:Totally. And just stepping back in terms of that, to give a quick illustration, think about it this way. Let's say you're trying to write up an industry report on, I don't know, a telecom company or something like that, right? If you are, say, a, you know, investment bank senior analyst who's covered the industry for years, guess what? ChatGPT deep research can write the report for you, and it'll help you a lot. Why? Because you can basically go, hey, look through it: this part looks right, this part looks like it needs more filling in, you forgot about this particular company in the segment, or whatever it is, and fix all the mistakes, because the person has knowledge in the field. Take the same thing, take the same prompt, give it to a college intern and ask them to do it. They'll get the same initial report back, but guess what? They have no way of actually checking any of the stuff, and more or less their role there is kind of superfluous. They're just sort of sitting there going, that looks pretty good, and probably will submit it as the final thing, right? Potentially with all the mistakes, things missing, bad orientation in terms of direction, whatever. AI tends to supercharge experts, because no matter what, it's going to have mistakes; no matter what, it's going to potentially go the wrong direction. You as an expert, though, can utilize the output of AI in a super fast way, where it can help you do your job super quickly. Coders have found that to be the case, where really, really good programmers end up being able to push out a ton of code really fast, and bad programmers just get buried under, oh, my program is getting more and more broken, but I couldn't really understand what the AI was writing anyway, in which case I can't fix any of it, and it just gets progressively worse and worse. It has a tendency of widening these gaps. So going back to, what do you do? I had an interesting conversation with a bunch of tech CEOs and exited founders and stuff, and when I brought up this issue to them during dinner, they were like, oh, I don't see that; the interns that I have are actually doing great with AI and whatnot. And the thing that I would push back with is, yeah, but that's because you, as exited, well-known founders, brought on the 0.0001% of college interns, really good at doing a lot of these different things, super go-getters, et cetera. That is not indicative of the broader population. So we are actually already seeing junior programmers not get brought in as much. We are seeing a lot of those job stats start to fall off, and then that becomes a question of, yeah, how do you actually get senior programmers if you don't have junior programmers? Now, again, going back to what you do in that circumstance, where you can't magically, as a, you know, new graduate, get 5 or 10 years of experience right out of the gate: the big thing that I would suggest, and actually what I've said at lectures when people have asked me, how do I get into entrepreneurship, how do I break into VC, how do I do whatever it is, I would actually just say, follow your personal interests, get really deep into it, and push really hard.
Like we talked about with my philosophy degree and everything: in a way, within an AI world, the thing that's most useful actually is someone who's super smart, a super smart generalist, who's able to put an interesting spin and idea on different things and be able to check these things, depending on what it is. Okay, maybe you aren't able to check the super, super deep technical stuff or whatever, but you are able to bring interesting creative ideas, because you have different experience and different things that you've pushed really, really hard on in these different areas. I've always suggested to people, don't be the cookie cutter where, whatever it is, it's like, okay, I know that there's this path. Like some of the people in investment banking: oh, you do this for two or three years, then you go to an MBA, and then the next step, the next step, the next step. One, that's kind of boring. Even from a resume perspective, you see a thousand of those people every single day in terms of resume scanning. But at the same time, if your value prop is not deep, deep, deep experience in this one thing, and you just take the same path as everyone else, it doesn't give you any interesting, intersectional, diverse, different kinds of experiences where you can actually bring something to the table.
Speaker C:It's a great response. And you know, I thought in the final chapter of the book, you bring up serendipity, and it's not just a romantic comedy with Kate Beckinsale and, I believe it was, John Cusack. You actually say that serendipity is something very important when it comes to your career, and you advise folks to really put themselves out there, because you never know who you're going to meet. You talked about a popular Austin, Texas-based technology and entertainment festival, which I assume is South by Southwest, and just these dinners that you're going to. So I think a piece of advice for some of our younger listeners would also be that idea of serendipity: dive deep into where your passion is, but also get out there as well and, you know, introduce yourselves to folks, and you never know what opportunities you'll find. I wanted to flip my previous question on its head, too, and say: imagine if you are a younger executive now, a partial decision maker at a more Luddite company, not necessarily a Luddite industry, but a company that does not have any AI and is not really embracing this. What's your advice for someone who's trying to create that beachhead? You know, I do a lot of research for companies where I ask their clients: at each step in your relationship life cycle, or for these solutions, would you like technology, would you like AI, that automates, assists, augments, or transforms? I even break it into those four dimensions to understand, as a client of my client, what you would be comfortable with. And I've seen over the past three years, my clients' clients are getting more and more comfortable with AI integrating into their relationship with my client.
Speaker D:Right.
Speaker C:And so I'm trying to demonstrate that to my clients. But if you're working in a firm that is not embracing these solutions yet, what's your evidence kit, your preparation kit, for a young executive who's trying to bring AI into their firm and sort of, you know, managing from the bottom up?
Speaker D:Yeah, I mean, I've seen a lot of different cases in terms of these. There's variation: certain folks have industry regulation, or are, like, you know, government contractors or whatnot, where it's very hard to bring these things in because of whatever it is. And for some of those folks who've actually asked the question on different panels and things, my suggestion is: this stuff right now, especially given it's subsidized, you can get access to super easily. You can basically go out there, utilize all these tools, and get a sense of what it can do yourself, just so you can get a feel for where the edges of the capabilities are right now. So if you stay with the company and then need to make a proposal eventually, once they're actually interested, you can be the one who can actually put together a good proposal and talk about, this is what it can or can't do. And at the same time, if you decide to jump industries and go elsewhere, you also won't be stale in terms of that. On the other side, in terms of organizations that are just slow to adopt or whatnot, it's kind of like everything else. As a younger executive, there's always a new technological wave of something, and there's always resistance within the organization. And at the end of the day, either the organization eventually starts catching up and you become super valuable in helping implement those solutions, or, the realistic answer is, you end up going elsewhere where you can bring those skills to the table and supercharge things. I did this myself in a lot of organizations, actually, with just, like, programming, scripting, et cetera, knowledge. It's surprisingly powerful: you can actually eliminate a lot of low-level jobs that people really shouldn't be doing, just with basic scripting knowledge, let alone anything deeper. But now with AI, you can actually go far, far, far further in terms of a lot of these kinds of tasks, and have it be more accessible even to people who don't have that kind of programming background.
Speaker C:I loved your advice. You said to get your feet wet; I believe you said that right towards the end of the book. So what you're advising is to start just analyzing some edge cases, getting a Gemini subscription, and experimenting. I actually did that. I had my wedding two and a half months ago, and for some of the skits that we did, I had Gemini kind of assess the comedy in my skits. And even though you say that it's a statistical model, it still had a pretty fine knack for some of the comedic bits.
Speaker D:Totally. Well, that's the thing. And this is where having that mental model and understanding what it can or can't do helps. I say it can't be creative, but when I say it can't be creative, what does that mean? Like, what is creativity, right? Creativity has a lot of different definitions. One of the ones I kind of landed on is: it's different than what came before, and it's also good. Now, good is super subjective, and you can't always tell, or whatever it is. I'm not saying that you can't have an AI help you brainstorm. And in fact, a lot of creatives now in the visual arts industries, basically graphic design, and also just in writing as well, are using it to brainstorm. You just can't necessarily trust the taste of what the thing spits out. So if you have, again, an oracle, a human actually looking over that, it can help you get much further. Now again, you know, you're a funny guy. If you had someone who wasn't funny who was trying to use an LLM to make a funny skit, I might argue that maybe the results would not be quite as good.
Speaker C:Are you calling me an oracle, James?
Speaker D:I love this.
Speaker C:So I am an oracle after all. Let everyone know.
Speaker B:Wait, Kevin, how many skits were in this wedding?
Speaker C:So Kopeck was there. We had three skits, and one of them involved a Back to the Future time-travel escapade. And it was funny, because I set the different tones in Gemini, and so it was meant to be sort of a supportive tone, and it was like, you're doing a great job, Kevin. I'm like, thank you very much, you know, Gemini.
Speaker A:So David, I know that you as a product manager often deal with employees who are using these tools all the time. Has it started to change your hiring practices, David? And James, listen as well, because I'd also like to hear how you think managers should change their hiring because of LLMs.
Speaker B:Yeah, so I would say we are definitely hiring fewer junior people at this point. That's something that I've noticed. It's not something I've been explicitly told is a decision, but I've definitely seen fewer recs come through, and we're definitely expecting people to be able to cover more. It's a significant enough thing that every time I meet with my team, I generally have some product that one of us worked on where we leveraged Gemini, which is the tool that we're now allowed to use internally. For a long time we did have restrictions, and our company had hired a bunch of people and done a bunch of "we're going to use open platforms and build our own stuff." And they did develop some developer tools, but they really never got out of developer environments. So finally we just signed a deal with Google, and now almost everyone can use Gemini. Obviously that's a different kind of capability that it gives us, but it certainly increases my productivity. It allows me to cover more things than I could before, and I'm expected to cover more than I was before. But yeah, James, curious what you've seen sort of industry-wide. If we're putting on, I guess, a third hat, different from the two that Kevin asked you to put on before: as a hiring manager, how are you thinking about hiring people relative to AI, the skill set you might be looking for in people, as well as the expectations you might have?
Speaker D:Yeah, I mean, I have fairly opinionated interviewing views, and this is in part from Bridgewater, which, if folks are familiar with it, has really interesting interviewing practices that are basically impossible to memorize for. But I think especially now, it's kind of pointless trying to do an interview where someone is supposed to memorize something, or even certain cases where you can't necessarily memorize it but it's kind of a formulaic thing to walk through. The thing that you're much more interested in is how someone is thinking about a problem and walking through it. And you know, you can always have the answer spit out by an LLM or whatever, but then it's like, okay, let's walk through it piece by piece by piece and see how the person is thinking through it, what sort of things influence them, whatever. That's just on the interviewing side. In terms of the hiring side, again, I've totally seen this in a lot of different organizations as well: junior folks not being brought in. I actually talked with a hedge fund manager the other day whose argument is, yeah, he's been able to save millions of dollars because he doesn't need to bring in any of these junior analysts who would just be doing a bunch of grunt work or whatever. Deep research and other things sometimes give, like, crappy answers, but then again, so do junior analysts. They're able to do a lot of these different things just with AI, and utilize the fact that, you know, they're industry veterans and know what they're doing with it. So yeah, hiring practices are definitely changing; this stuff is definitely getting utilized more. Again, I'd make an argument that the kinds of people who will probably get hired and be able to utilize these tools are going to be people who have more interesting backgrounds, experiences, things that they can bring to the table that are just differentiated in this regard. One colleague of mine from Bridgewater actually made an argument that this is the age of the idea person, the idea guy. Because now, if you have that level of creativity, that level of ability to think through these kinds of things, have good taste, et cetera, you can actually get pretty far with a lot of these AI tools, and put together really interesting deliverables that you wouldn't necessarily have been able to do before.
Speaker C:I find that so optimistic, too, because coming into your book, I was thinking that what's next in AI careers and AI-influenced careers was more a master of prompts and a master of setting guardrails. But after reading your book and having this conversation, I definitely recognize it's more that idea person who can inspire and be inspired, that oracle who is a master of the knowledge, who can monitor the outputs. So, as you say that you're the annoying one at dinner conversations with that ChatGPT question, you know, I can definitely change my talking points around AI after reading this.
Speaker D:Well, if you think about it, even in terms of automation, you see the same thing, right? When you see lines of, say, automotive workers get automated, the people who are left aren't just people who are more and more formulaic or just, like, bang on a thing or whatever it is. You need more and more creative, generally upskilled technical talent, generally upskilled in terms of problem solving and stuff like that, to manage lines of very highly automated automotive processes. And that's pretty much been the case with all automation over time. The Luddites are a very specific example, but even in terms of the people who ended up overseeing the factories and things: you're not literally sitting there hand-doing things anymore. It's not that kind of skilled work. But you need someone to oversee the broader process that's now moving much faster, and you need to troubleshoot these different things. It's the same kind of thing here. All of a sudden, a lot of the rote work that a statistical model literally can spit out becomes automated, and you need someone who can add more than that: a layer of ability to understand, be creative, and add that human level of oversight. That's who plugs in, not just someone who's turning the crank, you know.
Speaker A:James, I really love the book as a whole. I can't say there are a lot of books that I read where I find myself agreeing over and over again, and I wonder if that means I was almost too in line for this interview, but it was such a good book. I want to recommend it to all of my fellow faculty at Albright College, where I work, where we're launching an artificial intelligence degree this fall. And there's a lot of skepticism, as you might imagine, and I hate to generalize like this, but from older faculty about AI in general, and we could have a whole episode about the effects that AI is going to have on education, of course. But one thing that I think a lot of people might find surprising about the book is that there's a bit of a skeptical tone throughout, in some sense. I mean, you talk about everything that is exciting about AI, but you also are very grounded. And I'm wondering how your background as a venture capitalist fed into that grounded attitude, because you're seeing things that are on the cutting edge, and that gives you a really interesting perspective. And also, just as kind of a fun follow-up note: you mentioned in the book some demos from upcoming AI companies that really did not wow you. So if you could speak to both how your background as a venture capitalist came into writing the book and might have grounded a bit of your realistic view on technology, and also tell us about one or two demos, without naming any names of course, and why they sometimes don't wow you.
Speaker D:Yeah, totally. I mean, in terms of venture, the only thing worse than being late is being early. If you're late, guess what, you're a follower or whatever. Sometimes that actually works out, because, you know, the second player or whatever is the one that nails it. The worst thing is to be early, because you project out that this thing can do this thing, whatever it is, and then the company sits there for two or three years without the thing actually happening, and then runs out of money and dies. Or a new competitor comes along, raises a fresh round of capital, and is the one that actually takes off, while your old company is there languishing, tired and exhausted of capital. There are a lot of VCs who are very optimistic about the future and obviously are always kind of leaning forward and whatnot. But especially within the deeper technical areas that we play in at Creative Ventures, you do need to be grounded, because you need to know what the thing actually can or can't do. And you're right, I do actually take much more of a skeptical tone in the book, talking about the limits, just because, especially when I was writing it, it was far more common to hear people saying that AI is limitless, we're about to hit AGI, we're about to, like, you know, Skynet, or the Singularity, all these different things, rather than the other way around. It's like, no, no, no, hang on. It can do really interesting things, it could be really valuable, it has a lot of areas that are super interesting, including, say, drug discovery, a lot of this protein synthesis stuff, a lot of really, really interesting stuff. But it's not limitless, and it's not even truly, depending on how you want to say it, sentient or sapient. It's a really, really interesting tool, but that's currently all it is. And if you know what's underneath it, then you can understand why it can't really go further than that. And you know, that's become less and less of a controversial point. Yann LeCun had, like, his big thing; there's Gary Marcus, another person who's an expert in the field of AI, who's also always been very skeptical. He's in the symbolic school more than anything else. People who know the technology have been surprised about how far it has come. But especially for those who know the deepest about what's under the hood, it's not surprising that AI has been hitting limits, that it will hit limits, that it can't do everything, and that you need new paradigms to make that work. So I just think that part is really valuable, because it takes away some of the fear from people, and ideally it also unleashes people in terms of being excited about what's coming, rather than just constantly worried about what it might do.
Speaker A:And I think that's a good point to start wrapping up. I'm wondering if there's anything we didn't cover that you want our listeners to know about the book.
Speaker D:One of the things I do go through is that there are a lot of areas outside of LLMs that are super, super interesting, a lot of real-world applications. Unfortunately, that includes military applications, which is why AI has become such a geopolitical issue as well. Those are really, really interesting cases. And one of the frameworks I talk about, which is a bit of a prediction but something I think will appeal to a lot of folks, especially seeing all of OpenAI's deals: if you think about it in terms of economics, not all the value is captured by companies. A lot of the value that's created is ultimately socialized. I don't mean the government comes in, confiscates things, and redistributes them; I mean a lot of technologies end up just making the world better. Because there's a competitive economic ecosystem, a lot of people can implement the technologies, so prices go down and everyone's lives get better. That's my argument about where a lot of AI, especially the LLMs, is going. We're going to be really happy to have these things, and they're not going to be totally controlled by anyone either. They're going to be really helpful in all these different areas, which is really exciting. There's misuse potential too, and I talk about that a little bit. But on the whole, I would argue the Internet has actually made the world better. There are aspects where it's a double-edged sword, but it's made everyone's lives better in general; things that were unimaginable before, we now have at our fingertips. It's a similar kind of thing with AI, and I think people should be pretty excited about what's coming next.
Speaker C:And one thing we didn't cover, which I know you mention in the book, is that you've been working with your wife on a women's healthcare startup. One of my quick questions: we always hear it's tough to work with family, but for those thinking of getting into startups with their significant others or family members, how did you navigate that?
Speaker D:Yeah, with Lioness and that startup, the big thing is that it helped that we're always pretty honest and open with each other, and we didn't really step on each other's toes. I've given advice about this before; sometimes it's worked out, sometimes it hasn't. What I can mostly boil it down to is: hopefully you know your significant other well enough to know whether it'll go well or go completely horribly, because I've definitely seen cases where it should have gone well and instead went completely horribly. So yeah, I think it's know yourself.
Speaker C:So this is not the place for serendipity. You want to plan in advance.
Speaker D:That's right. Know yourself and know your significant other. Because, yeah, I've definitely seen this go all sorts of ways over the years.
Speaker B:Our listeners are generally readers, and your book is pretty well annotated as well. Are there any other AI books you would recommend? You even break down at the beginning what this book isn't trying to be, and note that there are good books covering other aspects of AI. So, anything you would recommend for a reader?
Speaker D:So yeah, there are pretty interesting other books to jump into. Though I'll give one quick plug first: I also have a Substack called Weighty Thoughts where I go through a lot of these topics. Ethan Mollick's book, Co-Intelligence, is really interesting. There are a lot of other interesting books out there, but the field is moving fast. Even for this one, my publisher and I really pushed to get it out quickly, and fortunately there's nothing stale in it, but it is a field that's moving very quickly. My suggestion in the book was: now that you have a grounding, go out and find newsletters. In certain cases you might even skim the abstracts of some of the publications coming out. If you see a headline about something you find hard to believe, you can probably skim the abstract of the paper the news article is hyping and get a sense of it. Because the field is moving so fast, I'd really suggest doing that. Newsletters and the like are good mainly because they move faster and keep up more, and things are changing pretty much every month to some degree, so that's what I'd recommend if you want to stay on the cutting edge. Again, there are lots of really interesting, really good books out there. But part of the reason I ended up writing this one is that a lot of those books are on specific, interesting topics, and they don't give you the general background you need to dive in and keep up.
Speaker C:And I guess the book is only a few months old, but I really appreciated some of the discussion in there about future-proofing the book, how you deliberately cut some sections with hindsight in mind, and at points you note, well, this might get disproven in the next five years. So here's my caveat: my only criticism of the book is that I scanned the QR code at the end for the errata and updates, and there isn't anything there yet. Of course, the book has only been out for about three months, so there haven't been any yet, but I will be looking for them. I don't normally scan rogue QR codes, but I did for you, James.
Speaker D:Good stuff. Yeah, fortunately we were able to catch everything that would have made it sound dated before it went out. But I'm sure something will come up, and we'll have updates on that site.
Speaker B:It is funny that you mentioned Co-Intelligence, because we were actually deciding between that book and yours.
Speaker A:It's true. And we picked you.
Speaker D:It is a good book, as I said.
Speaker A:So you mentioned Weighty Thoughts. We'll put a link to that in our show notes, of course, along with a link to the book. Are there any other ways folks can follow you on social media or get in touch with you about the book?
Speaker D:I'm on X at James Wang. I'm on a lot of other platforms now too with a similar handle, but I'd say Substack is probably one of the easier ways of getting in touch. It's a little less noisy right now compared to everything else. But yeah, I look forward to folks reaching out with comments and everything else.
Speaker A:Well, thank you so much for coming on Business Books and Company, James. It was really a pleasure reading the book and talking to you about it.
Speaker D:This was wonderful. It was a great conversation and yeah, thanks so much.
Speaker A:All right, Kevin, David, is there anything you want to plug, and how can our listeners get in touch with you?
Speaker B:You can follow me on X at DavidGShort.
Speaker C:You can follow me on X at HudaksBasement, that's H U D A K S Basement.
Speaker A:And I'm on X at DaveKopec, D A V E K O P E C. I want to remind our listeners to follow us on your podcast player of choice, or if you're on YouTube, don't forget to hit subscribe. And don't forget to leave us a review too. Those reviews help other people find out about the show. We'll be back next month with another wonderful business book. See you then.
Although the field of Artificial Intelligence has been around since the 1950s, it's only in the last few years, with the rise of Large Language Models (LLMs), that the general public has truly taken notice. Apps based on LLMs, like ChatGPT, have already upended the labor market, creating productivity gains in some roles, while eliminating others. In James Wang's new book, What You Need to Know About AI, he walks us through not only how modern machine learning works, but also what one needs to know to stay competitive in an AI-inclusive world. The book provides a wide perspective including historical context, practical advice, recent developments, and enough technical explanation to give everyone a sense of the possibilities and limits of this exciting technology. Join us as we talk to James about the book and this transformative time for our economy.
Thank you to our friends at Audible for sponsoring this episode. Check out AudibleTrial.com/biz for a 30-day free trial of Audible and free credits toward an audiobook like Influence.
Show Notes
Follow us on X @BusinessBooksCo and join our Amazon book club.
Edited by Giacomo Guatteri
Find out more at https://businessbooksandco.com