The Tim Ferriss Show Transcripts: Books I’ve Loved — Steve Jurvetson (#404)

Please enjoy this transcript of another episode of the “Books I’ve Loved” series, in which I invite amazing past guests, close friends, and new faces to share their favorite books—the books that have influenced them, changed them, and transformed them for the better.

This episode’s guest is Steve Jurvetson (@FutureJurvetson), an early-stage venture capitalist with a focus on founder-led, mission-driven companies at the cutting edge of disruptive technology and new industry formation. Steve was the early VC investor in SpaceX, Tesla, Planet, Memphis Meats, Hotmail, and the deep learning companies Mythic and Nervana. He has led founding investments in five companies that went public in successful IPOs and several others that were acquired for a total of over $100 billion in value creation.

Before founding Future Ventures and DFJ before that, Steve was an R&D engineer at Hewlett Packard and worked in product marketing at Apple and NeXT, and management consulting with Bain & Company. He currently serves on the boards of Tesla, SpaceX, and D-Wave.

Transcripts may contain a few typos—with some episodes lasting 2+ hours, it’s difficult to catch some minor errors. Enjoy!

Listen to the episode on Apple Podcasts, Spotify, Overcast, Stitcher, Castbox, Google Podcasts, or on your favorite podcast platform. 


Tim Ferriss owns the copyright in and to all content in and transcripts of The Tim Ferriss Show podcast, with all rights reserved, as well as his right of publicity.

WHAT YOU’RE WELCOME TO DO: You are welcome to share the below transcript (up to 500 words but not more) in media articles (e.g., The New York Times, LA Times, The Guardian), on your personal website, in a non-commercial article or blog post (e.g., Medium), and/or on a personal social media account for non-commercial purposes, provided that you include attribution to “The Tim Ferriss Show” and link back to the URL. For the sake of clarity, media outlets with advertising models are permitted to use excerpts from the transcript per the above.

WHAT IS NOT ALLOWED: No one is authorized to copy any portion of the podcast content or use Tim Ferriss’ name, image or likeness for any commercial purpose or use, including without limitation inclusion in any books, e-books, book summaries or synopses, or on a commercial website or social media site (e.g., Facebook, Twitter, Instagram, etc.) that offers or promotes your or another’s products or services. For the sake of clarity, media outlets are permitted to use photos of Tim Ferriss from the media room on or (obviously) license photos of Tim Ferriss from Getty Images, etc.

Steve Jurvetson: Well hello, boys and girls. My name is Steve Jurvetson and I’m an early stage venture capitalist with a focus on founder-led, mission-driven companies at the cutting edge of disruptive technology and new industry formation.

I led founding investments in five companies that went public in successful IPOs and several others that were acquired for a total of over $100 billion in value creation. I currently serve on the boards of Tesla, SpaceX, and D-Wave, which is a quantum computing company.

Before founding Future Ventures, the venture firm I’m at now, and DFJ before it, I was an R&D engineer at Hewlett Packard and worked in product marketing at Apple and NeXT, and management consulting with Bain & Company. I was originally trained in electrical engineering going all the way through to a PhD, but not completing it.

So today I’ll present three books: the most gifted book by me—the one I’ve given to most people—the most influential book on me, and then the most important book for all, in my humble opinion.

Let me start with the book I’ve gifted more than any other. It is The Scientist in the Crib by Alison Gopnik. She’s a professor of developmental psychology at Berkeley. I basically give this book to any geek friend of mine who is about to have their first child, because it had such a wonderful influence on me.

It is not a parenting book, but nevertheless it kindles an awe of and attentiveness to the marvels of babies’ minds—especially in the pre-verbal years, when it might otherwise be difficult to connect. Some practical experiments come out of this once you understand that babies signal their interest in things by where they focus their gaze. This is the fundamental research tool used by Gopnik and others in the practice of their art. And that focus shifts over time as the brain develops and they face new developmental milestones.

So for example, at birth, much of the vision system is bootstrapping—everything from the color space to distance vision and, initially, edge detection, meaning seeing the edge of an object and its three-dimensional distance, if you will. This is how you distinguish foreground from background and make sense of the world in three dimensions. I noticed I could take advantage of this. At the hospital, when my son was one day old, I noticed that when I pushed his bassinet with the sleeping baby through a hospital hallway, his eyes would just pop open whenever I turned a certain corner, like clockwork. I looked up and saw a right angle in a long, bright line of fluorescent light. So I ran down the hallway and made a right turn. And sure enough, when I closed my own eyes and looked up, I could see that sharp edge of light through my own closed eyelids.

Aha! This was like food for the baby’s developing brain. It made him happy to open his eyes to this visual treat. That was the thing he was cognitively working on most at that time. And it made it great for me to show this to others. I could get him to open his eyes for visitors by repeating this trick for them. Again, it’d be sort of a joyful way to wake a sleeping baby.

Then later, when my daughter was first learning to speak but had not mastered all the sounds, I noticed her gaze would flip around to my mouth whenever I made a buh, buh, or puh, puh sound. B or P. Buh, puh. Imagine learning those for the first time. It is a very subtle difference in mouth position, and how else could we learn this but to watch someone else? So I then had many days of enjoyable phoneme practice, as I called it, with her as she came to master the elements of speech.

I think The Scientist in the Crib is fascinating, not just for the life in the crib, but for what it tells us about scientists as well. It is an inspiration for adult life. From what I can see, the best scientists and engineers nurture a childlike mind. They are playful, open-minded, and unrestrained by the inner voice of reason, collective cynicism, or fear of failure. Isaac Newton and Richard Feynman are famous examples of this.

I’ve come to celebrate the childlike mind, as I call it. And here is one of Alison Gopnik’s key conclusions from her book. And this is a direct quote. “Babies are just plain smarter than we are. At least if being smart means being able to learn something new. They think, draw conclusions, make predictions, look for explanations, and even do experiments. In fact, scientists are successful precisely because they emulate what children do naturally.”

At a recent talk I heard at The Long Now Foundation, Alison Gopnik went further to say that three- and four-year-olds do causal inference better than the best scientists we know. It’s kind of fascinating. So what is this? Well, much of the human brain’s power derives from its massive synaptic interconnectivity—the connections between neurons.

Geoffrey West at the Santa Fe Institute observed that across species, synapses per neuron—meaning how many connections each neuron has to its neighbors—scales as a power law with brain mass. In other words, this is something that is endemic to larger and larger brains in the evolutionary landscape.

At the age of two to three years old—so when your baby has become a young child—they hit their peak, with 10 times as many synapses as we have as adults, literally 10 times as many interconnects as we do. And twice the total energy burn of an adult brain.

Well, it’s all downhill from there. The UCSF Memory and Aging Center has tracked cognitive ability with age. For example, they have a delayed free recall test. Quite simply, you’re read 16 words, and after some time has passed, you’re tested on how many you can recall unprompted. From the teen years to our mid-30s, we all remember about 12 of the 16 words. It’s pretty much a flat line on the graph. But then, in our mid-30s, the line shifts to a different straight line that declines until the end of life—but at the same slope. In other words, the pace of cognitive decline is the same in our 40s as in our 60s, 70s, and 80s. We just notice more accumulated decline as we get older, especially as we cross the threshold of forgetting most of what we try to remember, in our late 70s to early 80s, per that graph.

But we can affect this progression. That graph looks at the past. Professor Merzenich at UCSF has also found that neuroplasticity does not disappear in adults. It just requires mental exercise—the old adage of use it or lose it. So the bottom line for adult learning is that we should embrace lifelong learning. We should do something new. Physical exercise is repetitive; mental exercise is eclectic.

And that brings us to the next book, an eclectic romp by Kevin Kelly, the founding editor of WIRED magazine, and it’s a book called Out of Control. Now this is the most influential book on me, and it has guided many of my investment theses over the last 20 years in technology development. It is basically a book that covers the dawn of the age of biology as the next phase of major technology vectors, coming out of an age of physics, if you will. These biological metaphors are rife throughout information technology, and Kevin Kelly very expertly explores the integration of these domains.

So the interesting thing is that this book was written in 1994, and it may have been 20 years ahead of its time. It was recently translated into Mandarin, and it is currently a bestseller in China, as if it were written today—as if it went into a time capsule and only now is hitting powerfully on our shores.

So as an introduction to the power of evolutionary algorithms and information networks inspired by biology, Kevin Kelly basically explores the underlying principles of complexity theory at the Santa Fe Institute: the properties of emergence and self-organization—what some would call the wisdom of crowds, when many people behaving as a team outperform what they would achieve as individuals—or the hive mind, how the social insects do what they do. It motivates the benefits of exploring biomimicry—basically learning from biology—especially in our information systems like neural networks, what we now call deep learning or machine learning, which are basically recapitulating, in silicon, the evolutionary and fetal development of our cognition.

So when you train these artificial neural networks, the layers basically form much as they do in a fetus, going back to Alison Gopnik. It starts with edge detection, then symmetry subsystems, eventually building up to facial recognition and then identifying individual faces. These are different layers in the neural nets that form in that consecutive order, just as in our infants.

So basically if you look at where Moore’s law is taking us and where computation is taking us, we’re now at the cutting edge of computational capture in biology. We are actively re-engineering the information systems of biology and creating synthetic microbes whose DNA is manufactured from bare computer code and an organic chemistry printer.

The challenge we face in many of these synthetic biology domains is a question of what to build. So far we’ve largely just copied large tracts of code from nature. But the question then spans across all the complex systems we might want to build—from cities to designer microbes to computer intelligence—as all these systems transcend human comprehension. Basically, as we try to design more than we can comprehend, more than we can understand, we will shift from traditional engineering to evolutionary algorithms and iterative learning algorithms like deep learning and machine learning.

As we shift this engineering to the training of these iterative algorithms, the locus of learning shifts from the artifacts themselves to the process that created them. There is no mathematical shortcut for decomposing or reverse engineering a neural network or a genetic program. There’s no way to reverse evolve with the same ease that we can reverse engineer the artifacts of purposeful design.

The beauty of these compounding iterative algorithms—by which I mean evolution, fractals, organic growth, art—derives from their computational irreducibility. No mathematical shortcuts. It empowers us to design complex systems that exceed human understanding. In short, we are re-engineering engineering itself. It starts to look more like parenting than programming.

That brings us to The Age of Spiritual Machines by Ray Kurzweil, the inventor and futurist. I think this might be the most important book—and, perhaps more shockingly, I would say there is a single graph in the book that by itself makes it the most important book one could read: the graph of the 120-year version of Moore’s law. So let me explain what I mean by this. I should also mention at the start that it’s really just the first few chapters of this book I’d recommend—not the entire book, which looks into the distant, distant future, like the next hundred years—but really just the background, the historical section. Then let yourself draw your own conclusions.

Basically this book introduces the best abstraction of Moore’s law that I’ve seen out there. One that is understandable, meaningful, even cosmological, and has predictive power. So it is, I think, essential for tech futurism—predicting where we’re heading—as well as business planning. As most businesses become technology businesses, understanding how to predict our future becomes all the more important.

The popular perception of Moore’s law—again, from Gordon Moore of Intel, who predicted computing power getting better and better—is the sense that computer chips are compounding in their complexity at a near-constant unit cost. So it’s a sort of bang-for-the-buck representation. But this is just one of the many abstractions of Moore’s law. People have all kinds of different ways of defining it; you get different answers from different people. But it relates to the compounding of transistor density in two dimensions.

Other renditions of this Moore’s law just relate to speed, like: how many megahertz or gigahertz do we have in our chips? That was from the early days when people didn’t really know what they were talking about. It makes sense that as you miniaturize a chip, the distance traveled by any given signal is less; everything runs faster. Whereas some people refer to computational power, which is basically speed times density, because both benefits accrue as you miniaturize.

So for a long time this was thought to be very specific to Intel. But unless you work for a chip company like Intel and focus on fab yield optimization, you don’t really care about transistor counts. Nobody goes out and buys a million transistors—“Give me a billion transistors.” That makes no sense, right? Integrated circuit customers don’t buy that. They are basically consumers of technology, and they buy computational speed and data storage. That’s what we care about. And quite simply, Ray Kurzweil in his book plots the calculations per second—computational power, how many calcs per second—that you can buy for a constant dollar, adjusting for inflation over a long period of time.

Ray Kurzweil’s abstraction of Moore’s law shows that computational power has followed a smooth, exponential curve for over 120 years. Basically, since the beginning of data on any kind of computer, it is a straight line on semi-log paper: years along the x-axis and a logarithmic scale of computational power per dollar on the y-axis. And it shows a geometrically compounding curve of progress.

When cast in these terms, Moore’s law is no longer transistor-centric, and this abstraction allows for longer-term analysis. In other words, it’s not specific to Intel. What Gordon Moore, the person, observed in the belly of the early integrated circuit industry was a derivative metric—a refracted signal from a longer-term trend. A trend that begs various philosophical questions and predicts mind-bending futures, spanning five paradigm shifts, from electromechanical calculators and vacuum tube computers onward: the computational power that a dollar buys has doubled every 18 months for 120 years.
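To make the compounding concrete, here is a minimal arithmetic sketch of the claim above (my illustration, not a figure from the book): one doubling every 18 months over 120 years works out to 80 doublings.

```python
# Compounding of the 120-year Moore's law as stated: calculations per
# second per constant dollar double every 18 months.
years = 120
doubling_period_years = 1.5

doublings = years / doubling_period_years  # 120 / 1.5 = 80 doublings
total_factor = 2 ** doublings              # roughly 1.2e24-fold improvement

print(f"{doublings:.0f} doublings over {years} years "
      f"-> a factor of {total_factor:.2e} in calcs/sec per dollar")
```

Eighty doublings is a factor of roughly 10^24, which is why a price-performance plot of this trend only makes sense on a logarithmic axis.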

Every dot on this curve is basically on the frontier of computational price performance of the day. One machine was used in the 1890 census. One cracked the Nazi Enigma cipher in World War II, if you saw the movie The Imitation Game. One predicted Eisenhower’s win in the 1956 presidential election.

I’ve been updating this graph since basically the time of the book, which was a while back, and have found—over the last 10 to 20 years that I’ve added to this curve—that the latest CPUs, and specifically Nvidia GPUs (the graphics chips), carry this same precise curve of progress forward to the present day. That extends Kurzweil’s analysis, I think, 20 years past when he stopped the curve.

So every dot, every machine on this curve, represents a human drama. Prior to Moore’s law, which was first formulated in 1965, none of the people behind those dots even knew they were on a predictive curve, right? It wasn’t until Gordon Moore came up with Moore’s law that we would have thought to even plot such a thing. And every dot represents an attempt to build the best computer with the best tools of the day. Of course, we also use these computers to make better design software and better manufacturing control algorithms, and so progress continues.

But notice that the pace of innovation—a straight line, imagine that, for 120 years—is exogenous to the economy. Think about how long this has held true. The Great Depression, World War I, World War II, and various recessions have not introduced any meaningful change in the long-term trajectory of Moore’s law. Certainly the adoption rates, revenue, profits, and economic fates of the individual computer companies behind the various dots may go through wild oscillations, but the long-term trend emerges nevertheless.

Any one technology, such as the CMOS transistor—the current technology du jour—follows an elongated S-curve: slow progress during initial development, rapid progress during the adoption phase, and then slower growth from market saturation over time. But a more generalized capability, such as computation, which isn’t tied to one thing—or storage more generally, or bandwidth more generally—tends to follow a pure exponential, bridging across a variety of different technologies in a cascade of S-curves.
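One way to see how a cascade of saturating S-curves can add up to a smooth exponential is to stack a few logistic curves, each successor arriving later with a roughly tenfold higher ceiling. The curve shapes and numbers below are purely illustrative assumptions on my part, not data from the talk:

```python
import math

def logistic(t, midpoint, ceiling):
    """A single technology's S-curve: slow start, rapid adoption, saturation."""
    return ceiling / (1.0 + math.exp(-(t - midpoint)))

def capability(t):
    """Five successive technologies, each arriving 10 time units after the
    last with a 10x higher ceiling; their sum tracks an exponential envelope."""
    return sum(logistic(t, midpoint=10 * i, ceiling=10.0 ** i)
               for i in range(1, 6))

# Sampling every 10 time units shows roughly constant multiplicative growth,
# even though each individual technology saturates.
for t in (10, 20, 30, 40):
    print(t, round(capability(t), 1))
```

Each sample is about ten times the previous one: the aggregate capability keeps compounding even as every component technology flattens out.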


Well, in the modern era of accelerating change in the tech industry, it’s hard to even find a five-year trend with any predictive value yet, let alone a trend that spans centuries. I would go further and assert, as I did, that this is the most important graph ever conceived. So why? Why do I think it’s the most important graph in history?

Well, a large and growing set of industries depends on continued exponential cost declines in computational power and storage density. Moore’s law drives electronics, communications, and computers, and it has become a primary driver in drug discovery, biotech, bioinformatics, medical imaging, and diagnostics. As Moore’s law crosses critical thresholds, a former lab science of trial-and-error experimentation becomes a simulation science, and the pace of progress accelerates dramatically. It becomes an information business, creating opportunities for new entrants and new industries. This is why, as a venture capitalist, I love it.

Basically, think of an example: Boeing building aircraft. They used to rely on wind tunnels to test model aircraft design performance. Ever since CFD modeling became powerful enough to simulate this, design has moved to the rapid pace of iterative simulations, and the nearby wind tunnels at NASA Ames and around the country lie fallow. They haven’t been used for aircraft design since the Boeing 777. The engineer can iterate at a rapid rate while simply sitting at their desk.

Now every industry on our planet is going to become an information business. I think that’s an important statement. Every industry. Consider agriculture. If you ask a farmer 20 years in the future how they compete, it will depend on how they use information—from satellite imagery driving robotic field optimization to the code in their seeds, meaning the genetic code. It will have nothing to do with workmanship or labor—the historical basis of competition in agriculture, perhaps, along with a breeding line. It will become, eventually, an information business. And that will eventually percolate through every industry as information technology innervates the economy, giving it a nervous system.

So here’s an interesting thing about Moore’s law: nonlinearities in a marketplace are also essential for entrepreneurship and meaningful change. Technology’s exponential pace of progress has been the primary juggernaut of perpetual market disruption, spawning wave after wave of opportunities for new companies. Without disruption, entrepreneurs would not exist.

Moore’s law is not just exogenous to the economy. It is why we have economic growth and an accelerating pace of progress. At Future Ventures, my venture firm, we see this in the growing diversity and global impact of the entrepreneurial ideas we encounter each year. The industries impacted by the current wave of tech entrepreneurs are more diverse—and an order of magnitude larger—than those of the ’90s. Today we’re looking at everything from automobiles to aerospace to agriculture and energy.

So now one might ask the question—as I said, it’s almost cosmological—“Why?” Why would this trend hold for 120 years? It has nothing to do with the semiconductor industry per se, nothing to do with what we were first told by Intel and others—that this was something very unique and tightly coupled to how we make integrated circuits. Why, more generally, does progress perpetually accelerate for humanity? That’s a really important question, by the way. It wasn’t obvious to people back in the pre-agricultural period, when bearded prophets could only forecast doom or the occasional flood or natural disaster wiping out humanity. That was the perception of the world: basically struggle through, and occasionally get wiped out by calamity.

We now understand—and hopefully those in technology fully understand—that we are in a state of perpetual progress. We keep getting better culturally and evolutionarily, in the way we live our lives, in our overall happiness, and in reducing human suffering within our circle of empathy. We just keep making progress. How could that be? Why is that?

Well, here’s one simple possible explanation, coming back to Moore’s law as one canonical example. Why does this 120-year version of Moore’s law perpetuate? Well, consider that all new technologies are combinations of technologies that already exist—recombinations of prior ideas. Innovation does not occur in a vacuum. It is a combination of ideas from before, like standing on the shoulders of giants. In any academic field, the advances of today are built on a large edifice of history. This is why major innovations tend to be ripe and tend to be discovered at nearly the same time by multiple people. Think of Edison, Tesla, and Marconi, all arriving at major new innovations within months of each other.

The compounding of ideas is the foundation of progress—something that was not so evident to the casual observer before the age of science. Science tuned the process parameters for innovation. It basically became the best method for a culture to learn, and the scientific method, I would still assert, has been the greatest advance in human history in how we accumulate knowledge and make progress over time—versus personal beliefs, just saying, “I think something’s the case,” and having that be as valid as any other person’s thoughts.

So from this conceptual basis come the origins of economic growth and accelerating technological change. Think of it as the combinatorial explosion of possible idea pairings, which grows exponentially as new ideas come into the mix. It grows on the order of two to the nth power of possible subgroupings, by something called Reed’s law—R-E-E-D, if you want to look it up on Wikipedia. It basically explains the innovative power of urbanization and networked globalization.
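Reed’s law can be stated compactly: among n participants there are 2^n − n − 1 possible subgroups of two or more members, which dwarfs the pairwise connection count of Metcalfe’s law. A quick sketch of the comparison (my illustration, not from the talk):

```python
def reed(n):
    """Reed's law: possible group-forming subsets (size >= 2) of n members.

    All 2**n subsets, minus the n singletons and the one empty set.
    """
    return 2 ** n - n - 1

def metcalfe(n):
    """Metcalfe's law, for comparison: possible pairwise connections."""
    return n * (n - 1) // 2

# Group-forming value explodes far faster than pairwise value.
for n in (10, 20, 30):
    print(n, metcalfe(n), reed(n))
```

With just 30 participants there are 435 possible pairs but over a billion possible subgroups—the combinatorial explosion of idea pairings described above.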

It explains why interdisciplinary ideas are so much more powerfully disruptive than those that come from the warmth of the herd. It’s like the differential immunity of epidemiology, where islands of cognitive isolation—think of academic disciplines with their own boundaries and vernacular—are vulnerable to disruptive memes hopping across, much like South America was to smallpox from Cortés and the conquistadors.

If disruption is what you seek, cognitive island-hopping is a good place to start, mining the interstices between academic disciplines. And it is this combinatorial explosion of possible innovation pairings that creates economic growth, and it’s about to go into overdrive.

In recent years, we’ve begun to see the global innovation effects of a new factor: the internet. People can exchange ideas like never before. Long ago, people were not communicating across continents frequently. Ideas were partitioned, and so the success of nations and regions pivoted on their own innovations.

Richard Dawkins states that in biology, it is genes that really matter and we as people are just vessels for the conveyance of genes. It’s the same idea with memes—memes meaning ideas. We are the vessels that hold and communicate ideas, and now a pool of ideas percolates on a global basis more rapidly than ever before. So I think we’re going to be entering a period of innovation like never before.

In the next four to five years, three billion new minds will come online for the first time to join this global conversation—via inexpensive smartphones in the developing world connecting to satellite links from Starlink, the new product from SpaceX, and perhaps others. These people are not currently coupled to the global economy in any meaningful way, other than as customers of Unilever or Procter & Gamble. They are out there doing subsistence farming, not communicating and not contributing the ideas they might contribute to the global conversation. This rapid influx of three billion people into the global economy is unprecedented in human history, and so too will be the pace of idea pairings and progress.

We live in interesting times at the cusp of the frontiers of the unknown and breathtaking advances, but it should always feel that way, engendering a perpetual sense of future shock. Thank you.

The Tim Ferriss Show is one of the most popular podcasts in the world with over 400 million downloads. It has been selected for "Best of Apple Podcasts" three times, it is often the #1 interview podcast across all of Apple Podcasts, and it's been ranked #1 out of 400,000+ podcasts on many occasions. To listen to any of the past episodes for free, check out this page.
