
Jacob Kimmel - Pioneering a predictive biology revolution to reverse aging

Learn more about the $1.6B startup backed by top VCs and Eli Lilly working to create epigenetic medicines to treat the diseases of aging.

Dr. Jacob Kimmel is the President of NewLimit, a biotechnology company developing reprogramming medicines to treat age-related disease and extend human healthspan. Dr. Kimmel co-founded the company with Blake Byers and Brian Armstrong, and has since raised over $300M from top venture capital funds like Kleiner Perkins, Dimension, Founders Fund, and Khosla Ventures. They expect to initiate their first clinical trial in a few years.

Dr. Kimmel has garnered a reputation as one of the brightest minds in longevity and the intersection between artificial intelligence and molecular biology, and NewLimit is often cited as one of the most important companies building in longevity today.

Watch on YouTube. Listen on Spotify or Apple Podcasts.

Timestamps

0:00 Intro

2:40 What is aging and why do we age

8:04 Epigenome as most validated layer for molecular regulation of aging

10:08 Germline rejuvenation (how nature makes young organisms from old ones)

14:40 Creating synthetic transcription factors

17:24 Where the information for youthful function is stored

18:55 DNA damage as an overrated driver of aging

23:46 NewLimit’s therapeutic programs & the Predictive Biology revolution

33:30 Scaling laws in biology and the early signs of a race towards biology’s own ChatGPT moment

41:12 Bottlenecks to inflecting Eroom's Law

46:45 Drug delivery

51:03 Aging’s common mechanisms

58:22 Advice on studying biology

1:01:44 The topics Jacob finds most confusing

1:03:17 An idea in biology that most get wrong

1:05:31 What tools are we lacking in biology today?

1:08:36 Organ replacement

1:11:23 How to think about lifespan extension

1:14:12 Who is doing the most important work in radical life extension

1:16:00 Lightning round

Transcript

00:00 Intro

Dr. Jacob Kimmel 00:00:00

We know that just by reprogramming the epigenome, you can take an aged cell from an organism on death’s door, reset it back to a young stem-like state. From that young stem cell, you are able to generate young, functional somatic cells by redifferentiating them. You can even take that young cell that originated from that old animal and generate a whole new young animal with a normal lifespan ahead of it.

This suggests that many of the declines in function within cells and tissues with age can be reverted through the mechanism of epigenetic reprogramming. The type of medicine we imagine building would be an mRNA medicine. You can deliver transcripts for these transcription factors to individual cell types within the body. Those transcripts would then make transcription factors within the cell. These would remodel the epigenome and ideally restore youthful function in these aged cells, in a way that keeps patients healthy longer or treats an age-related disease they’ve already developed.

We have many reasons to believe that it’s possible. The skip-to-the-punchline answer is we have found hundreds of these combinations here at NewLimit that have these effects today. The wonderful thing about mRNA technology is that it’s really cheap to manufacture. You can get these products down to something like $3 a dose. This means it is a technique that would allow us to build these medicines for everyone, not just a very select group of people.

There will be a race to generate these data sets. I do not think many people have recognized this or started on the race in a serious way yet. The number of people who are talking about building models like this is much larger than the number of people who are seriously trying to generate the right kind of data.

Daniel 00:01:28

Welcome to the Free Radicals podcast, where we interview the scientists and builders working to dramatically extend human lifespan and bring about a sci-fi future where humanity has full control over biology. Today’s guest is Dr. Jacob Kimmel, President of NewLimit, a biotechnology company developing reprogramming medicines to treat age-related disease and extend human healthspan.

Dr. Kimmel co-founded the company with Blake Byers and Brian Armstrong. NewLimit has since raised over $200 million from top venture capital funds like Kleiner Perkins, Dimension, Founders Fund, and Khosla Ventures. They expect to initiate their first clinical trial in a few years.

Dr. Kimmel has garnered a reputation as one of the brightest minds in longevity and at the intersection between artificial intelligence and molecular biology. NewLimit is often cited as one of the most important companies building in longevity today. You will see why in this conversation.

I am your host, Daniel Shur, and my co-host is Eric Dai. I studied physics and neuroscience, but have since had a career in business strategy within tech, while maintaining a passion for biology and longevity. My co-host, Eric, is a bioengineer, venture capitalist, and biotech founder. I hope you enjoy this conversation.

02:40 What is aging and why do we age

Eric 00:02:40

Jacob, what is aging, and why do we age?

Dr. Jacob Kimmel 00:02:44

Small questions only, apparently, here on the Free Radicals podcast.

Aging is a loss of function in cells and tissues over time. Many molecular changes occur in cells and tissues as a function of chronological age—how long you have been on Earth past adolescence. The easiest way to define what we care about when we describe aging is that loss of function itself.

That raises the question: why? As you asked, if we zoom way out, the shortest version of the answer is the second law of thermodynamics. All systems eventually tend toward entropy. Humans, as a biological system, are a very complex set of chemistries in constant interplay.

Absent very careful regulation to keep the system at a homeostatic state, it starts to decline in its various functions over time. During primate evolution, there was some amount of selection for humans and other animals to be around for a period long enough to optimize for reproduction. However, the number of individuals who lived many decades to create selective pressure for much longer lifespans was small. Therefore, our longevity is not necessarily optimized.

You then revert back to this default explanation: complex systems will fall apart absent very ornate regulation. We do not necessarily have reason to believe that ornate regulation was evolved in our later decades. We age by default; we lose function as a result of the second law of thermodynamics.

Eric 00:04:18

You have talked in the past about the high baseline hazard ratio of our evolutionary predecessors and how this high hazard rate may have limited gradient signal flowing into the genome to select against aging. How does this relate to the theory of aging as a function of antagonistic pleiotropy?

Dr. Jacob Kimmel 00:04:37

The argument you are alluding to is that because the likelihood of dying on any given day for most of primate evolution was very high, the number of primate individuals—be they humans or proto-humans—who made it into their later decades was very small.

If you think about evolution as trying to optimize the genome for maximal fitness, the number of individuals who made it to their later decades and then provided selective pressure for alleles that engendered longer lifespans or healthspans was small. This general argument then also plays into antagonistic pleiotropy. A capsule version of the concept is that there might be alleles that provide benefit early in life, but are detrimental later in life.

If there is very little selective pressure from older individuals in the population, the positive selective pressure for that early benefit will strongly dominate selection for those alleles. There will be very little negative selective pressure pushing back against the selection of that allele, and its installation in the population writ large.

We do not have many great synoptic examples of antagonistic pleiotropy in humans. There are not many genes where it is obvious you want them early and then do not want them later. At a conceptual level, to give folks an idea of what sorts of phenomena might result here, consider hyperinflammatory activity of our immune systems later in life. This might be beneficial early on if, on the margin, in the time of primate evolution, you were better off being a little hyperinflammatory and more reactive to various infections. The downsides that occur later in life would provide potentially less selective pressure against it.

These are the sorts of phenomena one can imagine, and it comes down to these population size dynamic arguments. Our evolutionary history optimized our genome to be maximally fit at the time when most people were alive during primate evolution. Everything else simply got less selective pressure and is probably an ancillary concern.

Daniel 00:06:34

You are describing aging as arising from a tendency toward entropy, something that simply was not solved for by evolution. Do you subscribe to any integrated theory of aging, any single pathway or process that we could identify that would explain all of the specific kinds of damage and disease that we see?

Dr. Jacob Kimmel 00:06:56

Short answer: No, I do not think there is much evidence for some unified programmed theory of aging. These ideas are very seductive, and there are many people who want to believe they are true. If you believed there was some sort of unified theory of aging, some individual gene or molecular species causing all the problems, the answer would be pretty darn easy. All you would have to do is find that molecular species, find that gene, change its function, and then we would have some big unlock, and it would all be one parsimonious solution.

Unfortunately, biology and reality in general are much messier than that. The reason for our aging is, in my view, likely to be something more akin to increasing entropy over time and the breakdown of most complex systems. You will find that, depending on where you look in the body, which cell or tissue, the ways our cells and tissues age are pretty different. I believe there are layers of molecular regulation that you can intervene on that have really dramatic beneficial effects. We work on one here at New Limit in the epigenome. However, I do not think there is just a single upstream cause you can point to, saying, “It’s all bad Gen X, and we should go after that.”

Unfortunately, biology and reality in general are much messier than that. The reason for our aging is, in my view, likely to be something more akin to increasing entropy over time and the breakdown of most complex systems. You will find that, depending on where you look in the body, which cell or tissue, the ways our cells and tissues age are pretty different. I believe there are layers of molecular regulation that you can intervene on that have really dramatic beneficial effects. We work on one here at NewLimit in the epigenome. However, I do not think there is just a single upstream cause you can point to, saying, “It’s all gene X, and we should go after that.”

08:04 Epigenome as most validated layer for molecular regulation of aging

Daniel 00:08:06

Regarding the epigenome, do you believe the epigenome is the most validated example of a universal layer for the molecular regulation of aging, or do you think there are other potential layers that should be considered?

Dr. Jacob Kimmel 00:08:19

The epigenome and epigenetic reprogramming in general have thus far shown the largest effect sizes from interventions. To give you an example: by reprogramming the epigenome, you can take an aged cell from an organism on death’s door, reset it back to a young, stem-like state. From that young stem cell, you are able to generate young, functional somatic cells by redifferentiating them. You can even take that young cell that originated from that old animal and generate a whole new young animal with a normal lifespan ahead of it. You can even repeat that process 13 times on a loop, as one group did in Japan.

This suggests that many of the declines in function within cells and tissues with age can be reverted through the mechanism of epigenetic reprogramming. That is not to say it is totally explanatory. Many other things occur, and there are other layers of molecular regulation that also change with age and are important. However, it does tell us that intervening on the epigenome alone is sufficient to provide benefit to many potential pathologies.

There are other layers of regulation that also have some evidence for positive interventions, but the effect sizes are much, much smaller. Prominent examples include metabolic or dietary interventions. You can change the way animals are metabolizing, either by altering the amount or the content of their food sources, and get some amount of health or lifespan benefit. But in general, those effect sizes tend to be pretty modest. We are talking about a 10% bump in the median lifespan of one particular mouse strain in a laboratory. If you change to another mouse strain, maybe it is a 2% bump. Which of those better reflects humans? I do not know. But I do know these are relatively small effects on the margin when you compare it to something like epigenetic reprogramming, where you can take an aged cell and restore it to the dramatic amount of youthful function that it had many decades previously.

10:08 Germline rejuvenation (how nature makes young organisms from old ones)

Eric 00:10:09

There are a few discoveries that seem foundational to the thesis at NewLimit. One of them, from our perspective, is this concept of germline rejuvenation. This is something that happens hundreds of thousands of times every day. It is the idea that when two mammals procreate, their egg and sperm cells combine to form a zygote, and that zygote effectively has its epigenetic age reset to zero, regardless of the age of the original gametes. Tell us more about your views on germline rejuvenation.

Dr. Jacob Kimmel 00:10:46

Germline rejuvenation, as you highlighted, is a really interesting example where there is a natural epigenetic reprogramming event. To step back before I start talking about the names of specific genes, and to give listeners an idea of how dramatic this is: it seems prima facie obvious, but regardless of the age of two parents, the child’s age always starts at zero. This suggests that one of two things must be going on. Either the germline is magical and special and experiences no aging, and regardless of age, your germ cells remain at age zero, roughly when you were born, or you are able to reverse some features of aging that occur in the germline.

There was a theory of aging – this was more of a speculative hypothesis – that the germline was protected. In the past, this was called the “disposable soma theory,” the idea being that everything else in the body does not matter, and evolution is only optimized to protect your germline. I think there is a lot of evidence today to suggest that is not true. You can find many changes that occur with age within germ cells, both at the level of their epigenome and other molecular features.

Rather, the bulk of the evidence suggests that when a sperm and an egg fuse to form a zygote, there is dramatic remodeling and epigenetic reprogramming that reverses some of the features of aging that might arise in either of those gametes individually.

There are several ways this works. Certain enzymes are expressed after that fertilization event that directly go around and erase epigenetic marks. There are TET enzymes that remove DNA methylation marks on the genome. There are other, less direct mechanisms, such as blocking some of the molecules that write epigenetic marks. Then, as the cell divides and the embryo begins to grow, some of the parental epigenetic marks are diluted out. Through that process, followed by a series of transcription factor cascades that open and close certain parts of the genome during development, the epigenome is rewritten from its state in the two gametes to a new blank slate that exists within the young embryo as it forms, and eventually the young human that results afterwards. While this may not be exactly the same set of molecules involved during reprogramming, it is an existence proof that you are able to, by remodeling the epigenome within two cells that fuse to form that eventual zygote, restore youthful function even if the gametes themselves experience aging a priori.

Daniel 00:13:14

Do we know the difference between the process of induced pluripotency via the Yamanaka factors and germline rejuvenation? Or is the latter more of a mystery, where we do not know the signaling that is happening there in sufficient depth to compare?

Dr. Jacob Kimmel 00:13:29

In short order, there are many differences. Do we know all of them? Probably not. Do we know enough of them to say in a strong form that these are related, but not exactly the same thing? Also, in a strong form, yes.

To give you some intuition for it, the iPSC reprogramming work done using Yamanaka factors turns on just four transcription factors to turn any adult aged cell back into a young, stem-like state. You’re just turning on four transcription factors. The remarkable thing is that even just that simple intervention is sufficient to perform this dramatic reset.

What seems to be occurring during germline rejuvenation, this reprogramming event that occurs during fertilization, as you do the maternal to zygotic transition, involves very different molecules and different cascades of transcription factors. The Yamanaka factors play some role, but they don’t explain the whole story. You can then think about how that affects the remodeling process that’s occurring. In Yamanaka factor reprogramming, a decent amount of that results from replication and dilution, similar to later in development. There are lots of differences. We know the names of many of them, but it’s not a succinct three-bullet point list.

14:40 Creating synthetic transcription factors

Daniel 00:14:41

We know there are transcription factors that can turn a cell back into a stem cell. That’s the Yamanaka factors. We’ve also seen there are ways to turn one cell type directly into another type.

You previously mentioned there’s a concern that there’s no reason to believe that there exists a natural set of transcription factors that can turn an aged cell back into a young cell without changing the identity. That’s effectively what you’re trying to find at NewLimit for every single cell type eventually. What do you think is the likelihood that, in order to do these therapies, we’ll eventually need to find artificially engineered transcription factors rather than naturally occurring ones?

Dr. Jacob Kimmel 00:15:22

I’ll edit slightly what you highlighted from some of my previous comments. I’ve probably said something like, there is no reason to believe that natural groups of transcription factors are optimal for restoring both youthful function while not reprogramming cell identity. I think we have a lot of reasons to believe that it’s possible.

To skip to the punchline, we’ve found hundreds of these combinations here at NewLimit that have these effects today. Even absent our own data, the arguments that it should exist come from the fact that we know you can decouple reprogramming cell type and cell age in the other direction. You can reprogram an old cell into an old version of a different cell type, preserving the age while changing the type. That’s been known for many years now from folks like Marius Wernig at Stanford and Fred Gage at the Salk Institute. You can reprogram an old skin cell to an old neuron and get a model of an aged neuron in a dish. That’s an interesting set of experiments. Given that you can decouple this cell age and type reprogramming in one direction, it stands to reason that you are probably capable of decoupling them in the other. I think we’ve now provided a decent amount of evidence that it’s possible.

In terms of how likely it is that we need synthetic transcription factors, I’d argue it’s very unlikely we need them. We have many payloads today entirely composed of natural transcription factors that have dramatic effects on cell age and eventually cell function.

I do think it’s plausible that in the future, many decades from now, reprogramming medicines will eventually include synthetic transcription factors as a way of evoking these phenotypes more efficiently. As you alluded to, there’s no reason to believe that the basis set of transcription factors learned by evolution to optimize for development is necessarily the perfect set to reverse programs of aging, because that doesn’t occur in natural physiology. While these factors could be sufficient, and we’ve found many examples, you could plausibly get away with a smaller number or even evoke that process more effectively using something synthetic. I don’t think it will be necessary, but I think in a few decades, it’s likely that we will be using molecules like this.

17:24 Where the information for youthful function is stored

Daniel 00:17:27

One other thing I want to ask you about: if you think about the process by which the Yamanaka factors can take an aged cell and turn it back into a stem cell, it’s pretty absurd. Especially if we think about the process of aging as increasing entropy. You would think information would be lost, yet somehow, all the information to create every single tissue of a healthy organism is still there. What’s your intuition for where that information is stored?

Dr. Jacob Kimmel 00:17:56

This is a relatively simple answer: the genome. If you think about it, every time two gametes fuse, the entire body plan of a human organism and the diverse functions of hundreds of cell types and all of their interplay are evoked from roughly 3 billion base pairs of a particular code. It’s a highly compressed, highly nonlinear code itself. I imagine that’s what epigenetic reprogramming allows us to do: unlock some of the information stored in your genome about what the youthful epigenetic landscape should look like. We’re using transcription factors to query the genome’s notion of what that epigenetic landscape should be and then reinstalling or reapplying it in these previously aged cells.

I do think there is a notion of information loss over time in the epigenome, but that’s lost at the epigenetic layer. It’s not necessarily lost within the genome itself. That’s basically the explanation for how reproduction works: how we’re able to form new humans just from two copies of this genome that are fused in an otherwise epigenetically blank slate.

18:55 DNA damage as an overrated driver of aging

Daniel 00:18:57

What about DNA damage that happens over time as a result of aging?

Dr. Jacob Kimmel 00:19:03

I think DNA damage probably plays some role. It gets a lot of attention because it is very intuitive. Obviously, if you have mutations, that could break the DNA code and make a cell dysfunctional. Most people recognize that the most dramatic result of DNA damage is probably cancer. I think DNA damage and its accumulation are very clearly causal for the development of cancers.

I do not think there is great evidence that most other phenotypes of aging, outside of cancers, are caused by DNA damage accumulation. There are several lines of argument that support this view. One would be that a meaningful fraction of all somatic mutations in an organism arise during development, which makes perfect sense when you consider that most mutations arise when you copy all of the DNA and make a mistake. Most cell divisions occur when you are building your body for the first time, not necessarily having to maintain it over time. The number varies depending on the tissue compartment, but something like 20 to 40% of your somatic mutations arise during development. The idea that a simple 2x on that basis is responsible for all of aging seems implausible to me. You would have to posit some very weird non-linear mutational dynamics for that to be true.

Another version of this argument, which I think is even clearer, is that there are humans who have natural variants of DNA polymerase that are more prone to mutation accumulation. We can generate animals that have versions of this as well. What you find is that both these humans and animal models are very susceptible to developing cancers. The patients themselves usually do not live very long because they get cancers quite early in life, often in their 40s or 50s, rather than their 70s or 80s. They do not have other phenotypes you and I would associate with aging. It is not that all of their metabolism is pathological and their skin sags. It seems to be concentrated on cancers. The same is true of many of these animal models.

I think both arguments suggest that if mutations explained the vast majority of the variation, it is inconsistent with those data. I think mutations probably matter, and if you are talking about the limit case of living to thousands of years old or something like this, it will start to become a problem. But when we think about the types of interventions we want to make in the near term to add years of health to each of our lives, I think it is unlikely we are limited just by mutation accumulation. That is a problem for a century’s worth of progress from now, rather than what we need to tackle today.

Daniel 00:21:27

On the same topic, people often think DNA damage is a central driver of aging. When they consider aging biology, at least in the mainstream, they think about telomeres and the Hayflick limit. Perhaps the Hayflick limit is not as mainstream, but these topics generally come up. Do you also think that those are overstated drivers of aging?

Dr. Jacob Kimmel 00:21:49

It is hard to do the Tyler Cowen “overrated versus underrated” assessment without knowing the common perception of these topics. From a person on the street’s view, they are probably overrated. It is one of the most common questions I get if I am talking to somebody outside of science or technology about what I work on. They ask, “Oh, telomeres. I thought we figured that out.” From that perspective, yes, overrated.

It turns out, to give some examples, that if you try to modulate telomere length in animal models, it is incredibly hard to see phenotypes. You can knock out the enzyme that extends telomeres in mice, and it takes five generations to see a meaningful phenotype in the animal. You can build five bodies worth of cell divisions before you start to see problems, suggesting that the amount of cell division occurring to shorten telomeres in these animals is not that dramatic. Mice still age. You can make arguments about mouse telomere length versus human telomere length and similar concepts, but I think it is a strong negation of the idea that telomeres are all you need to explain aging.

There are other examples where humans have mutations in telomerase machinery, either in the enzyme telomerase itself that extends telomeres, or in the RNA component that serves as the template for the repeats added to the ends of chromosomes. Basically, they do not extend their telomeres the way others would, and they do develop diseases. It is not to say that telomeres do not matter at all, but those diseases are relatively localized. They have a preferential disposition for things like idiopathic pulmonary fibrosis, which seems to be caused by the fact that certain stem cells in the lung, which divide frequently to rebuild the epithelia, constantly encounter the outside environment every time you take a breath. These seem to exhaust their telomeres earlier in life than other stem cells. Again, they do not have the phenotypes you or I would generally associate with aging, outside of that particular lung fibrosis pathology in some cases. So, I think it is another example where if this explanation were truly sufficient, you would expect very different data than what we see. I think it is not sufficient, even if it plays a role.

23:46 NewLimit’s therapeutic programs & the Predictive Biology revolution

Daniel 00:23:48

Going back to epigenetic reprogramming and NewLimit, can you tell us a little about the therapies you are developing and what you think the anti-aging therapies of the future will look like?

Dr. Jacob Kimmel 00:23:58

We’re working on developing medicines based on the notion of epigenetic reprogramming. The basic idea is that we can find groups of transcription factors. These are special genes in the genome, like orchestra conductors. They run around and tell other genes when to turn on or when to turn off by modifying the epigenome. They don’t do much directly themselves. A conductor cannot play an instrument, but tells each musician when to play and when to be silent. Just as Yamanaka found four transcription factor genes that can reprogram an old cell back to a young stem cell, we can find payloads—groups of these genes—that reprogram an old cell back to a younger version of itself while preserving the cell type.

We work today primarily on hepatocytes, T cells, and endothelial cells. These are three cell types where we imagine that delivery for modern RNA medicines is tractable within humans, with existing proofs of concept. This segues into the type of medicine we envision: mRNA medicine, where you can deliver transcripts for these transcription factors to individual cell types within the body. Those transcripts would then make transcription factors within the cell, which would remodel the epigenome and ideally restore youthful function in these aged cells to keep patients healthy longer or treat age-related diseases they have already developed. Ultimately, those medicines will be fairly accessible because the wonderful thing about mRNA technology is its low manufacturing cost. You can get these products down to $3 a dose. That means it’s a technique to build these medicines for everyone, not just a very select group of people.

The core challenge we’re trying to solve is how to discover these optimal groups of transcription factors. It sounds great until you realize you have to figure out what this payload is. The challenge we faced when we started the firm is that the number of combinations of these genes to search through is about 10^16, even with many simplifying assumptions. 10^16 is an incredibly large number that is hard for most people to grasp. If you look it up, the best analogy is about 10,000 Milky Ways’ worth of stars. It doesn’t matter how smart you are; you’ll never be able to do all those experiments.
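To make that 10^16 figure concrete, here is a back-of-the-envelope sketch of the combinatorics. The specific numbers are my own illustrative assumptions, not NewLimit's actual screening parameters: roughly 1,600 annotated human transcription factors and payloads capped at 6 factors.

```python
# Back-of-the-envelope size of the transcription factor payload search space.
# Assumptions (illustrative, not NewLimit's parameters): ~1,600 human
# transcription factors; unordered payloads of 1 to 6 factors.
from math import comb

N_TFS = 1600          # approximate count of annotated human transcription factors
MAX_PAYLOAD = 6       # assumed cap on factors per payload

# Count every distinct unordered combination of 1..6 factors.
search_space = sum(comb(N_TFS, k) for k in range(1, MAX_PAYLOAD + 1))

print(f"{search_space:.2e}")  # on the order of 10^16
```

The sum is dominated by the largest payload size, and under these assumptions it lands at roughly 2 × 10^16, consistent with the order of magnitude quoted above.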

We’ve been constructing a combination of a molecular system to run large screens, and, just as importantly, an artificial intelligence system to analyze the combinations we’ve tested and help us prioritize which groups of these transcription factors to test next to see if they can make an old cell look and act like a young one, based on its gene expression and the functions it performs.
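The screen-then-prioritize loop described above can be sketched as a simple active-learning cycle. Everything here is an invented stand-in — the random "wet-lab" scores, the crude surrogate model, and the batch size are illustrative assumptions, not NewLimit's system:

```python
# Toy active-learning loop: score untested payloads with a surrogate model,
# "screen" the top batch, fold the results back in, and repeat.
import itertools
import random

random.seed(0)
factors = [f"TF{i}" for i in range(12)]               # stand-in factor library
candidates = list(itertools.combinations(factors, 3)) # all 3-factor payloads

tested = {}  # payload -> measured "rejuvenation score"

def run_screen(payloads):
    """Pretend wet-lab screen: returns a random score in [0, 1) per payload."""
    return {p: random.random() for p in payloads}

def surrogate_score(payload, history):
    """Crude surrogate: average measured score of tested payloads sharing
    any factor, plus a little noise to encourage exploration."""
    shared = [s for p, s in history.items() if set(p) & set(payload)]
    prior = sum(shared) / len(shared) if shared else 0.5
    return prior + random.gauss(0, 0.05)

BATCH = 16
for _ in range(3):  # three rounds of screen-and-update
    untested = [p for p in candidates if p not in tested]
    batch = sorted(untested,
                   key=lambda p: surrogate_score(p, tested),
                   reverse=True)[:BATCH]
    tested.update(run_screen(batch))

best = max(tested, key=tested.get)
```

The point of the sketch is the shape of the loop, not the model: each round, the surrogate narrows 220 candidates down to a batch of 16 worth spending experimental budget on, which is how a 10^16-sized space becomes tractable in principle.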

Eric 00:26:25

Jacob, you wrote previously about predictive biology, which you described as a distinct new mode of epistemology at the intersection of molecular biology and machine learning. Clearly, this is one of the central tenets of how you’ve built out NewLimit as a company. Can you describe your framework for predictive biology and how it integrates with the work you do at NewLimit?

Dr. Jacob Kimmel 00:26:46

I think it’s helpful to set it up first with the epistemic frameworks that biology has used previously. The vast majority of modern biology is based on the epistemic framework of molecular biology. People might introduce themselves as immunologists or neuroscientists, but the majority of us are playing off an intellectual toolkit that emerged in the 1930s as a result of Warren Weaver’s work. This work aimed to create a discipline that reduces biology from a series of phenomena to a series of mechanisms.

The meta-hypothesis of all molecular biology is that a relatively small number of molecules within a cell, tissue, or organism controls complex biological processes. If you see a complex phenomenon occurring, you expect to find a gene or a couple of genes that explain it. By “explains,” I mean you’ll find molecules that are both necessary for that phenomenon to occur and sufficient to reconstitute it by synthesis when introduced into an organism lacking those particular molecules.

That epistemology took us far. It allowed us to discover the basis of heredity, with DNA as the core molecule. It helped us understand how DNA transmits information into proteins within a cell, discovering mRNA as the core molecule between DNA and proteins. This helped us figure out how mRNA is decoded from a series of codons to a series of amino acids. That epistemology has been incredibly productive.

It doesn’t require a rocket scientist to figure out that there are biological processes that involve more than one or two genes interacting together. These are more complex, emergent phenomena. Predictive biology takes a top-down view on these phenomena, building predictive models of what will occur so that we can design interventions to achieve a target goal. By contrast, molecular biology works bottom-up to write down a set of equations or hypotheses. It then stacks enough of these together to get a sufficiently predictive model of the system, where each piece of that eventual model can be explained in human language.

Predictive biology favors a predictive model rather than an explainable interpretation. The argument is that if your model is sufficiently predictive of the future – what would happen in a given set of experimental conditions – then even if a human cannot eyeball the weights of a parameterized model and understand it, you are still able to make useful predictions that allow you to engineer the system effectively. If I know what a given experimental condition will result in, I can run experiments in silico and, if nothing else, reduce the search space that I test in the real world.

This segues into how we are building things at NewLimit. The combinatorial space of transcription factors is far too large to search exhaustively. We also lack biological intuitions for it. Anyone who tells you they can walk up to a whiteboard or notebook and write down the names of a few molecules likely to make an old cell younger is fooling themselves, if not just fooling you. Very few people have this ability; otherwise, they would have done it already. That is the way molecular biology would approach the problem: state a hypothesis about a few molecules likely to have the desired effect, then rip them out of a system or add them in and see what happens.

By contrast, we have approached this problem by acknowledging that I will be poor at generating those hypotheses. Rather than using my intuition to build this model from the bottom up – outlining which transcription factor binds a particular locus and what it does – I will measure the resulting phenotypic effects. Did I make this old cell look like a young one? Did I make this old cell perform its youthful function? Then, I will learn a model that can predict the eventual results from the group of transcription factors I perturbed.

Once we have constructed such a system, we can use it to rank order the hypotheses we interrogate in the real world. We do not necessarily need to introspect the model and explain why every weight has its value. We only need to believe that the model is useful enough to narrow the hypothesis space.

This gives a sense of how we are thinking about these intractably large hypothesis spaces. It equally applies to how I view predictive biology being used in other firms that design proteins, RNA, or DNA molecules, where search spaces are likewise intractably large. Rather than the molecular biologist’s perspective of starting with a best idea and then mutating it, you can search a much sparser sampling of a very broad space. Ideally, you then learn a model that helps you optimize toward the global rather than a local optimum.
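The model-in-the-loop search described above can be sketched minimally: sample a sparse set of candidates from the huge space, rank them with a surrogate predictor, and test only the top few in the lab. All names and the scoring function here are hypothetical placeholders, not NewLimit’s system.

```python
import random

random.seed(0)

def surrogate_score(candidate):
    """Hypothetical stand-in for a trained predictive model: returns a
    predicted 'youthfulness' score for a TF combination."""
    return sum(len(tf) for tf in candidate) + random.random()

# Sparse sample of a combinatorially large space: 10,000 random triples
# drawn from 200 hypothetical transcription factors.
factors = [f"TF{i}" for i in range(200)]
candidates = {tuple(sorted(random.sample(factors, 3))) for _ in range(10_000)}

# The model narrows the hypothesis space: only the top-ranked
# combinations go on to a real wet-lab experiment.
to_test = sorted(candidates, key=surrogate_score, reverse=True)[:10]
print(len(to_test))  # 10 experiments instead of 10,000
```

The point of the sketch is the shape of the loop, not the scoring function: a lossy but useful model turns a blind exhaustive search into a ranked shortlist.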

Eric 00:31:21

There are three ways to break down building a predictive biology framework around understanding life sciences and human aging: the kind of data you gather, the amount of data you collect, and the way you analyze and use that data. Among those three parts of the framework, where do you believe we have the most work to do and the most benefit to gain?

Dr. Jacob Kimmel 00:31:47

The amount of data, followed by its kind, then the analytical framework, is largely what matters. The ecosystem’s attention seems somewhat inverted from that prioritization. There is significant interest and excitement around algorithmic innovation, for good reason. With just a few clever people, meaningful progress can be made in designing biomolecules or engineering a given cell state.

However, we are limited by the availability of sufficient data to build the most performant versions of these models. In other areas of artificial intelligence, such as natural language and computer vision, many algorithms have comparable performance. They are not entirely equal, but their performance is often very similar. Thus, architectural or algorithmic decisions are often less important than the corpus used for training, provided there are sufficiently large numbers of free parameters and adequate compute going into the system.

In biology, contrasting with natural language and computer vision, we have many orders of magnitude fewer useful data points for most problems we care about. The areas where we have seen significant progress in applying these new artificial intelligence systems are those with large historical data corpora. Examples include protein language modeling, leveraging resources like the PDB and protein sequences collected from evolutionary history through broad metagenomic sequencing efforts.

Therefore, if we want to ask questions at higher layers of abstraction in the stack – less about how to design a molecule that does X, and more about which molecules should even be designed, which groups of molecules should be modulated, how to get them into these cells, or the likelihood of success in patients X, Y, and Z – these questions will be data-limited.

33:30 Scaling laws in biology and the early signs of a race towards biology’s own ChatGPT moment

Eric 00:33:32

What is your mode of data generation and data analytics at NewLimit, and why did you settle on this particular framework for those two areas?

Dr. Jacob Kimmel 00:33:41

For data generation, we largely base our experiments on single-cell RNA sequencing and pooled functional genomic screens. Briefly, within our ML framework, we generate datasets observing cell states based on all genes expressed at a given time, as measured through RNA sequencing. This is recorded prior to any intervention or reprogramming, and then again after cells have been reprogrammed with a specific set of molecules. This can be imagined as a matrix of cell measurements before and after, with labels indicating the perturbing molecules.

We also know our target destination state for cells: we want them to appear young. It is fairly easy to obtain profiles of young cells. We collect cells from donors whose birthdays indicate youth and generate profiles. This means we know not only the effects of our perturbations, but also the direction we are trying to drive these cells within the state space. We can predict their effect on traversing that particular trajectory.
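The data layout described here can be pictured as a pair of expression matrices plus payload labels and a young reference profile. The toy sketch below is illustrative only; the shapes, the random data, and the distance score are assumptions, not NewLimit’s pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_genes, n_payloads = 600, 100, 8  # toy sizes

# Toy stand-ins for the described layout: expression before and after
# reprogramming, the payload label for each cell, and a reference
# profile averaged from young donors.
before = rng.normal(size=(n_cells, n_genes))
after = before + rng.normal(scale=0.5, size=(n_cells, n_genes))
payload = rng.integers(0, n_payloads, size=n_cells)
young = rng.normal(size=n_genes)

def mean_dist_to_young(x):
    return float(np.linalg.norm(x - young, axis=1).mean())

# Score each payload by how far its cells moved toward the young state
# (positive = closer to the young reference after reprogramming).
scores = {
    p: mean_dist_to_young(before[payload == p]) - mean_dist_to_young(after[payload == p])
    for p in range(n_payloads)
}
```

Knowing the destination state is what turns this from a descriptive dataset into a supervised problem: every perturbation can be scored against the young profile.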

We model this by building models that ingest representations of the transcription factors we are activating. These representations are generated using existing molecular foundation models, such as protein language models and DNA language models. These work well because, as mentioned earlier, these are areas with vast historical data corpora.

Even at the beginning of training our models, the perturbation representations are intelligent. They contain significant knowledge about which molecules are similar, which might have very different effects, and which might be relatively degenerate or interchangeable. We then train our models to predict from these TF representations the effect on a given cell phenotype, based on the genes it is expressing at a given time. From there, we can even predict a value judgment: how old does this cell appear, based on the gene expression effects likely to be invoked?

We train models like this on our data corpus. To support my earlier claim about being data-limited, we found that model performance scales very readily and log-linearly with the amount of data. This exhibits the same scaling law phenomena observed in natural language or computer vision. However, we are at a very early point on the curve. This means we have many orders of magnitude to traverse before we saturate the practical or tractable regime of data scaling.
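“Scales log-linearly” has a concrete meaning: performance is roughly linear in the logarithm of dataset size. The sketch below uses fabricated performance numbers purely to illustrate the fit.

```python
import numpy as np

# Fabricated example data: model performance vs. training-set size.
n_examples = np.array([1e3, 1e4, 1e5, 1e6])
performance = np.array([0.42, 0.51, 0.60, 0.69])

# Log-linear scaling: fit performance against log10(dataset size).
slope, intercept = np.polyfit(np.log10(n_examples), performance, deg=1)
print(round(slope, 3))  # each 10x more data buys ~0.09 performance here
```

Being “early on the curve” means the slope has not yet flattened, so each additional order of magnitude of data still buys a meaningful gain.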

This provides some sense of our problem framing. While other types of data could be collected, we chose this framework because we believe there is significant mutual information in biology. Single-cell genomics experiments allow us to perform large, pooled experiments simultaneously. In a single dish, we can test thousands of groups of transcription factors and observe their effects on gene expression.

Because there is significant mutual information in biology, gene expression effects convey most of the necessary information about cellular events. It is not everything; some electrophysiologists would disagree. However, it is sufficient to determine if we are moving in the right direction. These technologies have advanced, enabling us to operate at a tremendous scale today that was not possible five years ago.

Daniel 00:36:46

Given the similar scaling laws in building these models, like those seen with LLMs, and that even at the current scale, you are already discovering potentially extremely valuable therapies, does this imply there should be a massive race among different biotechs to build up these datasets and models? Does this mean that in five years, we should expect incredible biology models, comparable to what we have with LLMs?

Dr. Jacob Kimmel 00:37:14

You shouldn’t be giving up the game this easily. I do believe that there will be a race to generate these data sets. I don’t think many people have recognized this or started on the race in a serious way yet. I think the number of people who are talking about building models like this is much larger than the number of people who are actually seriously trying to generate the right kind of data.

There are many nuances here as well. Even when you talk about generating data that maps perturbations in cells to their responses, what cells are you talking about? Are they real human cells with the right number of chromosomes, or are they cell lines with an unknown number of chromosomes, and potentially not even human? I think the number of people who have recognized the statement you just made, that there is the beginning of a data race here, are relatively small. But I anticipate that will change over five to 10 years. I anticipate you will also see many different groups trying to build models like the ones we have. They are sometimes called virtual cells. I think of them more like perturbation prediction models or genetic perturbation prediction models in different domains.

We have tried to make this problem tractable in the early innings of this data game by focusing on very specific areas of that distribution. This includes particular cell types with transcription factors as the perturbation, really trying to mimic mRNA therapies. All of those conditionals make it easier to learn your distribution because they simplify the problem. I imagine other groups will have their own flavor of the conditionals they apply and the sub-problems they try to excel at early on. Over time, you will see these models becoming more and more general as groups build larger and larger data corpora and start to expand from their early beachhead into broader portions of that distribution territory.

This is similar to how we saw early on in computer vision or natural language, these models were countries unto themselves. Computer vision models had no understanding of natural language, and natural language models had no understanding of computer vision. Today, the overwhelmingly dominant paradigm is to build these in a multimodal fashion. I think we will start to see the same thing occurring across these different problem domains within biology.

Daniel 00:39:11

In the attempt to scale these models, what parts of the stack do you expect to see the most investment? For example, do you think lab automation has significant opportunity to help drive scaling here?

Dr. Jacob Kimmel 00:39:23

The short answer is yes, there is an opportunity. However, focusing entirely on automation for automation’s sake is not the ideal way to frame the problem. I spend a lot more time thinking about something like throughput per human or experiments per human hour than I do about how many of these operations were done by a robot.

The reason I make that distinction is that one of the ways we have gotten an incredible amount of leverage is by focusing on molecular parallelization as a way of increasing the throughput per human, often exceeding by an order of magnitude what is typically achieved with robotics. The way we run our experiments today is that in a single dish, we will deliver groups of hundreds of transcription factors, testing thousands of combinations simultaneously. Each one of those transcription factors might carry a barcode. Then, by measuring which of the barcodes occur in which cells, we can determine which combination they received. So, in a particular experiment with one scientist at the bench, we can test thousands of things at the same time.

This is by moving the logic of our experiment from physical space, where everything is separated into individual test tubes, to base pair space, where we can separate it in silico using sequencing and demultiplexing. We do not require that much lab automation because the way we solved the problem was by reducing the number of liquid handling steps that occur. The number of times a human is moving liquid in that experiment is fairly minor.

However, lab automation will play a role; you can certainly combine these two things. But I would focus more on determining the mechanisms by which we can increase the number of experiments that one human can complete in a given unit of time. I think a larger component of the story over the coming years will come from molecular parallelization techniques that leverage the ability to do much of this work in parallel, using DNA sequencing as a readout, rather than the development of robotic factories performing linear operations in a classic arrayed fashion.
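The demultiplexing step that moves the experiment into “base pair space” is conceptually simple: sequencing reads pair a cell’s barcode with the barcodes of the TFs it received, and grouping reads by cell recovers each cell’s combination. A toy sketch with hypothetical barcodes:

```python
from collections import defaultdict

# Hypothetical sequencing reads: (cell barcode, TF barcode) pairs.
reads = [
    ("cell_A", "TF17"), ("cell_A", "TF42"), ("cell_B", "TF03"),
    ("cell_A", "TF99"), ("cell_B", "TF17"), ("cell_C", "TF42"),
]

# Demultiplex in silico: group TF barcodes by the cell they appeared in.
combo_per_cell = defaultdict(set)
for cell, tf in reads:
    combo_per_cell[cell].add(tf)

print(sorted(combo_per_cell["cell_A"]))  # ['TF17', 'TF42', 'TF99']
```

Because the grouping happens computationally after sequencing, one dish can carry thousands of conditions without a robot touching thousands of tubes.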

41:12 Bottlenecks to inflecting EROOM’s law

Eric 00:41:15

When you think through the primary bottleneck to reaching the full potential of molecular parallelization, I believe a primary bottleneck is the underlying fidelity of the model system used for experiments. How well do the experiments you are running in vitro, in a mouse model, or other animal models, actually map onto human biology and ultimately clinical results? What is your view on the current state of model system fidelity? What is required for us to reach a point where we experience a true exponential takeoff moment for biology?

Dr. Jacob Kimmel 00:41:51

I will separate those out in terms of model system fidelity. The disappointing answer is that there isn’t a ton of rigorous measurement for most of these systems. Most of the disease models we use today, at the final stage of the drug discovery funnel, involve taking a given molecule or set of molecules, putting them into an animal, and asking whether or not they work. These models have not been validated quantitatively.

These models are effectively molecular versions of a binary classifier. They try to predict what will occur in Phase Two: will it succeed or will it fail when tested in humans? While in principle the data should exist out there in the world, I don’t know of many circumstances where it’s been compiled in one place and directly measured.

Groups like ours are left to ensure we never rely on any one of these model systems, because all of them have failure modes. They are largely derived heuristically. At some phenomenological level, they resemble the disease or aging pathology that occurs in a human being. However, an animal that might weigh 100 times less is probably not a perfect model. Therefore, you want to diversify across multiple model systems. This is something we consider very early on. You also want to ensure you’re incorporating human and primate models into your work.

Something we felt very strongly about from the earliest days is that all of our discovery work should start in primary human cells from real human patients. These are not cell lines with an unknown number of chromosomes, nor are they mouse cells with the wrong genome. These are real human cells from real people who aged in the real world. We can then ask whether they are made to look and function like younger versions of themselves. We have not put a bunch of drugs into the clinic to measure how predictive those are. We think there’s good first principles rationale to believe that by incorporating human biology as early as possible, you will increase the predictive validity of that preclinical modeling system.

Your question is what would lead to an exponential takeoff in biology? It depends on which layer you imagine this at. At the layer of these preclinical models and building the virtual cell models we are discussing, you might imagine seeing that phenomenon in a shorter period of time than you think. The reason I say that is as these models improve, you get better at choosing which data points to collect.

There’s a general principle known as active learning. Specific methods include Bayesian optimization, where you can use even a lossy initial version of your model to predict which data points will improve the model most readily. Even though the log-linear curve projects diminishing returns from each marginal data point, you can counteract that: if you use your model as part of the data collection process, rather than running a blind search, you might be able to find the highest-entropy points early on. Therefore, without too many leaps of the imagination, these models could start to reach rates of improvement that exceed what we’re seeing today in just a few years’ time.

We’ve already seen this as we gather larger and larger datasets. For example, the most informative points in our datasets are the things that actually work, the payloads that make old cells look young. We find more of those today, because we have these models, than we would otherwise.
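The active-learning idea can be sketched with a toy acquisition rule: score unlabeled candidates by the disagreement of a cheap model ensemble and collect data where the models disagree most. This is a simplified stand-in for the Bayesian optimization methods mentioned, not any specific production system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: 1,000 candidate experiments, each a 5-dim feature vector,
# scored by an ensemble of 10 throwaway linear models.
candidates = rng.normal(size=(1000, 5))
ensemble = rng.normal(size=(10, 5))

predictions = candidates @ ensemble.T        # shape (1000, 10)
disagreement = predictions.std(axis=1)       # per-candidate uncertainty

# Acquire the 20 points the ensemble is least sure about; these are the
# "highest-entropy" experiments to run next.
next_batch = np.argsort(disagreement)[-20:]
```

The design choice here is the acquisition function: disagreement-based sampling is the simplest option, while Bayesian optimization would also weigh the predicted value of each candidate, not just the uncertainty.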

Regarding therapeutics, what would be required for exponential takeoff? I don’t know whether we’ll see exponential takeoff in our time. I will redefine the goal to ask what it would take to bend the curve of Eroom’s Law. Eroom’s Law is the observation that the inflation-adjusted cost of bringing a new medicine to market has risen exponentially over time; it is Moore’s Law spelled backwards.

The explanation of why that’s occurring boils down to two main factors. First, Baumol’s cost disease, which affects everything in the economy and is not specific to this problem. Second, a failure to truly understand what we’re trying to drug. We don’t know how to pick drug targets very well.

We’ve mined the low-hanging fruit on the tree. If you look at the history of drug discovery, the first things we drugged were the most obvious ones. The people working at those times in history correctly identified which problems were most tractable. Now we are climbing higher up the tree, and we need to identify more effectively which molecules will actually work when we put them in humans.

Most things we put in the clinic fail. If you can increase the success rate even without scaling the number of clinical trials, this is a way to get multiple-fold more molecules per dollar that you are investing. Models like this could be impactful here. If you are able to search through target space much more efficiently, for targets that previously couldn’t even be tested, such as combinations of genes or complex cell states, rather than just which gene to turn off with an antibody at a given time, you can imagine starting to bend the curve there.

This is a bold prediction. Nothing has worked to bend the curve of Eroom’s Law thus far. But I feel optimistic that it could be different this time. We are working with a more powerful technological innovation than we have in the past.
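The arithmetic behind “increase the success rate without scaling the number of trials” is worth making explicit. The figures below are hypothetical round numbers chosen purely for illustration.

```python
# Hypothetical round numbers, for illustration only.
cost_per_program = 50e6      # dollars spent per clinical program
success_rate = 0.10          # ~1 in 10 programs yields a medicine

cost_per_medicine = cost_per_program / success_rate
print(f"${cost_per_medicine:,.0f}")   # $500,000,000 per approved medicine

# Doubling the hit rate of target selection halves that figure,
# without running a single additional trial.
print(f"${cost_per_program / (2 * success_rate):,.0f}")  # $250,000,000
```

Because cost per approved medicine divides by the success rate, better target selection acts as a direct multiplier on medicines per dollar, which is exactly the lever Eroom’s Law has resisted so far.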

Daniel 00:46:37

You mentioned types of therapies that allow you to hit targets you couldn’t reach before. Here’s my question: do you see drug delivery as a major bottleneck? Gene therapies are really tough to deliver anywhere other than the liver and maybe the eye. So even if these models improve a lot and we can identify new targets, will we run into a wall on delivery?

46:45 Drug delivery

Dr. Jacob Kimmel 00:47:05

We will eventually saturate the delivery routes available today. How soon that happens, I do not know. I believe we are still further ahead on delivery technology than we are on designing complex therapeutic payloads that modulate more than one gene at a time. We are currently trying to make transcription factors work, but there is a much broader space of all possible gene-by-gene interactions, up to 100 genes at a time. That might sound a little foolish, but a transcription factor regulates hundreds of genes simultaneously. Why couldn’t we build a therapeutic that complicated?

The number of hundred-plus gene interventions one might think about is astronomically large. I would have to compute what it would be, and it would likely cause an overflow error in any software because the number would be too big. I think we are still in the early innings for payload design. Delivery is currently winning the race by that capacity; we can deliver to more cell types than we have effective medicines for.
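For the curious, the count is computable with arbitrary-precision integers, and it is indeed astronomical. The inputs below are illustrative assumptions: roughly 20,000 protein-coding genes, with interventions touching 100 of them.

```python
from math import comb

# Illustrative: how many ways to choose 100 genes out of ~20,000?
n_interventions = comb(20_000, 100)
print(len(str(n_interventions)))  # the count runs to roughly 270 digits
```

For comparison, the number of atoms in the observable universe is usually estimated at around 10^80, so this space is unsearchable by brute force in any physical sense.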

At some point, I expect those lines to cross, and then delivery will become limiting. I am very optimistic that part of the reason we have not made more progress on delivery is that we have not had enough payloads to justify that innovation today. It is very hard in our current biotech ecosystem to stack multiple innovations simultaneously in a given therapeutic. You need to have high confidence that at least a few parts of your molecule will work. If you take risks everywhere, the probability of success is very low, and it is difficult to capitalize over time.

Traditionally, if people pursue novel delivery, they need high confidence in the payload. As we improve at designing these payloads, I do not think we will ever reach monogenic disease-level confidence for some novel combinatorial payload. However, as that confidence increases and we see good examples working in the liver, in T-cells, in the endothelium, in the eye, or in the CNS via intrathecal delivery, we will then gain the motivation and proof points necessary to develop delivery for other tissues where there might not have been a clear proof-of-concept example with Mendelian disease to motivate earlier delivery development.

Daniel 00:49:15

How far into the future do you think that inflection point is? Should I start a drug delivery company now, or should I wait a few years?

Dr. Jacob Kimmel 00:49:22

It depends on your underlying drug delivery technology. There is a case that you could start a drug delivery company today, but people need to be very careful about the business model. A few companies have successfully built a business out of delivery itself, applied across payloads; Dyno, with AAV capsids, is a good example, and I think the team there is excellent.

However, the majority of firms founded on the model of “I am just going to work on delivery technology” without a clear path to therapeutics have unfortunately not succeeded. There is a joke in biotech circles: you either live long enough to become a therapeutics company, or you die trying to do something else. I think that applies to most people. If you were to start a delivery company today, you would want a clear story about the payload you will deliver and why it is at least relatively low-risk, adjusted for the potential impact.

I emphasize impact because many people, when told to minimize risk on their payload, approach it myopically. They focus on the bare minimum risk for the payload, even if it means the ultimate impact of the resulting medicine is minor because it benefits only a few people. You need to think about that risk in a more calculated way. Expected value has two parts: how much value will you generate if your therapy works and benefits many people significantly? If the potential value is high, you can justify more risk on the probability of success, which is the second half of the expected value equation.

If you were to start a delivery company, you would need a clear, linear story there. Regarding whether you should seek your seed round now, I would need to know exactly what you are trying to do. It is not a crazy idea. I could not categorically say that anyone working on delivery should wait five years for payloads to catch up.
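The expected-value framing Jacob uses can be written down directly. The numbers below are invented to illustrate the point that a high-impact, riskier payload can dominate a safe but narrow one.

```python
def expected_value(p_success: float, value_if_success: float) -> float:
    """EV = probability of success x value generated if it works."""
    return p_success * value_if_success

# Invented numbers: a risky therapy helping millions vs. a near-certain
# therapy helping a handful of patients (value in arbitrary units).
broad_but_risky = expected_value(0.05, 100.0)   # 5.0
narrow_but_safe = expected_value(0.30, 10.0)    # 3.0
print(broad_but_risky > narrow_but_safe)        # True
```

Minimizing payload risk myopically optimizes only the first factor; the framing above says both factors belong in the decision.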

51:03 Aging’s common mechanisms

Eric 00:51:06

I wanted to revisit the concept of aging being a diverse process. There are very few indications, from my perspective, that unified principles of aging are applicable even within a single individual, across different cells, tissue types, and organ types.

Considering what that means for the business model of building a company in aging, is aging so fundamentally different between individuals that there are no easy and tractable means of developing universal therapies that can go through the FDA and become commercial solutions for anti-aging? What is your view on individual-to-individual variance in the process of aging?

Dr. Jacob Kimmel 00:51:46

I think there is variation, but overall, many phenomena that occur with aging are extremely penetrant within the population, meaning they affect the vast majority of people. Simple examples are things you see in everyday life, like sarcopenia, where muscles lose lean mass. Regardless of how much resistance training you do, this affects nearly everyone with age. It is almost difficult for me to imagine outliers.

In your kidneys, almost everyone loses function over time, as measured by the glomerular filtration rate. This is so dramatic that the definition of kidney disease is simply drawing a line on a glomerular filtration rate chart. If you fall below the line, you have kidney disease. They even have to move that line as people get older, otherwise everyone would be defined as having kidney disease.

These functions are incredibly penetrant and occur in every person. While there are specific types of aging pathology that might manifest in relatively small groups or specific populations, there are widespread problems that are incredibly penetrant, which represent the real opportunities. If you can fix the challenges that arise in nearly everyone, in all of us one day, and in all of our families, then there are opportunities to build some of the most impactful products ever. We are missing the forest for the trees if we focus only on what is different across individuals, rather than the overwhelming phenomena that are the same.

Eric 00:53:07

Going back to a 2019 paper that you published, you reported that some features of aging were common across many cell types, while other features of aging had cell and tissue-specific trajectories. Some of the shared features of aging you identified in that paper, using gene ontology enrichment analysis, included downregulation in protein localization and endoplasmic reticulum translocation, as well as upregulation in things like antigen processing and inflammatory pathways.

Would you describe these as potential universal mechanisms of aging across multiple cell types, tissue types, and individuals, or is something else going on here?

Dr. Jacob Kimmel 00:53:44

It depends on what you mean by mechanism. I will describe what we found there, which has been repeated by many others. These are common findings: in many cell types, at least in model organisms, the baseline level of inflammation, measured in various ways, increases. There are also changes in the proteostatic machinery.

Saying proteostasis is impaired might be a stretch. There are not many good examples of this being true across numerous cell types. Certainly, the expression levels of components in the machinery are altered with age, and that seems fairly penetrant across these organisms. There is something to be said about the types of pathologies that occur in many cell types.

I spend little time thinking about those mechanisms because they explain little of the variation with age in any given cell type. In that same paper, if not a later one, we performed a simple analysis. We asked how much of the age-related change in transcription within a given cell could be explained by just a few variables: the age of the cell’s donor, the cell type, and the cell type-age interaction.

Age alone explains little variation. I forget the exact numbers, but it is less than 10%. In contrast, the cell type and cell type-age interactions – meaning the factors unique to a cell’s identity as it ages – overwhelmingly explain the pathologies that develop.
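The decomposition described here can be mimicked on synthetic data: fit one model with age alone and one that adds cell type and the cell-type-by-age interaction, then compare variance explained. Everything below is simulated; the effect sizes are invented to reproduce the qualitative pattern, not the paper’s actual numbers.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Simulated data: expression driven mostly by cell-type-specific aging
# (the interaction term), only weakly by age alone.
age = rng.uniform(1, 30, n)                    # donor age, arbitrary units
cell_type = rng.integers(0, 4, n)              # 4 toy cell types
type_effect = np.array([0.0, 2.0, -1.0, 3.0])[cell_type]
type_age_slope = np.array([0.02, 0.30, -0.25, 0.15])[cell_type]
expr = 0.01 * age + type_effect + type_age_slope * age + rng.normal(0, 0.5, n)

def r_squared(features, y):
    """Fraction of variance explained by an ordinary least-squares fit."""
    X = np.column_stack([np.ones(len(y)), features])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

one_hot = np.eye(4)[cell_type]
r2_age_only = r_squared(age[:, None], expr)
r2_full = r_squared(
    np.column_stack([age, one_hot, one_hot * age[:, None]]), expr
)
# r2_full is far larger than r2_age_only, mirroring the finding that
# cell type and the cell-type-age interaction dominate age alone.
```

On this simulation, age alone explains only a few percent of the variance while the full model explains nearly all of it, which is the qualitative shape of the result described in the conversation.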

These data suggest a model where a few things go wrong everywhere, but this is a relatively small part of the story. Most problems that arise in cells and tissues are unique to those cells and tissues. We will need to address these problems in a relatively unique way as well.

This also aligns with your earlier questions about programmed versus emergent views of aging pathology. If aging were programmed, with the same genome everywhere, you might expect these common mechanisms to explain a larger fraction of the variation. However, when we examine this, there are not many examples of such phenomena occurring universally that could be addressed by a single, permanent fix.

Eric 00:55:53

Considering the viewpoint that much of the evidence around aging points to progressive wear and tear, damage accumulation, and dysfunction of the underlying biology of the original system, rather than programmed obsolescence: Is the epigenetic layer – the epigenome – a universal regulator of cell and tissue identity? If so, could systematically reprogramming the epigenome back to a more youthful state effectively undo the damage and wear and tear of progressive aging, or is that stretching it too far?

Dr. Jacob Kimmel 00:56:28

Universal is likely too far; I would not make that strong a claim. But for the vast majority of cell types in the body, we imagine some benefit from epigenetic reprogramming. It is easier to list the exceptions than the places where this would likely work.

In the majority of cell types, whether they turn over during life or are nucleated and long-lived, you are likely to see benefit from epigenetic reprogramming. In all cell types where changes in the epigenome with age have been measured, they have been quite pronounced. This suggests a general approach to restoring youthful function.

Data for restoring function across many cell types is scant. However, we are developing this data internally, and in the cell types we have examined, we are able to find payloads that have this effect. We are optimistic in the long run.

In terms of the exceptions, there are a couple of places where epigenetic reprogramming may not be sufficient on its own, even if it helps. These might include neurons of the brain that never divide, where nuclear pore proteins don’t turn over or get replaced. Most proteins do turn over, which is why we focus less on post-translational modifications of individual proteins as mechanisms of aging. But those nuclear pore proteins live with you for your entire life. They might get damaged in a way that cannot be changed simply by remodeling the epigenome.

Accumulated DNA mutations are another example. If a person is at the outer limits of their plausible lifespan, having accumulated many mutations in critical cells and becoming prone to cancer, simply changing the epigenome might not have as direct an effect within those cells. It could still prevent cancer. There is an interesting side argument as to why that would be the case.

It is important that we go in eyes wide open, recognizing that biology is not so simple as to say that methylation marks on DNA and chemical modifications of histones explain everything. They explain enough to make impactful products, but probably not the entire story of what goes wrong with age.

58:22 Advice on studying biology

Daniel 00:58:26

I would like to step back and ask if you have any general advice for young scientists who want to understand and contribute to this space. Listening to you speak on other podcasts, your incisiveness and clarity about biology are unique. I am curious if you have any particular tips on how you learned to think about these things, or advice for how others should approach this space.

Dr. Jacob Kimmel 00:58:55

Appreciate the incredibly high praise. I do not know if I have great advice to give most people, but I can at least tell folks how I learned, and perhaps that is useful to somebody.

Being very incisive and skeptical with your own education is important when learning a concept. It is easy, especially with our current biology pedagogy, to take things at face value. A textbook might state that process X works because Y does Z. Ultimately, if you review textbooks from 50 or 60 years ago, you will find that some of those explanations were perhaps lossy around the edges.

Focusing on why you believe what you believe, and on whether you find those arguments compelling, pays dividends. So does spending more time than most people think is reasonable simply trying to convince yourself that a given argument is true. The skill of interrogating an argument and evaluating its merits will pay off in a career in science or technology, regardless of where you go.

That is probably one of the core skills you are trained in during a PhD. You also learn to run experiments, and depending on your lab, you might learn to write code; however, that technical knowledge decays in value over time. The durable skills are learning to convince yourself of arguments, to discriminate between compelling and poor explanations, and to ask the effective questions that unlock success.

Another point is that often the simplest explanations are the best. If you find yourself twisting into complex, ornate arguments to gain conviction around a principle, it is not always wrong. However, Occam’s razor suggests it is less likely to be correct than a more parsimonious explanation, which is easier to communicate.

A good test, not only for yourself but also when communicating with others, is to ask how simply you can explain a topic. If you are unable to explain something simply, it is usually because there are gaps in your understanding, not because the person across from you is incapable of receiving the information, or because the topic is incredibly complicated with no way to reduce it.

Often in scientific training, we are given the opposite instruction: to lean into erudition, use upper-register diction, employ technical terminology, and build barriers around knowledge that others may lack.

There is also a judgmental aspect: if you ask a question in a meeting that is perceived as foolish or stupid, because you do not know something others might already know, you can be belittled for it. That is counter to scientific progress, and also counter to your success in communicating science effectively.

Those are my two highest-level pieces of advice. I am happy to dive deeper on anything if you think I have knowledge that might be useful.

1:01:44 The topics Jacob finds most confusing

Daniel 01:01:48

What is a topic in biology that you cannot explain well or that you find most confusing?

Dr. Jacob Kimmel 01:01:54

That is a good question. There are many topics in biology.

I find metabolism, actual carbon metabolism, to be very difficult to reason about. One way to frame this is that there is a graph where metabolites are nodes and edges are parameterized as enzymes, representing the transitions between them.

That particular graph is dense and conditioned on many layers of biology. Getting to a place where I can cogently explain why a metabolite turns into another in one setting but not another is very challenging for me.

More talented people who know more about that field can probably do that well. However, the density of that graph eludes very narrow, simplistic explanation for me. Perhaps that is because we lack a comprehensive conceptual understanding of the field, or it is a limitation in my own brain.
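The "metabolites as nodes, enzymes as edges" framing can be sketched in a few lines of code. The reactions, enzymes, and context labels below are illustrative toy entries, not a real metabolic model; the point is only that which transitions are active depends on context, which is part of what makes the graph hard to reason about.

```python
# Toy metabolic graph: each edge carries the enzyme that catalyzes the
# transition, plus the (illustrative) contexts in which it is active.
reactions = {
    ("glucose", "glucose-6-phosphate"): ("hexokinase", {"fed", "fasted"}),
    ("glucose-6-phosphate", "glucose"): ("glucose-6-phosphatase", {"fasted"}),
    ("glucose-6-phosphate", "pyruvate"): ("glycolysis (lumped)", {"fed"}),
}

def reachable(start, context):
    """Metabolites reachable from `start` via edges active in `context`."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for (src, dst), (_enzyme, contexts) in reactions.items():
            if src == node and context in contexts and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

print(sorted(reachable("glucose", "fed")))     # glycolysis path is open
print(sorted(reachable("glucose", "fasted")))  # pyruvate is unreachable
```

Even this three-edge toy shows the conditioning problem: the same starting metabolite reaches different parts of the graph depending on the physiological context, and a real metabolic network has thousands of such context-dependent edges.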

That is at least one example. There are plenty of others.

I find certain topics in human genetics challenging. There are techniques like Mendelian randomization that rely upon instrumental variables. This is a way of teasing out causality from genetics and measuring dose responses of different alleles.

Even when I write out all the math and understand it in the moment, it is one of those topics where the idea does not fit well into my hippocampus. I have to continually relearn it every time I encounter it in papers.
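For readers who want the gist of Mendelian randomization, here is a minimal simulated sketch. All effect sizes and the confounder are invented for illustration: a genetic variant serves as the instrument, and the Wald ratio recovers a causal effect that a naive regression misses because of unobserved confounding.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: variant G influences exposure X; a hidden
# confounder U influences both X and outcome Y; the true causal
# effect of X on Y is 0.5.
g = rng.binomial(2, 0.3, n)               # genotype (allele count 0/1/2)
u = rng.normal(size=n)                    # unobserved confounder
x = 0.4 * g + u + rng.normal(size=n)      # exposure
y = 0.5 * x + u + rng.normal(size=n)      # outcome

# Naive regression of Y on X is biased upward by the confounder U.
naive = np.cov(x, y)[0, 1] / np.var(x)

# Wald ratio: (effect of G on Y) / (effect of G on X). Because G is
# independent of U, this recovers the causal effect, approximately 0.5.
beta_gy = np.cov(g, y)[0, 1] / np.var(g)
beta_gx = np.cov(g, x)[0, 1] / np.var(g)
iv = beta_gy / beta_gx

print(round(naive, 2), round(iv, 2))
```

The design choice that makes this work is the instrument: the genotype is randomized at conception, so it is uncorrelated with the confounder, and dividing the two regression slopes cancels everything except the causal path through the exposure.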

There are various problems like this, but those are a couple where I would probably be the worst advocate for those concepts.

1:03:17 An idea in biology that most get wrong

Eric 01:03:21

What is a popular theory in life sciences or biology that you personally believe is incorrect or should be updated?

Dr. Jacob Kimmel 01:03:30

There are many of those. How controversial do you want me to be?

Eric 01:03:35

We can be as controversial as you would like, Jacob.

Daniel 01:03:37

We are going for the TikTok shorts.

Dr. Jacob Kimmel 01:03:39

Going for the TikTok shorts.

We as a community should critically evaluate where and why we ask certain questions. The most general, least offensive way to frame this is to say that I believe much of biology and our methods of inquiry are very path-dependent.

We have a question about how a system works, and the way we ask it is largely dependent on the tools that came before us. How were these tools set up, often 100 years ago, to allow us to ask that question, rather than starting from first principles and asking: if this question were all I cared about, what is the right system, organism, intervention, and readout?

Often, we take the parameters of that experiment and fix most of them simply because we happen to have the tools available.

I find that we could be more judicious as a community in asking: are we performing the experiment this way, or asking this question, because it is of the highest importance among all things we could interrogate, or because it is the most readily accessible?

Too often, we lean into something being readily accessible.

There are certain examples where, for instance, a model organism might have been chosen a century ago because it offered a very elegant way to interrogate one particular question. It then gets repurposed over and over again into settings where the justification for why that model organism is being used is perhaps strained.

I think we should be more judicious with ourselves about whether we should continue investing resources in areas like this or pioneer new ways of asking a question. If the mechanisms or tools we need to ask a question today are different from what they were in the past, I personally think we should lean more towards spending resources to set up the right tools, rather than trying to pigeonhole the ones we have.

There are many other examples, but that type of opinion, if stated at certain academic conferences, would definitely get me kicked out.

1:05:31 What tools are we lacking in biology today?

Eric 01:05:35

What tools are we lacking in biology today that you believe we should go and build?

Dr. Jacob Kimmel 01:05:41

There are a bunch of these. In general, we are bad at measurement, and in particular, we are bad at measurement over time. To allude back a bit: the foundation of most modern biology, molecular biology, is biochemistry. Biochemistry usually starts by ripping something open and then counting up the numbers of certain molecules you find inside. Unfortunately, most of life is not a series of static snapshots. It is a dynamic system that evolves over time. If we are always ripping systems open and killing them in order to ask what happens when they are alive, we are fundamentally limited.

The formal way of explaining this is that there is a concept in statistical mechanics known as ergodicity. You assume that if you measure individuals within a population at one point in time, the various states they occupy actually represent the temporal trajectory of one individual in that population. That assumption is core to huge fractions of all modern biology that is being done. It largely comes from the fact that we are very bad at measuring time series. We are bad at building things like molecular recording systems, so we can actually ask in a single organism or a single cell: What happened before? What came next? Where are you today? We can only do these snapshots. That is one example of better temporal recording and measurement systems.
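The ergodicity assumption can be demonstrated with a minimal simulation; the dynamics below are invented purely for illustration. When every cell follows the same rules, a snapshot across many cells stands in for one cell's history; when each cell drifts toward its own fixed set point, the snapshot is much broader than any single trajectory, and the assumption breaks.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_steps = 2_000, 2_000

def mean_reverting_paths(setpoints):
    """Each row is one cell's trajectory, drifting toward its set point."""
    x = np.zeros((len(setpoints), n_steps))
    for t in range(1, n_steps):
        noise = 0.3 * rng.normal(size=len(setpoints))
        x[:, t] = x[:, t - 1] - 0.1 * (x[:, t - 1] - setpoints) + noise
    return x

# Ergodic case: every cell obeys identical dynamics, so a snapshot
# across cells at one moment resembles one cell followed over time.
ergodic = mean_reverting_paths(np.zeros(n_cells))
snapshot_var = ergodic[:, -1].var()   # many cells, one time point
trajectory_var = ergodic[0].var()     # one cell, many time points

# Non-ergodic case: each cell reverts to its own fixed set point, so
# the cross-sectional spread no longer reflects any individual's history.
non_ergodic = mean_reverting_paths(rng.normal(scale=2.0, size=n_cells))
print(snapshot_var, trajectory_var, non_ergodic[:, -1].var())
```

In the first case the two variances agree; in the second, the snapshot variance is several times larger than any one trajectory's, which is exactly the failure mode that snapshot-based biology cannot detect without temporal recording.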

I think we are still quite bad at delivery. While we have enough tools to build what I think will be incredibly impactful medicines from the fundamental discovery perspective, it is still very hard, often, even in a tool setting, to deliver the molecules you want to deliver to the cells you want to deliver them to at the right time. That means that many questions are answered in suboptimal ways because we are not able to actually get the given genetic cassette, the given program, or the modification to the genome we want into the right place at the right time. The same is true of those edits themselves.

I think we are much better today at editing the genome than we were even just 20 years ago. I still think we have many limitations. If you talk to folks today about trying to edit 100 kilobases into the genome, that is a comparatively small edit relative to the size of a human genome. Yet it is an incredibly challenging task that has been done maybe a handful of times. If we ever want to get to the place where we can truly decode genome biology, I think we need much better editing methods. We need to be able to make insertions on the same scale as genomes. We need to be able to add and delete chromosomes quite readily. These are the types of tools where, if we had them tomorrow, a whole category of questions would be unlocked that today no one even bothers to ask, because they know very early on that it is intractable. I could keep rattling on like this, but those are a few examples.

Eric 01:08:17

Another question about the field of longevity. What do you think of this concept, Jacob, of longevity escape velocity? Do you think we are within reach in our lifetimes of achieving longevity escape velocity?

Dr. Jacob Kimmel 01:08:34

I do not think about it very much, and no.

1:08:36 Organ replacement

Eric 01:08:38

I love that.

Daniel 01:08:41

There is another field within longevity that I want to ask you about. We have mostly been talking about epigenetic reprogramming. There are also approaches of replacement, which can take many different forms: growing organs, putting those in to replace old tissue. How do you think about that field?

Dr. Jacob Kimmel 01:08:59

I think it is an interesting direction, and there are very near-term applications. If you could grow a kidney ex vivo, you would be able to help thousands of patients who would otherwise not have access to a transplant. The same is true of HSCs (hematopoietic stem cells). Many people need bone marrow transplants but cannot find a donor in time. There are really impactful applications there.

I think there are some hypotheses that have been put forward in the field: that we will slowly swap out each of our organs in a Ship of Theseus-style experiment until our entire body is young, perhaps excluding the brain. Those hypotheses lack significant evidence that they could be achieved today. Consider one test: transplants are quite tractable in isogenic animal lines, which are largely clones of one another, so you do not have to worry about immune rejection and cannot use it as an excuse for why this has not happened.

Yet we do not see examples of transplants making animals live dramatically longer, because it turns out that ripping out organs and replacing them is an incredibly traumatic procedure. For the broader hypothesis of transplanting new organs and tissues over time to achieve this Ship of Theseus renewal to work, you need to solve the problem of surgical trauma, which I think is a very tall order. It is not impossible, but it is not as parsimonious and simple as sometimes portrayed, with the idea of, “Why bother trying to fix the tissue? You will replace it. You will get a fresh lung.” It turns out there are many complications from those sorts of procedures, and you cannot ignore them. If it is not so difficult, why has it never been done in animals where immune rejection is not a problem? Anyone who wants to advocate for the more extreme replacement notion needs a strong argument for why the data look the way they do today.

I think in general, it will be far easier for patients and the healthcare system if you can fix cells where they are. You have the vast majority of the information you need. The vast majority of base pairs and the vast majority of genomes in your body are still exactly as they should have been when you were born. Therefore, if we are able to restore the function encoded by your genome within cells that are currently present—whether by epigenetic reprogramming, as I will selfishly postulate as most plausible, or by some other mechanism—I think that is a much more pragmatic approach to getting interventions for longevity in the next decade or two, as opposed to the next few centuries.

1:11:23 How to think about lifespan extension

Daniel 01:11:27

Epigenetic reprogramming seems very promising. Where do you think it caps out? Why are you not fantasizing about longevity escape velocity every day?

Dr. Jacob Kimmel 01:11:38

There are many reasons I do not fantasize about longevity escape velocity, starting with the limits of epigenetic reprogramming. There are various theoretical arguments one can make here. Given the human mortality hazard rate, if you held it fixed at its age-55 value, what is the probability you would survive to a given age? Those calculations give you estimates from 120 to 150. Can we then drive the hazard rate down further? One can play these numbers games for a long period of time.
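The numbers game here can be sketched with a constant-hazard model. The hazard value below is purely hypothetical, chosen only to show how such estimates arise, not a measured actuarial figure.

```python
import math

# Suppose the annual mortality hazard were frozen from age 55 onward at
# a constant rate h. Survival then decays exponentially:
#   S(t) = exp(-h * t), where t is years past 55.
h = 0.01  # hypothetical constant annual hazard, for illustration only

def survival_past_55(age):
    """Probability of surviving from age 55 to `age` under constant hazard h."""
    return math.exp(-h * (age - 55))

# Age by which half of such a cohort would have died: 55 + ln(2)/h.
median_age = 55 + math.log(2) / h
print(round(survival_past_55(90), 2), round(median_age))
```

With this illustrative hazard, the median lifespan lands around 124, which is the flavor of calculation behind the 120-to-150 range mentioned; lower assumed hazards push the number higher, which is why the game can be played indefinitely.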

I will take two examples: a near-term number I think about often, and a slightly longer-term one I consider less. The number I think about often is 110, because many people, in absolute terms, have lived to 110. Even more people have reached 100. There is no fundamental physical or biological reason that it shouldn’t be the median lifespan.

I consider how we can go from our current average of roughly 75 in the Western world to 100 or 110. This would add several decades of healthy life, which we know is entirely possible. I believe this is truly what is on the table in the next several decades and would represent one of the largest transformations in all of medicine in human history. Even that relatively modest goal is still a massive ambition.

Another number I consider is the lifespan of a bowhead whale, which is at least a few hundred years. This is an existence proof of a mammal with roughly the same body plan as humans – though with a very different genome – that is able to live for at least another century beyond us. While arguments suggest the brain will certainly degrade by a certain period, at least one mammal has managed to accomplish this for roughly twice what humans can. I am not sure how translatable that is, but it at least demonstrates that this is not some physical impossibility due to reasons we are failing to appreciate.

These are the numbers I consider. We are so far away from longevity escape velocity, or people living to be a thousand, that I find it is not only not super useful to talk about—because it will not happen in the near term—but it can actually be counterproductive. There are potential negative implications of people living a very long time. I do not subscribe to them or think they are true, but many people feel there are negative implications.

As soon as you start talking about people living forever, people stop paying attention to what we are discussing here and now: helping your family members and yourself stay healthier longer, to enjoy the time you have with one another, and to have more of the most precious commodity in life, which is time. They start focusing on issues like overpopulation of the planet or the morality of living that long. Those questions are reasonable enough to debate, but they are not on the table today. I spend less time trying to talk about what might happen a millennium from now.

1:14:12 Who is doing the most important work in radical life extension

Eric 01:14:17

What are some of the researchers, company builders, or investors you believe are doing the most important and rigorous work in the field of radical life extension?

Dr. Jacob Kimmel 01:14:25

Regarding life extension, I will take this in a different direction: most of the people building impactful medicines that will eventually be called longevity medicines are not branding them as such today. For example, I think about incretin mimetics from Eli Lilly. They are building molecules that, by the best health economics estimates, will probably add a couple of years of healthy life for the median American. That is huge from a single medicine — a couple of years of healthy life that all of us can expect. I consider medicines like that among the first longevity drugs.

Other people doing impactful work are unlocking new modalities and ways to build medicines, even if we do not necessarily have the payload or program we would like to deliver in hand today. I think of pioneers like the people at Alnylam, who unlocked RNA medicines as an entire modality. They not only managed to have higher success rates in their therapies than the vast majority of the industry, but they also took on many firsts simultaneously. They achieved the first siRNA medicine and the first LNP RNA medicine, working through all those technical challenges. They have now gifted the rest of the industry a platform and a set of modalities that can address a much broader target space than before.

When I think about people working on healthspan extension, I actually think more about those branded today as being in the traditional drug development world, rather than those loudly branding as longevity.

Eric 01:15:49

Great. We are at roughly the end of our time together. To wrap up today’s conversation, Jacob, we would love to go through a few lightning round questions. Daniel, would you like to kick us off?

1:16:00 Lightning round

Daniel 01:16:05

What age do you think we will live to and why?

Dr. Jacob Kimmel 01:16:08

Each of us.

Daniel 01:16:09

It was a general ‘we’, but pick each of us individually. I can tell you a little about my habits and lifestyle. I drink very little alcohol.

Dr. Jacob Kimmel 01:16:19

Each of us on the call can expect, adjusted for our current age and socioeconomic factors, to live a few years above the life expectancy typically projected, as a result of technological innovations. I am bullish that over the coming decades, we will see the rise of various longevity medicines that each independently add a few years of healthspan, the same way incretin mimetics might already have done. Collectively, you could start to see those actually showing up in your own lifespan projection.

Eric 01:16:51

Who do you think is doing the most important work in longevity right now?

Dr. Jacob Kimmel 01:16:55

That is a hard question. I am broadly going to say technologists—those allowing others to ask more questions—rather than picking any one group focused on a particular hypothesis, set of payloads, or company.

Daniel 01:17:10

Who do you think is doing the most damage to longevity?

Dr. Jacob Kimmel 01:17:13

I will not name any specific names, but generally, in a scientific field, when people make claims not substantiated by evidence, it does more damage to the field, even if it gathers more attention in the interim. There is a trade-off between effectively messaging the potential this science has to improve our lives, and being very realistic about the milestones we are likely to hit in the near future and what the science can actually achieve today. A number of groups go a little too far over the line in promising the future, without necessarily grounding people in what is currently plausible. Much of that is motivated by a desire for attention, more so than trying to promote the field broadly.

Eric 01:17:54

For individuals who are interested in contributing to longevity right now, where do you recommend they get started?

Dr. Jacob Kimmel 01:18:00

NewLimit.

I am contractually obligated to give you that answer.

Daniel 01:18:06

Let’s say they get rejected by NewLimit. What should they do? Should they go to the private sector, or into academia?

Dr. Jacob Kimmel 01:18:13

It depends on your career stage. If you are early in your career, I recommend folks do a PhD as a way to learn how to be a scientist first. You do not need to focus your early training on the eventual application you will go after.

As a bit of a secret, regardless of what your PhD program may tell you, you are probably not going to work on the topic of that degree for the rest of your life. I would really focus on getting a good scientific foundation: how to ask good questions, how to design experiments that give unambiguous answers, and how to learn to interrogate whether an argument is sound or weak.

Focus on looking for scientists and work you admire, then trying to work with those individuals. Science is still a medieval mentorship system where the best way we know to train you to be a scientist is to set you up where you can follow a good one around for about five years and figure out how they do what they do.

Daniel 01:19:04

AI applied to biology: overhyped or underhyped?

Dr. Jacob Kimmel 01:19:10

Still underhyped. There is a lot of what one might call hype in the field. However, if you think about the domains in which artificial intelligence will have positive benefits in our economy, and you buy into some of the most optimistic visions, you can imagine a decline in scarcity in many different sectors.

As a function of that decline in scarcity, resources flow to whatever the most precious good remains. I believe that is health and time. If you ask what creates the most value for humanity per unit of progress, I believe it is improving our own health. That gives us more time to do the things we would like to do, to spend time with the people we care about, and to develop our skills and crafts, reaching a height of performance we might not achieve with shorter lifespans.

We are in such early innings that there is a lot of low-hanging fruit for the application of these systems to the life sciences. Every marginal gain is tremendously impactful, more so than in many other sectors of the economy.

Eric 01:20:10

Do you have a personal routine that you adhere to for wellness and longevity?

Dr. Jacob Kimmel 01:20:15

I usually tell folks it is all the boring stuff your grandmother told you. Try to eat food, not too much, mostly vegetables. Try to exercise; resistance training is good. Sleep. These things are the best prescription we have today.

There are reasonable pieces of evidence in animal models to try much more extreme lifestyle interventions, but they come with tradeoffs. The effect sizes in animals are small enough that I personally do not feel those tradeoffs are warranted. That does not mean every other lifestyle intervention people are doing is necessarily fake; it is just a personal calculus of the risk and reward.

Daniel 01:20:52

If you were not building NewLimit, what biotech would you build?

Dr. Jacob Kimmel 01:20:55

It is a good question. There are lots of things I might do. Most recently, I have thought about what cell therapy will look like a century from now and how we will deliver molecules in a more general way.

There will be an opportunity — perhaps not next year, as this is not necessarily the time to raise a seed round — but over a very long time horizon, to use cells as delivery vehicles. That might be something close to what the final solution looks like.

There is a lot of room for building more general versions of the types of virtual cell models we have been talking about. NewLimit is the best beachhead I could imagine for getting a start there: it is the most valuable therapeutic area, with the payloads most likely to work, in the area where it is easiest to gather data. There are various other flavors of a company around that idea that make sense.

Those are just a few ideas; I have a list somewhere of other things I might do. There are many interesting ideas in agriculture that have been underexplored because we have relied on traditional selection methods and not much molecular engineering. There is a lot of interesting biomanufacturing to be done there, as well as ways to improve food yields and nutrition that have not been explored as much as one might think.

Eric 01:22:01

Wonderful. That concludes today’s conversation. Thank you for joining us today, Jacob.

Dr. Jacob Kimmel 01:22:06

Great. It was fun to chat with you all. Thanks for the time.

Daniel 01:22:08

Thank you for listening to this episode of the Free Radicals podcast. If you enjoyed this episode and would like to support us, the most helpful thing you can do is to share this with a friend that you think might enjoy it too.

Please also leave us a five-star review on Spotify, Apple Podcasts, and like and subscribe on YouTube. It would mean a lot. I am Daniel Shur, and my co-host is Eric Dai. Thanks for listening.
