
human intelligence amplification @ Berkeley Genomics Project

Active grant
$15,110 raised
$250,000 funding goal

Project summary

We aim to accelerate the creation of strong human germline genomic engineering.

Children born using this technology would be spared many of the major diseases that currently end or diminish the lives of many millions of people. Their spans of life, health, and cognitive health would be greatly extended. Potentially, parents could choose to nudge the personality traits of their future children--e.g. toward brave, kind, curious, reliable, determined people. What is definitely feasible is increasing IQ, which, while far from everything that matters even within cognitive capacity, is nevertheless a minimum viable pathway to producing many more people capable of groundbreaking scientific and philosophical insights. Crucially, this technology is humanity's best hope for making a generation of people who will be able to fully deal with the existential threat of AGI.

Our plans are to tap into techno-optimist, humanist, and existential-derisking capital---financial, political, and human---to accelerate the remaining scientific discoveries and technological innovations that are prerequisite to safe, accessible, powerful germline engineering.

Strategy behind this project

The supersupergoal of this project is to decrease existential risk. Technical AGI alignment is likely far too difficult for the current generation of humans (see e.g. https://www.lesswrong.com/posts/nwpyhyagpPYDn4dAW/the-field-of-ai-alignment-a-postmortem-and-what-to-do-about). The only hope is to delay the creation of AGI, at least until humans can solve AGI alignment.

Pursuant to that, the supergoal of this project is to accelerate strong human intelligence amplification. The main way this helps is by making there be smarter people who can solve AGI alignment. A secondary benefit is to offer a vision for humanity's imminent thriving through intelligence that does not require making AGI.

The only strong human intelligence amplification method that is both likely to work and likely to be feasible soon is strong human germline genomic engineering; see https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods.

Therefore the goal of this project is to accelerate strong human germline genomic engineering.

What are this project's goals? How will you achieve them?

(This is copied from https://berkeleygenomics.org/.)

Our mission is to unlock the promise of safe, accessible, and powerful germline genetic engineering for humanity.

Our plans:

  • Publicly present the case in favor of making human germline engineering technology soon.

  • Work out and describe how to make this technology in a safe, socially beneficial, widely accessible, and effective way.

  • Through dialogue with scientists, the public, and policymakers, create innovation-positive ethical guidelines and legal regulation for germline engineering.

  • Generate social momentum and help potential funders, scientists, and entrepreneurs to coordinate.

How will this funding be used?

Salary for me for up to two years, payment for research contractors, payment for event operations.

With minimal funding I can, you know, have an apartment for an additional month. Full funding would enable me to focus on work instead of fundraising and to contract more help to go faster.

Who is on your team? What's your track record on similar projects?

It's me and my cofounder Rachel Reid.

Rachel is focusing on running events. In 2025 we've held three talks from experts on aspects of germline engineering, with more coming. In June we're hosting a summit: https://berkeleygenomics.org/events/ReproFro2025.html

I don't have a track record on similar projects. In the past 3 years I've studied the field (reading, writing, networking, fundraising for other people). In 2025 I made http://berkeleygenomics.org/, and wrote a book on technical methods for strong germline engineering: https://www.lesswrong.com/posts/2w6hjptanQ3cDyDw7/methods-for-strong-human-germline-engineering

What are the most likely causes and outcomes if this project fails?

Causes of failure:

Outcomes:

The most likely outcome of failure is simply that nothing especially useful happens. Some more articles will be written which could be marginally helpful. There might also be some mean news articles written about the project.

There are potential perils to success, described here: https://berkeleygenomics.org/articles/Potential_perils_of_germline_genomic_engineering.html

How much money have you raised in the last 12 months, and from where?

$0 from nowhere, 0 explanations given. I'm just burning through my meager personal savings.

donated $7,000
šŸž

Rafe Kennedy

about 20 hours ago

I think that if we make sufficiently smart humans, and they come of age before we have lost the future to AI, then x-risk will be much much lower. I think I'm at something like an expected 50% reduction in x-risk, given this works and timelines are long enough. Tsvi has longer timelines than I do. Maybe he's right, or maybe AI will hit a wall, or maybe we will manufacture a slowdown.

As I currently see it, our best hope for the future is that we stop developing AI for a long while. I'm glad many people are working on that, but I think it's also important to work on the complements to that: plans for "winning the peace", as a friend of mine put it. If we make it to a substantial pause, my guess is it will still be urgent to get to more robust existential safety, and we'll be glad for starting earlier.

So I'm donating $7k to Berkeley Genomics. I am considering giving more, but I also do value my own financial flexibility pretty highly for x-risk reduction.

I have some concerns with this plan that haven't been that thoroughly addressed by Berkeley Genomics[1].

For example, I think that most people interested in germline engineering want to only create people who are predicted to be at most a bit smarter than anyone who ever lived. Though this seems like a wise and good deontological constraint to me, I'm worried about its consequences. It seems wise and good because our predictors and our linearity assumptions probably break down if we try and push on them too hard, and we risk doing something counterproductive or unethically creating a harmed child. But I also worry that this makes this project less differentially helpful for alignment over capabilities than I'd like.

I think at some level of smarts, you spontaneously realise that AI loss-of-control is a problem (assuming I'm right about that). But I'm not sure at what level of pure smarts that happens. I think, for example, von Neumann had higher g than the founders of the AI X-Risk field, but I fear that von Neumann might have been a capabilities researcher. Slightly smarter than von Neumann isn't obviously a level at which you spontaneously notice AI X-Risk.[2]

Another concern: I understand that we don't have very good predictors for personality traits, which, I'm told, are less well modelled by an additive assumption on the variants. I think it would be good to screen on malignity of personality; I suspect it might be unethical to exert a lot of specific control over a child's personality for instrumental reasons, but I think it would be good to at least check that the child isn't unusually cruel or domineering or something.
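For concreteness, the "additive assumption" referred to here is that a predicted trait value is just a weighted sum of allele dosages. Below is a minimal illustrative sketch in Python; the variant names, effect sizes, and genotypes are made up for illustration, real personality predictors explain far less variance, and this is the generic textbook model, not anything specific to Berkeley Genomics.

```python
# Illustrative additive polygenic score (all numbers are hypothetical).
# Each variant contributes effect_size * dosage, where dosage is the number
# of copies (0, 1, or 2) of the effect allele -- the linearity assumption.

effect_sizes = {"rs_a": 0.12, "rs_b": -0.05, "rs_c": 0.08}  # hypothetical per-allele effects
genotype = {"rs_a": 2, "rs_b": 1, "rs_c": 0}                # hypothetical dosages for one genome

def additive_score(effects, dosages):
    """Weighted sum over variants; no interaction or dominance terms."""
    return sum(effects[v] * dosages.get(v, 0) for v in effects)

print(round(additive_score(effect_sizes, genotype), 2))  # 0.12*2 - 0.05*1 + 0.08*0 = 0.19
```

The worry above is that this weighted-sum form, and the effect-size estimates behind it, are only well validated near the middle of the observed distribution; for personality traits, where effects are apparently less additive, and for selection far into the tails, the model is much shakier.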

I personally have some ethical uncertainty about genomic engineering. I'm unsure about the appropriate deontology for relating to choosing traits of the unborn, and some people I take seriously are worried about risks of dystopias[3]. So it seems great that Tsvi is writing a lot about the ethics of this approach. I think that this team seems unusually likely to stop their work if they uncover good enough arguments that it's a bad thing to do (whether because it's immoral or because it won't work). AFAICT, their initial advocacy approach is carefully and publicly building sane ethical and technological frameworks. I expect that Berkeley Genomics' writing will be the most helpful stuff I read in the next year for thinking about the ethics of germline engineering.

I've spent a decent amount of time reading and thinking about this. It seems like a good sign that a lot of the writing that best addressed my concerns was written by Tsvi over the last few years. For example, when I was trying to think about how plans that take a long time interface with timelines, I found this post by Tsvi helpful. There are definitely a bunch more things I'd like to add into the model in the blogpost to reason about this, but I like that he put in the legwork to do the first quantitative modelling pass!

I'm currently deferring fairly hard on the technical picture. I've spent some time trying to understand the problems and approaches as Tsvi sees them, but not that much time trying to redteam him or question his assumptions. I hope to spend more time thinking about the technical side in the future.

I feel pretty excited about broadly supporting Tsvi, and also about the specific focuses of Berkeley Genomics. I hope they succeed at reaching their goals!

[1]: Though Tsvi's been super up for engaging me on concerns I have! I just have to find the time and availability.

[2]: I guess they wouldn't have to really "spontaneously" notice this problem, but rather come to the correct conclusion (whatever that may be) given the arguments already present in the world.

[3]: Though I'm not, on my inside view, worried about that yet.


Tsvi Benson-Tilsen

about 16 hours ago

@Rafe Thank you!

Responded to some points here: https://x.com/BerkeleyGenomic/status/1909101431103402245

donated $50

Mati Roy

8 days ago

It's very little money, but symbolic. I think it's very important and you seem to be doing solid (and very neglected) work! I hope I can support more in the future!

Tsvi Benson-Tilsen

@matiroy Thanks Mati!

donated $60
Romain_D

Good idea, maybe it can work.


Austin Chen

23 days ago

Approving this project as compatible with our charitable mission of furthering public scientific research! Tsvi has a track record within the rationalist community, and this agenda seems intriguing; I hope it goes well.

donated $5,000
šŸ‹

This is an obvious thing to be trying. And it would be sad if this didn't get the minimum funding. If there are other viable alternatives that people can see are plausible, it could make pausing AI more palatable.

I don't know how realistic this is, but your writing is well thought out and you seem fairly intelligent, and I want to encourage you to keep doing this.

Good luck!

Tsvi Benson-Tilsen

@rahulxyz Thanks! Much appreciated. We'll make it happen. Maybe.

donated $50

Joona Heino

24 days ago

would love to see progress and what better way to bring it about

donated $200

Luca D.

25 days ago

!

donated $200

Luca D.

26 days ago

the project makes sense

donated $1,000

Kaarel Hänni

27 days ago

Instead of trying to make an alien god that is nice to us throughout its unfolding, I think we should indefinitely be becoming smarter ourselves. Becoming somewhat smarter faster can help us collectively understand that we shouldn't be making an alien god (assuming I'm indeed right about this) in time before an alien god is created, help us (figure out how to) reorganize society so that an alien god is radically less likely to be created per unit of time, and help us solve many of the various other problems (like destitution, disease, and death in general) we're facing ourselves. Also, becoming smarter and understanding more is cool.

Tsvi Benson-Tilsen

@Kaarel Thanks for your offer!

I'm unsure whether I agree with this strategically or not.

One consideration is that it may be more feasible to go really fast with [AGI alignment good enough to end acute risk] once you're smart enough, than to go really fast with convincing the world to effectively stop AGI creation research. The former is a technical problem you could, at least in principle, solve in a basement with 10 geniuses; the latter is a big messy problem involving myriads of people. I have substantial probability on "no successful AGI slowdown, but AGI is hard to make". In those worlds, where algorithmic progress continually burns the fuse on the intelligence explosion, a solution remains urgent, i.e. the sooner it comes, the more doom it prevents.

But maybe good-enough AGI alignment is really extra super hard, which is plausible. Maybe effective world-coordination isn't as hard.

But I do mostly agree with this in terms of long-term vision.