I think that if we make sufficiently smart humans, and they come of age before we have lost the future to AI, then x-risk will be much, much lower. I think I'm at something like an expected 50% reduction in x-risk, given this works and timelines are long enough. Tsvi has longer timelines than I do. Maybe he's right, or maybe AI will hit a wall, or maybe we will manufacture a slowdown.
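To make explicit how the conditional estimate cashes out unconditionally (the probabilities below are placeholders for illustration, not my actual credences):

$$\underbrace{P(\text{this works})}_{\text{say }0.3}\times\underbrace{P(\text{timelines long enough})}_{\text{say }0.4}\times\underbrace{50\%}_{\text{conditional reduction}}=6\%\ \text{expected x-risk reduction overall.}$$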
As I currently see it, our best hope for the future is that we stop developing AI for a long while. I'm glad many people are working on that, but I think it's also important to work on the complements to that: plans for "winning the peace", as a friend of mine put it. If we make it to a substantial pause, my guess is it will still be urgent to get to more robust existential safety, and we'll be glad to have started earlier.
So I'm donating $7k to Berkeley Genomics. I am considering giving more, but I also do value my own financial flexibility pretty highly for x-risk reduction.
I have some concerns about this plan that haven't yet been thoroughly addressed by Berkeley Genomics[1].
For example, I think that most people interested in germline engineering want to only create people who are predicted to be at most a bit smarter than anyone who has ever lived. Though this seems like a wise and good deontological constraint to me, I'm worried about its consequences. It seems wise and good because our predictors and our linearity assumptions probably break down if we try to push on them too hard, and we risk doing something counterproductive or creating a child who has been unethically harmed. But I also worry that this makes the project less differentially helpful for alignment over capabilities than I'd like.
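To gesture at why the linearity assumption matters (the formalism here is my own sketch, not Berkeley Genomics' model): a standard polygenic predictor is roughly an additive score,

$$\hat{y} = \mu + \sum_{i=1}^{n} \beta_i x_i,$$

where $x_i \in \{0,1,2\}$ counts copies of variant $i$ and each effect size $\beta_i$ is fit on observed genomes. Selecting or editing so that $\sum_i \beta_i x_i$ lands far beyond any genome in the training data extrapolates the linear fit well outside the region where it was validated, which is exactly where you'd expect it to break.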
I think at some level of smarts, you spontaneously realise that AI loss-of-control is a problem (assuming I'm right about that). But I'm not sure at what level of pure smarts that happens. I think, for example, von Neumann had higher g than the founders of the AI x-risk field, but I fear that von Neumann might have been a capabilities researcher. Slightly smarter than von Neumann isn't obviously a level at which you spontaneously notice AI x-risk.[2]
Another concern: I understand that we don't have very good predictors for personality traits, which, I'm told, are less well modelled by an additive assumption on the variants. I think it would be good to screen against malign personality traits; I suspect it might be unethical to exert a lot of specific control over a child's personality for instrumental reasons, but I think it would be good to at least check that the child isn't unusually cruel or domineering or something.
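For intuition on the additivity point (again, a toy sketch of mine, not a claim about the actual genetic architecture): an additive score ignores interactions between variants, whereas a model that allows them looks like

$$\hat{y} = \mu + \sum_i \beta_i x_i + \sum_{i<j} \gamma_{ij} x_i x_j.$$

If the interaction terms $\gamma_{ij}$ carry much of the variance, as I'm told may be the case for personality, then even excellent estimates of the $\beta_i$ won't make the additive score predict well.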
I personally have some ethical uncertainty about genomic engineering. I'm unsure about the appropriate deontology for choosing traits of the unborn, and some people I take seriously are worried about risks of dystopias[3]. So it seems great that Tsvi is writing a lot about the ethics of this approach. This team seems unusually likely to stop their work if they uncover good enough arguments that it's a bad thing to do (whether because it's immoral or because it won't work). AFAICT, their initial advocacy approach is carefully and publicly building sane ethical and technological frameworks. I expect that Berkeley Genomics' writing will be the most helpful stuff I read in the next year for thinking about the ethics of germline engineering.
I've spent a decent amount of time reading and thinking about this. It seems like a good sign that a lot of the writing that best addressed my concerns was written by Tsvi over the last few years. For example, when I was trying to think about how plans that take a long time interface with timelines, I found this post by Tsvi helpful. There are definitely a bunch more things I'd like to add to the model in that blog post to reason about this, but I like that he put in the legwork to do the first quantitative modelling pass!
I'm currently deferring fairly hard on the technical picture. I've spent some time trying to understand the problems and approaches as Tsvi sees them, but not that much time trying to red-team him or question his assumptions. I hope to spend more time thinking about the technical side in the future.
I feel pretty excited about broadly supporting Tsvi, and also about the specific focuses of Berkeley Genomics. I hope they succeed at reaching their goals!
[1]: Though Tsvi's been super up for engaging with me on my concerns! I just have to find the time and availability.
[2]: I guess they wouldn't have to really "spontaneously" notice this problem, but rather come to the correct conclusion (whatever that may be) given the arguments already present in the world.
[3]: Though I'm not, on my inside view, worried about that yet.