Rebecca Petras
A system of collective action is necessary to help tech workers safely speak out about concerns
Sterlin Lujan
Grant to Get the Institute Moving at Lightspeed
Murray Buchanan
Leveraging AI to enable coordination without demanding centralization
Orpheus Lummis
Non-profit facilitating progress in AI safety R&D through events
PauseAI US
SFF main round did us dirty!
Piotr Zaborszczyk
Reach the university that trained close to 20% of OpenAI's early employees
Michel Justen
Help turn the video from an amateur side project into an exceptional, animated distillation
Tyler Johnston
AI-focused corporate campaigns and industry watchdog
ampdot
Community exploring and predicting potential risks and opportunities arising from a future that involves many independently controlled AI systems
Ekō
Case Study: Defending OpenAI's Nonprofit Mission
Nuño Sempere
A foresight and emergency response team seeking to react fast to calamities
David Conrad
We are fostering the next generation of AI policy professionals through the Talos Fellowship. Your help will directly increase the number of places we can offer.
Jørgen Ljønes
We provide research and support to help people move into careers that effectively tackle the world’s most pressing problems.
Jordan Braunstein
Combining Kickstarter-style functionality with transitional anonymity to decrease risk and raise the expected value of participating in collective action.
Zaid Moosa
Uncovering microplastics’ hidden climate change footprint
Alex Lintz
Mostly retroactive funding for prior work on AI safety comms strategy as well as career transition support.
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Claire Short
Program for Women in AI Alignment Research
Liron Shapira
Let's warn millions of people about the near-term AI extinction threat by directly & proactively explaining the issue in every context where it belongs
Apart Research
Support the growth of an international AI safety research and talent program