Mikolaj Kniejski
Conduct ACE-style cost-effectiveness analysis of technical AI safety orgs.
Iván Arcuschin Moreno
Iván and Jett are seeking funding to research unfaithful chain-of-thought, under Arthur Conmy's mentorship, for a month before the start of MATS.
Orpheus Lummis
Non-profit facilitating progress in AI safety R&D through events
Piotr Zaborszczyk
Reach the university that trained close to 20% of OpenAI's early employees
Michaël Rubens Trazzi
How California became ground zero in the global debate over who gets to shape humanity's most powerful technology
ampdot
Community exploring and predicting potential risks and opportunities arising from a future that involves many independently controlled AI systems
Ekō
Case Study: Defending OpenAI's Nonprofit Mission
Piroska Rakoczi
I would investigate whether LLMs used in AI agents show any signs of personhood.
Dr Waku
CANCELLED: Cover anticipated costs for making videos in 2025
Jørgen Ljønes
We provide research and support to help people move into careers that effectively tackle the world’s most pressing problems.
Alex Cloud
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Claire Short
Program for Women in AI Alignment Research
Liron Shapira
Let's warn millions of people about the near-term AI extinction threat by directly & proactively explaining the issue in every context where it belongs
Carmen Csilla Medina
Damiano Fornasiere and Pietro Greiner
PIBBSS
Fund unique approaches to research, field diversification, and scouting of novel ideas by experienced researchers, supported by the PIBBSS research team
David Corfield
Site maintenance, grant writing, and leadership handover
Jesse Hoogland
Addressing Immediate AI Safety Concerns through DevInterp
Apart Research
Support the growth of an international AI safety research and talent program