OpenAI launches a safety fellowship


OpenAI has announced the OpenAI Safety Fellowship, a pilot programme that will fund a cohort of external researchers to conduct independent work on AI safety and alignment.

The programme runs from 14 September 2026 to 5 February 2027. Fellows will receive a monthly stipend, computing resources, and mentorship from OpenAI researchers, and are expected to produce a significant research output, such as a paper, benchmark, or dataset, by the programme's end.

Applications close on 3 May, with successful candidates notified by 25 July.

Priority research areas include safety evaluation, robustness, scalable mitigation strategies, privacy-preserving methods, agentic oversight, and high-severity misuse domains.

OpenAI has specified that fellows will receive API credits but not access to internal systems. The programme is described as a pilot, and the company says it is open to researchers from computer science, social sciences, cybersecurity, privacy, and human-computer interaction, emphasising research ability and technical judgement over academic credentials.

The announcement was posted to OpenAI's social media accounts at 12:12 PM on 6 April. Hours earlier, The New Yorker published a major investigation by Ronan Farrow and Andrew Marantz reporting that OpenAI had dissolved both its Superalignment team and its AGI Readiness team, and had dropped safety from the list of its most significant activities on its IRS Form 990 filings.

The investigation also reported that when the journalists asked to speak with researchers working on existential safety, an OpenAI representative replied: ‘What do you mean by existential safety? That's not, like, a thing.' Farrow explicitly noted the timing of the fellowship announcement on social media.

The pattern of safety team dissolutions at OpenAI is by now well documented. The Superalignment team, announced in mid-2023 with a pledge of 20% of the company's compute over four years, was dissolved in May 2024 after co-leads Ilya Sutskever and Jan Leike departed.

Leike wrote on departure that safety culture and processes had ‘taken a backseat to shiny products.' The AGI Readiness team was then dissolved in October 2024 when its leader, Miles Brundage, left.

The Mission Alignment team, Superalignment's successor, was disbanded in February 2026 after 16 months. By early 2026, the people most associated with safety oversight at OpenAI had largely departed or been moved into roles with undefined responsibilities.

The New Yorker investigation also reported that the word ‘safely' had been deleted from OpenAI's mission statement in its IRS filings.

OpenAI has not publicly responded to the specific claims in the New Yorker investigation. The Safety Fellowship, as structured, directs external researchers toward safety questions at arm's length from the company, rather than restoring internal safety infrastructure.

Whether an external fellowship programme is a meaningful substitute for in-house alignment research is a question the AI safety research community is likely to debate in the weeks ahead.