News

A former OpenAI researcher speculates that half of the ChatGPT maker's superalignment team departed because the company is on the precipice of hitting the AGI benchmark, but it can't handle all that ...
The firm wants to prevent a superintelligence from going rogue. This is the first step. OpenAI has announced the first results from its superalignment team, the firm’s in-house initiative ...
A new report, dubbed the “OpenAI Files,” aims to shed light on the inner workings of the leading AI company as it races to ...
OpenAI initially had about 30 people working on AGI safety, but 14 of them have left the company this year, said former researcher Daniel Kokotajlo.
“At least circumstantially, these changes — the shifting emphasis to for-profit, turnover at the top, as well as the dissolution of OpenAI’s super alignment team that focused on AI risk ...
The recent dissolution of OpenAI’s super alignment team due to resource constraints highlights the challenges in ensuring the safe development and deployment of AGI.
An internal OpenAI strategy document titled “ChatGPT: H1 2025 Strategy” describes the company’s aspiration to build an “AI ...
OpenAI’s mass resignations raise urgent questions: ... From co-founders Ilya Sutskever and John Schulman to Jan Leike, the former head of the company’s “Super Alignment” team, ...
Previously he worked on the superalignment team at OpenAI. He sees the scale-up of AI technologies as "growing rapidly," so much so that he believes there can be "Transformer-like breakthroughs with ...