News

OpenAI has disbanded its “superalignment team” tasked with staving off the ... In an accompanying blog post, the company claimed superintelligent AI “will be the most impactful technology ...
OpenAI has disbanded its Long-Term AI Risk Team, responsible for addressing the existential dangers of AI. The disbanding follows several high-profile departures, including co-founder Ilya ...
Microsoft (NASDAQ:MSFT)-backed OpenAI is dissolving its "AGI Readiness" team, which advised the startup on its capacity to handle AI and the world's readiness to manage the technology, according ...
OpenAI has lost nearly half of the company's team working on AI ... "the most capable and safest AI systems and believe in our scientific approach to addressing risk." OpenAI, the spokesperson ...
In May, OpenAI disbanded its Superalignment team — which OpenAI said focused on "scientific and technical breakthroughs to steer and control AI systems ... in place and the risk levels that ...
“I think it’s just people sort of individually giving up,” – Daniel Kokotajlo, ex-OpenAI staffer. While the Superalignment team was tasked with handling a wide range of AI risk factors ...
OpenAI, in response to claims that it isn’t taking AI safety seriously, has launched a new page called the Safety Evaluations ...
“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” said OpenAI spokesperson Liz Bourgeois in a ...
A group of AI whistleblowers claim tech giants like ... "systems and believe in our scientific approach to addressing risk,” OpenAI said in a statement. “We agree that rigorous debate is ...