News

OpenAI’s Superalignment team, charged with controlling the existential danger of a superhuman AI system, has reportedly been disbanded, according to Wired on Friday. The news comes just days after the departures of co-founder Ilya Sutskever and team co-lead Jan Leike.
In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators.
OpenAI has disbanded its Long-Term AI Risk Team, responsible for addressing the existential dangers of AI. The disbanding follows several high-profile departures, including co-founder Ilya Sutskever.
Microsoft (NASDAQ:MSFT)-backed OpenAI is dissolving its "AGI Readiness" team, which advised the startup on its capacity to handle AI and on the world's readiness to manage the technology, according to reports.
OpenAI has reportedly lost nearly half of the company's team working on AI safety. A company spokesperson said OpenAI provides "the most capable and safest AI systems" and believes "in our scientific approach to addressing risk."
In May, OpenAI disbanded its Superalignment team, which the company said focused on "scientific and technical breakthroughs to steer and control AI systems much smarter than us."
OpenAI had already disbanded its Superalignment team in May, which had been working on assessing the long-term risks of AI. The team's head at the time, Jan Leike, criticized the company's safety priorities on his departure.
Relationships between AI giants are beginning to shift as OpenAI is no longer locked into a deal with Microsoft Azure for cloud computing.
After leaving, Leike said on X that the team had been "sailing against the wind." "OpenAI must become a safety-first AGI company," he wrote, adding that building such systems is "an inherently dangerous endeavor."
Should talking to an AI chatbot be protected and privileged information, like talking to a doctor or lawyer? A new court case is raising the question.