OpenAI’s Superalignment team, charged with controlling the existential risks posed by superhuman AI systems, has reportedly been disbanded, according to a Wired report on Friday. The news comes just days ...
OpenAI has effectively dissolved a team focused on ensuring the safety of possible future ultra-capable artificial intelligence systems, following the departure of the group’s two leaders ...
OpenAI on Friday said that it has disbanded a team devoted to mitigating the long-term dangers of super-smart artificial intelligence (AI). It began dissolving the so-called “superalignment” group ...
OpenAI dissolves the 'superalignment' team led by Ilya Sutskever and Jan Leike; safety efforts will now be led by John Schulman. The departures raise doubts about the company's dedication to AGI safety. Emotional ChatGPT version unveiled ...
Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable ...
Sam Altman is in the hot seat after a former OpenAI executive raised concerns about AI safety. The CEO said OpenAI had "a lot more to do" to address red flags raised by Jan Leike. OpenAI has been ...
On Monday, OpenAI announced the formation of a new "Safety and Security Committee" to oversee risk management for its projects and operations. The announcement comes as the company says it has ...
OpenAI's Superalignment team, responsible for developing ways to govern and steer "superintelligent" AI systems, was promised 20% of the company's compute resources, according to a person from ...
The decision to rethink the team comes as a string of recent departures from OpenAI revives questions about the company’s approach to balancing speed versus safety in developing its AI products.