A recent leadership shift at OpenAI raises pivotal questions about Artificial General Intelligence (AGI) and its governance. Sam Altman's departure as CEO follows diverging views within OpenAI's board on what AGI truly means and how to handle its potential arrival.
According to OpenAI's charter, AGI refers to "highly autonomous systems that outperform humans at most economically valuable work". At a talk at the Cambridge Union in England, however, Altman set a higher bar for AGI, including the "discovery of new types of physics".
This definition matters. Under its partnership with OpenAI, Microsoft has access to AI models that fall below the AGI threshold.
The higher Altman sets the bar for AGI, the more advanced the models that land in Microsoft's ambit, since they remain classified as pre-AGI.
Internal developments suggest OpenAI might be closer to AGI than expected, leading to a strategic divide.
While Altman seemed inclined toward broader distribution, including use by Microsoft, others, such as chief scientist Ilya Sutskever, who oversaw AGI alignment at OpenAI, favored a more cautious approach.
This clash of visions — between unleashing potential AGI advancements and ensuring rigorous safety and alignment — might have catalyzed Altman's exit.
The OpenAI board, committed to "safe AGI that is broadly beneficial", had to make a tough call.
Stay tuned as the story continues to unfold.