OpenAI co-founder devotes new company to ‘safe superintelligence’

OpenAI co-founder Ilya Sutskever this week announced a new artificial intelligence (AI) venture focused on safely developing “superintelligence.”

The new company, Safe Superintelligence Inc. — SSI for short — has the sole purpose of creating a safe AI model that is more intelligent than humans.

“Building safe superintelligence (SSI) is the most important technical problem of our time,” the company announced in a social media post. “We’ve started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.”

Sutskever left OpenAI last month in the wake of a failed ouster of CEO Sam Altman that he backed. The attempted move, which Sutskever later said he regretted, led to a period of internal turmoil centered on whether leaders at OpenAI were prioritizing business opportunities over AI safety.

In addition to Sutskever, SSI is co-founded by former Apple AI lead Daniel Gross and former OpenAI engineer Daniel Levy.

“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” the trio said in a statement. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace.”

They said a focus on safety alone means “no distraction by management overhead or product cycles” and that their goals are “insulated from short-term commercial pressures.”

Sutskever told Bloomberg that SSI won’t release any products or pursue any work other than building a safe superintelligence. He declined to name the company’s financial backers or say how much the venture has raised.