OpenAI Co-Founder Launches New Startup


It might as well be an episode from “Days of Our (AI) Lives.” Ilya Sutskever, co-founder of OpenAI, who left the company earlier this spring, is back in action with a new startup, Safe Superintelligence (SSI). Daniel Gross, former AI lead at Apple, and researcher Daniel Levy are co-founders of the company.

Sutskever is a well-respected and revered researcher who was part of Geoffrey Hinton’s core research group and later worked at Google. His exit from OpenAI was closely followed. Bloomberg, which first reported the news, offers scant details about the what and how of the company. What does “safe” mean when it comes to superintelligence? Sutskever cryptically explains:
“By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety.’”
You might remember Ilya as a key player in the OpenAI board-led ouster of Sam Altman as chief executive officer. Later, he reversed course, faded into the background, and eventually left OpenAI. He wasn’t quite thrilled with the direction of the company, which is looking to scrap its egalitarian roots for a more capitalistic model. The new startup seems intended to correct some of the sins of OpenAI, that is, if you read between the lines.
“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then. It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race,” Sutskever told Bloomberg in an interview.
This positioning is interesting and curious, though I agree that “superintelligence” isn’t really a business opportunity. It’s more likely about power, control, and monetary gains.