News Highlights:
- Safe SuperIntelligence aims to build machines that surpass human intelligence while ensuring they remain safe.
- The company focuses on solving the dual technical challenges of safety and capabilities in superintelligence.
- The company plans to advance capabilities rapidly while keeping safety ahead, so development can scale peacefully.
A new company, Safe SuperIntelligence, has launched with the goal of building superintelligence, a machine more intelligent than humans, safely.
It was founded by OpenAI co-founder and former chief scientist Ilya Sutskever, together with Daniel Gross and Daniel Levy.
“Superintelligence is within reach,” reads a statement from the company. “Building safe superintelligence (SSI) is the most important technical problem of our time.”
SSI has a single goal and a single product: a safe superintelligence.
“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
“This way, we can scale in peace.”
The company is registered in the US, with offices in Palo Alto and Tel Aviv, and is actively recruiting.
Sutskever was among the OpenAI board members involved in forcing Sam Altman out of that company in November.