"AFTER SAM ALTMAN was sacked from OpenAI in November of 2023, a meme went viral among artificial-intelligence (AI) types on social media. “What did Ilya see?” it asked, referring to Ilya Sutskever, a co-founder of the startup who triggered the coup. Some believed a rumoured new breakthrough at the company that gave the world ChatGPT had spooked Mr Sutskever.
Although Mr Altman was back in charge within days, and Mr Sutskever said he regretted his move, whatever Ilya saw appears to have stuck in his craw. In May he left OpenAI. And on June 19th he launched Safe Superintelligence (SSI), a new startup dedicated to building a superhuman AI. The outfit, whose other co-founders are Daniel Gross, a venture capitalist, and Daniel Levy, a former OpenAI researcher, does not plan to offer any actual products. It has not divulged the names of its investors.
You might wonder why anyone would invest, given the project’s apparent lack of interest in making money. Perhaps backers hope SSI will in time create a for-profit arm, as happened at OpenAI, which began as a non-profit before realising that training its models required lots of expensive computing power. Maybe they think Mr Sutskever will eventually convert SSI into a regular business, which is something Mr Altman recently hinted at to investors in OpenAI. Or they may have concluded that Mr Sutskever’s team and the intellectual property it creates are likely to be valuable even if SSI’s goal is never reached.
A more intriguing hypothesis is that SSI’s financial supporters believe in what is known in AI circles as the “fast take-off” scenario. In this scenario there comes a point at which AIs become clever enough to devise new and better AIs themselves. Those new and better AIs then rapidly improve upon themselves, and so on, in an “intelligence explosion”. Even if such a superintelligence were the only product SSI ever sold, the rewards would be so enormous as to be worth a flutter.
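The compounding logic behind the fast take-off can be sketched in a few lines of code. The toy model below is purely illustrative and is not drawn from the article or from anything SSI has published: the growth rate, the fixed increment and the number of generations are invented for the example, simply to contrast compounding improvement with the gradualists’ steady, additive progress.

```python
# Illustrative only: a toy contrast between the "fast take-off" story and the
# gradualist one. All numbers are made up for the sake of the example.

def fast_takeoff(capability=1.0, feedback=0.5, generations=10):
    """Each generation of AI improves the next in proportion to its own
    capability, so the gains compound (an 'intelligence explosion')."""
    history = [capability]
    for _ in range(generations):
        capability += feedback * capability  # better AIs design better AIs
        history.append(capability)
    return history

def slow_takeoff(capability=1.0, step=0.5, generations=10):
    """The gradualist view: each generation adds a roughly fixed increment."""
    history = [capability]
    for _ in range(generations):
        capability += step
        history.append(capability)
    return history

if __name__ == "__main__":
    for g, (f, s) in enumerate(zip(fast_takeoff(), slow_takeoff())):
        print(f"generation {g:2d}: compounding {f:7.1f}  vs  gradual {s:4.1f}")
    # After ten generations the compounding curve is roughly 58 times its
    # starting point, while the linear one has merely sextupled.
```

The point of the sketch is only that once improvement feeds back into itself, the curve bends sharply upwards; whether real AI research behaves this way is exactly what the two camps described below dispute.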
The idea of a fast take-off has lurked in Silicon Valley for over a decade. It resurfaced in a widely shared 165-page paper published in June by a former OpenAI employee. Entitled “Situational Awareness” and dedicated to Mr Sutskever, it predicts that an AI as good as humans at all intellectual tasks will arrive by 2027. One such human intellectual task is designing AI models. And presto, fast take-off.
The paper’s author argues that before long America’s government will need to “lock down” AI labs and move the research to an AI equivalent of the Manhattan Project. Most AI researchers seem more circumspect. Half a dozen who work at leading AI labs tell The Economist that the prevailing view is that AI progress is more likely to continue in a gradual fashion than with a sudden explosion.
OpenAI, once among the most bullish about AI progress, has moved closer to the gradualist camp. Mr Altman has repeatedly said that he believes in a “slow take-off” and a more “gradual transition”. His company’s efforts are increasingly focused on commercialising its products rather than on the fundamental research needed for big breakthroughs (which may explain several recent high-profile departures). Yann LeCun of Meta and François Chollet of Google, two star AI researchers, have even said that current AI systems hardly merit being called “intelligent”.
An updated model released on June 20th by Anthropic, another AI lab, is impressive but offers only modest improvements over existing models. OpenAI’s new offering could be ready next year. Google DeepMind, the search giant’s main AI lab, is working on its own supercharged model. With luck these will be more deserving of the I in AI. Whether they are deserving enough to convert gradualists to the explosive creed is another matter." [1]
1. “Thinking fast and slow”. The Economist (London), vol. 451, no. 9403, June 29th 2024, pp. 56-57.