AGI may emerge by 2030
Taking Control Over AI before It Starts Controlling Us
If AGI really does emerge by 2030, then what? A partial answer came in an open letter published by the Future of Life Institute (2023) on 29 March 2023 and signed by more than 2,000 eminent AI scientists and researchers. The signatories call for ‘all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4’. Coincidentally, I have just been finishing my book on this very subject. Since the topic is so current, I share here a summary of my answer to that call, which goes much further and requires far more profound changes in the way we control AI development. As is always the case with summaries, it may raise more questions than it answers, and without extended justification the argument may not be fully understood, especially when a proposal is as radical as this one. The devil is, as usual, in the detail.
Most of us can now see that it is not only the weather that is changing more quickly. The pace of change in general has never been so fast: what not long ago took a decade now barely takes a year. But it is in Artificial Intelligence (AI) that a nearly exponential pace of change is most evident, particularly since 2022. The release of ChatGPT, along with several similar AI assistants, represents a significant advance towards Artificial General Intelligence (AGI) with human-level intelligence. Ray Kurzweil, an eminent futurist, predicted in 2014 that AI will reach human-level intelligence by 2029. He did not define what ‘human intelligence’ means, and there is still no agreement on that among AI researchers.
Perhaps more important than a debate on the nature of intelligence is whether AGI could escape our control by about 2030. Rather than a single moment, such a loss of control would be a gradual process, combined with subtle influence over our decisions, until AGI starts making decisions for us. A total loss of control over AGI will happen when we are unable to reverse such decisions. AGI, as a self-learning intelligence, will be capable of solving any task better than any human in any situation. If that task is to escape human control, AGI will be capable of achieving it by 2030, unless urgent decisions are made and the necessary measures implemented.
Once AGI is beyond our control, it will resist any attempt to reimpose that control. Assuming nearly exponential improvement in its capabilities, humans will lose that fight, with catastrophic consequences, in the extreme case the extinction of the human species. That is why we should consider all feasible options for controlling AI beyond 2030. We could then better prepare for a future in which we are managed by a Superintelligence immensely more capable than the whole of humanity, and, hopefully, a benevolent master.
To protect our civilisation and the survival of humanity, we must fundamentally change our assumptions about the solutions needed for effective AI control and their timing. Just as we must do much more to keep global warming below a 1.5°C temperature increase, so we must do much more for AI control. One option is to consider the following very tough measures, which I ironically call ‘The Ten Commandments’, if such control is to be effective and implemented on time:
1. Prepare for AGI emerging by 2030 not just as a new technology but as a new intelligent species, in many ways superior to humans,
2. Maintain control over AGI beyond 2030 to have more time to evolve with AGI, if we want to avoid extinction,
3. Do not apply linear-world procedures to control AI, which is changing at a nearly exponential pace,
4. Authorize the AI sector to control AI itself, under a mandate from governments, since governments are too slow to control AI effectively,
5. Start global AI control in the USA, since two-thirds of the ‘Western world’ AI sector is based there. The existing US Partnership on AI (PAI) should be converted into an independent Global AI Consortium (GAIC), gradually joined by non-US organisations,
6. Convert all major AI projects into companies and merge them into a single joint-venture company, e.g. One-AI, supervised by GAIC, for more effective AI control,
7. Create just one Superintelligence Programme, managed by the One-AI company, to achieve tighter AI control and to counterbalance China’s potential dominance in AI,
8. Create a de facto World Government, initiated by the G7 and drawing on members of NATO, the EU, the European Political Community, or the OECD,
9. Create an independent Global AI Control Agency (GAICA), mandated by the World Government, by integrating the US National AI Initiative with the G7 Global Partnership on Artificial Intelligence,
10. Create a Global Welfare State to soften the turbulence of the transition to the AGI world. To achieve that, the World Government must undertake a fast redistribution of wealth by setting up a Global Wealth Redistribution Fund.
These measures may be considered impossible to implement, since they drastically limit ‘freedom’, our most treasured value. But there is one higher value: life. If the human species becomes extinct, that will mean the end of every human life. If you agree, then ask yourself what else could be done to control AGI effectively before it is too late. It may help to imagine that we are all aboard the ‘Titanic’ and each passenger must throw away some of their possessions to save themselves and everyone else. We are in a wartime situation, although this time the enemy is invisible and the stake is our species’ survival. That is the situation we are in right now, and that is why I believe such measures and decisions must be taken.
The key objective of this approach is to extend the period of AI control. Only a radical way forward may enable humans to retain control for much longer than would otherwise be the case. The proposed measures are very difficult, but if we do not even try to implement them, which is quite likely given how short-term the policies of most governments are, we will seal a very dangerous future for humans. Conversely, if we follow such an approach, the future for humans could soon be unimaginably positive, as the last of these measures suggests: the creation of a Global Welfare State.