Do Not Pause Advanced AI Development — CONSOLIDATE AI

Tony Czarnecki
20 min read · Nov 5, 2024

London, 18 November 2024

Image: DALL-E

Executive summary

In recent years, numerous publications have raised alarms about Artificial General Intelligence (AGI) or Superintelligence as an existential threat. Last year the Future of Life Institute published an open letter, 'Pause Giant AI Experiments', calling for a six-month suspension of the development of the most advanced AI models. PauseAI continues that appeal worldwide. In October 2024, two publications, 'Narrow Path' by Control AI and 'Compendium on The State of AI Today' by Conjecture, called for such a moratorium to last decades. They also propose feasible measures to reduce the AI threat right now, e.g. curbing the supply of advanced chips, energy, or AI algorithms. Additionally, both documents provide evidence that the existential threat posed by AI is real and near, underlining a growing divide between AI's exceptional pace of development and humanity's readiness to manage it.

However, unlike the above measures, all other proposals, such as a moratorium on the development of the most advanced AI models, depend on truly global implementation. That seems unrealistic, as it ignores political realities. If such an implementation were possible, the UN would have done it already. But since the UN is dysfunctional, we can only consider partial global implementation…


Written by Tony Czarnecki

The founder and Managing Partner of Sustensis (sustensis.co.uk), a think tank for a civilisational transition to coexistence with Superintelligence.