Managing Humanity’s Evolution

Tony Czarnecki
Nov 4, 2020

We have heard a lot about the existential threat of Climate Change. But it is only one of about a dozen such existential risks. Among them, the most severe is the threat of Artificial Intelligence, and especially its mature form, Superintelligence. This is an existential threat of an entirely different magnitude: it could make our species extinct either by direct malevolent action or by taking control over the future of Humanity.

This risk also differs from Climate Change in two ways. First, it may come much sooner, within the next few decades. Second, we cannot stop it: AI cannot be uninvented, and the proverbial genie is already out of the bottle. Incidentally, both immature Superintelligence and Climate Change have a tipping point at about 2030. There is at least a 20% chance that one of the existential risks will materialize by the end of this century, making our species extinct. However, some experts, such as Prof. Martin Rees, the Astronomer Royal, or the late Stephen Hawking, put such a risk at 50% or higher. If our civilization is to survive, we need to apply some powerful risk mitigation strategies.

We may, of course, be lucky and, whether by design or by a favourable course of action, develop a benevolent Superintelligence. Then the question becomes: how will we evolve as a species? Will we live alongside Superintelligence as its junior partner? Will we somehow be able to control it, even if it becomes millions of times more intelligent than us? Or will we simply merge with it, either as Transhumans or in a digital form, after our minds have been copied and become part of it?

Irrespective of how it happens, we would prefer to be running the show rather than being spectators. For that we need the means to control the process of AI maturing into Superintelligence. We have already, unknowingly, entered the period which I call the “Transition to Coexistence with Superintelligence”. In practice, we have about one decade to put in place at least the main safeguards: to control Superintelligence’s capabilities, to protect us as a species, and to develop a friendly Superintelligence that will become our partner. Therefore, Humanity should have a Mission, based on the revised Universal Values of Humanity and accepted by a significant majority of nations (it is unrealistic to expect that all nations would sign up within this decade). It should set out a strategy to avoid human extinction and prepare for our gradual evolution into a new species, for example:

Avoid extinction and evolve into a new species by developing a friendly Superintelligence

One of the key preconditions for implementing such a Mission is the creation of a powerful supranational organisation that would act on behalf of all of us as a planetary civilization (considering that the UN cannot realistically play that role). However, I believe it is too late for this option, since it would take several decades to create such an organisation, and even then it might not have all the required powers. Realistically, we must accept, the sooner the better, that the world will probably not act as a single entity, at least not immediately. Since we must act now, the remaining option is to count on the most advanced international organisation, which would initially act on behalf of the whole world even though it would include only some countries. I have argued my case extensively in my book Who Could Save Humanity from Superintelligence? The organisation which might fulfil that role most effectively seems to be the European Union, followed by NATO and, as a fall-back option, by China. That said, the support of other organisations, such as the UN or the WTO, will be vital. Whichever organisation leads Humanity, it should be guided by a Vision of how Humanity’s Mission could be delivered (at least the aspect related to Superintelligence), such as:

Maintain global control of existential risks, especially the development of Artificial Intelligence

To deliver such a Vision, we must teach AI and instil in it the best human values and preferences, until its mature form, Superintelligence, becomes a single entity millions of times more intelligent than humans, yet one that remains our partner. That process of maturing the current AI over the next few decades should start straight away. Therefore, we must agree as soon as possible on a Roadmap for Humanity’s evolution, perhaps similar to the one I propose on the Sustensis website, which contains five stages:

  1. Managing Existential Risks
  2. Creating a friendly Superintelligence
  3. Reforming Democracy
  4. Making a transition to a federated world
  5. Evolving with Superintelligence

We have entered the most uncertain period in the existence of humankind. You can make your own judgment as to whether this is an exaggeration or an understatement by browsing the Sustensis website, beginning with existential risks, then moving on to Superintelligence and the further tabs on the right. The further down you go in each top-level tab, the more detail is provided.

Tony Czarnecki
Sustensis

Founder and Managing Partner of Sustensis (sustensis.co.uk), a think tank for a civilisational transition to coexistence with Superintelligence.