Human extinction or Posthuman evolution?

For the vast majority of us, the most important goal is to live a long and healthy life, and to prepare a similar life for our children, grandchildren, etc., ad infinitum. The problem is that there is no such infinitum. We take it for granted that, as a species, we will exist forever. Very few of us consider that over 99% of all species that once existed are now extinct. How could we be an exception?

This is the question I asked in my first book, “Who Could Save Humanity from Superintelligence?” The paradox is that the probability of human extinction from natural causes (e.g. an asteroid impact) in this century is less than 0.00002%, whereas from man-made causes it is between 20% and 50%. In my second book, “Democracy for a Human Federation — Coexisting with Superintelligence”, I identified democracy as the key element needed for our civilization to survive man-made existential risks, including the risk of developing a malicious Superintelligence. In my third book, “Becoming a Butterfly”, I conclude that Humanity must not only manage the various existential risks it has itself created, but also manage its own evolution. The analogy in the book’s title is not perfect, since caterpillars and butterflies are the same species. However, what is almost identical is the process of metamorphosis, which humans may have to go through while evolving into a new species.

We are the only species consciously capable of minimizing the risk of its own extinction and of controlling its evolution in a desired direction. In some ways we have already been doing this for millennia, though only in the cultural and social sphere, which has also strengthened our resilience to extinction. Today, however, we may be able to control our physical evolution into a new species while retaining our cultural and social pedigree. We will only succeed if we do it gradually, using a process of transformation similar to a caterpillar becoming a butterfly. To summarize, over the next few decades our future may unfold in one of three ways:

  1. Several existential events may combine, such as a pandemic, global warming, and a global nuclear war, resulting in the complete extinction of the human species. There is a 20–50% chance of this happening within this century (Hawking, Rees).
  2. Some existential events, including the development of an immature, malicious AI, may bring our civilization to near extinction, with some humans surviving. A new civilization may then be built on the remnants of the old one, reaching our current technological level within a century and facing existential risks similar to those we face now. That cycle may continue for several centuries, in which case the extinction of the human species would merely be delayed. The probability of this scenario is higher than 50% over a period of more than a century.
  3. Finally, the third option is that we develop a friendly Superintelligence, which would broadly adopt our best human values and would not only help us minimize existential threats but also create a pathway for the evolution of the human species.

I believe this third option is the only good alternative available to us. If so, what should we do, and what should our priorities be, assuming we have only about one decade to implement such an option? I believe we need to focus our efforts on three areas almost simultaneously, rather than in any particular order:

  1. Carrying out a deep reform of democracy. That means starting with a thorough review of key human values, rights, and responsibilities, and then modifying the way in which we are governed as citizens of this planet. This site explains why that is so important in the context of our survival as a species.
  2. Building a planetary civilization. We cannot rely on the United Nations to fulfil that role, for reasons justified further on this website. It requires a new organization that would act as a de facto World Government. However, we do not have time to create such an organization from scratch. The only option left is for the most capable, already federated organization to take on that role and act on behalf of the whole civilization.
  3. Ensuring global regulatory governance over the development of AI. This should continue until AI matures into a friendly Superintelligence, at which point it will become our partner and ultimately guide our evolution into a new species. To achieve that, we need to create an international organization with complete control over the scope and capabilities of AI. A large part of this book covers this area.

I realize how unrealistic these objectives may seem, especially if we have only about one decade in which all three elements safeguarding the future of Humanity should be put in place. However, it is sometimes only when a case is made quite bluntly that we become motivated to solve a problem. After all, that is the purpose of this website: to add some suggestions to a common melting pot of ideas and identify the best options for humans to avoid extinction and, instead, control our own evolution.

Tony Czarnecki is an economist and futurist, a member of Chatham House, London, deeply engaged in global politics and the reform of democracy, with a wide range of interests spanning politics, technology, science, and culture. He is also an active member of London Futurists. This gives him the necessary insight for exploring the complex subjects discussed in the three books of the POSTHUMANS series. He is the founder and Managing Partner of Sustensis, London, a think tank providing inspiration for Humanity’s transition to coexistence with Superintelligence.