How might Transhumans control Superintelligence?


Will 2030 be a tipping point for Artificial Intelligence control?

Tony Czarnecki

Managing Partner, Sustensis

www.sustensis.co.uk

London, August 2022

The late physicist Stephen Hawking, Microsoft co-founder Bill Gates and SpaceX founder Elon Musk have all expressed concern that AI could evolve to the point where humans can no longer control it, with Hawking theorizing that this could “spell the end of the human race”[1]. Other AI researchers have also recognized that AI presents an existential risk. For example, professors Allan Dafoe and Stuart Russell, both eminent AI researchers, note that, contrary to misrepresentations in the media, this risk does not have to arise from spontaneous malevolent intelligence. Rather, “the risk arises from the unpredictability and irreversibility of deploying an optimization process more intelligent than the humans who specified its objectives. This problem was stated clearly by Norbert Wiener in 1960, and we still have not solved it.”[2]

Elon Musk has been urging governments to take steps to regulate the technology before it is too late. At the bipartisan National Governors Association meeting in Rhode Island in July 2017 he said: “AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that.” He also added that based on what he had seen, AI is the scariest problem. Musk told the governors that AI calls for precautionary, proactive government intervention: “I think by the time we are reactive in AI regulation, it’s too late”.[3]

If we consider that 99.9% of all species that ever lived have disappeared[4], why should we be an exception? One proposed resolution of the Fermi Paradox is that no civilisation has contacted us because, once civilisations achieve a certain level of technological advancement, they destroy themselves. So, if we want to avoid extinction, we must mitigate existential risks such as climate change, pandemics, nanotechnology, global nuclear war and, most importantly, the threat arising from developing a hostile Superintelligence. That threat could be the most imminent of all, because it could annihilate the human species, possibly within the next few decades. But let me first describe what I mean by Superintelligence, sometimes called Artificial Superintelligence (ASI).

What is Superintelligence?

The difficulty an average person has in differentiating between IT and AI is perhaps of lesser importance than understanding what the term Artificial General Intelligence, defined here as Superintelligence, really means. Confusing Superintelligence with a Terminator-type robot is especially troubling when it concerns politicians. After all, these are the people we must convince that there is little time left before we may lose control over a maturing AI. That lack of awareness and understanding may stem from the fact that it is quite difficult for most people to imagine Superintelligence. The media may be responsible for much of that misunderstanding by trivializing AI, but it is also the result of poor, very narrow education. So, here is how I define Superintelligence.

First, Superintelligence must have a body. We already have all the necessary elements, such as data, processors, memory, interfaces, communications and sensors, including artificial neuromorphic neurons. But currently all these building blocks of more advanced AI are perhaps thousands of times slower and far less capable than those a mature Superintelligence would require.

What we do not have yet is the mind of this single entity, because that would require its intelligence to acquire cognition. Once it achieves that, it may gradually turn into a conscious entity, although there is no agreement among AI researchers on whether such an advanced intelligent agent must be conscious before it becomes superintelligent.

So, what we have now are individual, relatively unsophisticated robots. However, ultimately there will be just one Superintelligence: a single entity, with its own mind, immeasurably exceeding all human intelligence. For such a digital intelligence to have any experience, it will have to interact, perhaps consciously, with the environment. It will do so in various ways and through numerous representations.

Such a globally networked Superintelligence could control billions of sensors and robots. It will also represent itself as avatars, holograms, or emotional humanoids, such as the advanced Ameca robot, created by Engineered Arts in Britain and shown in January 2022 at the CES exhibition in Las Vegas. Finally, it will also be linked to conscious Transhumans, who play a key role in how I imagine humans may most effectively control Superintelligence.

In the view of most AI scientists, once AI becomes a mature Superintelligence, achieving Singularity, humans will be under its total control. That alone will be an existential threat, because we will lose control over our own destiny. Whether such a mature Superintelligence becomes a threat to the human species depends largely on how, or whether at all, it was nurtured in line with human values before we lost control over it. If its objectives or values are even slightly misaligned with ours, it may become hostile towards humans. Therefore, we must protect ourselves from such a scenario becoming a reality.

Humans versus Artificial Intelligence today

Howard Gardner has identified eight human intelligences[5]: Linguistic, Logical/Mathematical, Spatial, Bodily-Kinaesthetic, Musical, Interpersonal, Intrapersonal, and Naturalist. In at least four of these (Bodily-Kinaesthetic, Logical/Mathematical, Musical and Spatial) AI already exceeds humans.

Domains in which humans still excel over Artificial Intelligence today

I have estimated how well today's narrow AI matches human intelligence in each of the eight intelligences. In some areas of Linguistic Intelligence it is already vastly superior to humans (e.g. the number of languages it can translate simultaneously, with fewer and fewer errors). However, humans are still immensely superior in the Interpersonal, Intrapersonal (understanding yourself, feelings, etc.) and Naturalist areas. These are very closely related to cognition, the most difficult domain for AI to learn. Nevertheless, the pace of progress in AI, measured by the number of significant breakthroughs that impact the entire industry, has been truly astounding. Here are some of the most significant developments over the last two decades:

  • 2006 — Convolutional Neural Networks for image recognition (Fei-Fei Li)
  • 2016 — AlphaGo: supervised ML combining Monte Carlo Tree Search with neural networks (DeepMind)
  • 2017 — AlphaZero: self-play reinforcement learning, with no human game data (DeepMind)
  • 2017 — Tokenized self-attention for NLP, the basis of Generative Pre-trained Transformers such as GPT-3 (Google Brain, OpenAI)
  • 2021 — AlphaFold: graph transformers (graphs as tokens) predicting 3D protein structure, extending the roughly 200,000 experimentally known structures towards all 200 million known proteins, each with some 10^300 possible foldings (DeepMind)
  • 2022 (March) — Artificial neurons based on photonic quantum memristors (University of Vienna)
  • 2022 (April) — White-box, self-explainable hybrid AI, combining deep reinforcement learning with rule-based reasoning (the French lab Nukkai)
  • 2022 (April) — PaLM, the Pathways Language Model: NLP with context, re-prompting and reasoning (Google Research)
  • 2022 (May) — LaMDA: Google's multi-modal AI agent (e.g. beyond typical NLP functions it can be used to direct physical robots)

AI researchers have applied these breakthroughs across many domains, in which AI skills now quite often vastly exceed human-level intelligence and skills. That is also reflected in sensory processing, which may be crucial for developing AI's cognitive capabilities.

This does not yet include the impact of progress in AI-related hardware. For example, the number of model parameters (1,000 parameters being a broad equivalent of one human neuron) has been rising faster than exponentially over the last four years, from about 340M (BERT, 2018) through 540B (PaLM, 2022) to 1.75 trillion (Wu Dao 2.0, 2021). At the current pace of development, the number of neuron-equivalent parameters should match the 86 billion neurons of a human brain, i.e. about 86 trillion parameters, by around 2024.
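
As a sanity check on that extrapolation, here is a minimal back-of-envelope sketch in Python. The parameter counts are the approximate figures quoted above, and the 1,000-parameters-per-neuron equivalence is this article's own rough assumption, not an established fact; depending on which reference models you anchor on, the crossover lands anywhere between about 2022 and the mid-2020s.

```python
# Back-of-envelope extrapolation of model parameter counts.
# All figures are approximate; the parameters-per-neuron ratio is the
# rough equivalence assumed in the text above.
import math

BRAIN_NEURONS = 86e9                 # neurons in a human brain
PARAMS_PER_NEURON = 1_000            # assumed equivalence
target = BRAIN_NEURONS * PARAMS_PER_NEURON   # ~8.6e13 parameters

bert = (2018, 3.4e8)                 # (year, parameter count)
wudao = (2021, 1.75e12)

# Assume steady exponential growth between the two reference points
growth = (wudao[1] / bert[1]) ** (1 / (wudao[0] - bert[0]))  # ~17x per year
years_left = math.log(target / wudao[1]) / math.log(growth)

print(f"growth: ~{growth:.0f}x per year")
print(f"86T parameters reached around {wudao[0] + years_left:.0f}")
```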

However, if we include the super-exponential pace of development in synthetic neurons based on memristors, and in quantum computing, we can expect an even faster acceleration of AI capabilities. We may quite soon create AI that exceeds human-level intelligence and skills in some areas while remaining incompetent at tasks that every toddler can solve. This relentless progress may lead to humans losing control over AI's self-learning capabilities, directly impacting our ability to control its goals. Once this tipping point is reached, the consequences for our civilisation, and indeed for the future of the human species, will be enormous. Therefore, AI scientists should at least agree on the most likely date by which humans may lose control over AI.

Will there be a human level Artificial Intelligence by 2030?

The first problem we face when attempting to control AI is that we need to convince the public and, most importantly, the world leaders that such an invisible threat is real. One may call a maturing Superintelligence 'an invisible enemy', assuming it turns out to be hostile towards humans, similar to the Covid-19 pandemic. Calling Covid an invisible enemy was governments' excuse that the threat could not be seen coming, and hence that they were not responsible for the consequences. Governments seldom see that spending money now to minimize the risk of potential future disasters is an insurance policy. The implications of such short-termism for controlling AI development are profound. In the worst-case scenario, given the immense power of Superintelligence, a single error by such an agent could be enough to cause human extinction.

The second problem is that not many AI experts are willing to say when Superintelligence is most likely to emerge. That allows politicians to dismiss any calls for serious steps towards controlling Superintelligence, saying it is hundreds of years away, so we need not worry about it now. Predictions by AI scientists and leading practitioners are generally vague, without a clear definition of what is meant by Superintelligence. Ray Kurzweil is perhaps an exception. Being one of the most reliable futurists, he says that a mature Superintelligence may emerge by 2045[6]. At an AI conference in 1995, the participants estimated that it might emerge in two hundred years[7]. But four averaged surveys of 995 AI professionals, published in February 2022, indicate that the most likely date for a mature Superintelligence is about 2060, just 15 years after Kurzweil's prediction[8]. In any case, if these predictions are correct, most people living today will be in contact with Superintelligence, which may be our last invention, as the British mathematician I. J. Good observed in 1965.

Perhaps even more important than the time by which Superintelligence emerges is the approximate time by which humans may lose control over AI operating as a global system. Here again, AI scientists and top AI practitioners prefer not to specify such a time, using instead more elusive terms like 'in a few decades or so.' However, without setting a highly probable time by which we may lose control over AI, the world leaders will not feel obliged to discuss the existential risk that such a momentous event may trigger. Therefore, those who see the problem should be bold enough to spell out the most likely time and justify it. Ray Kurzweil is again an exception here, saying in June 2014: “My timeline is computers will be at a human level, such as you can have a human relationship with them, 15 years from now,”[9] i.e., by 2029. He has stuck to that date ever since.

For me, the loss of control over AI can be compared in some ways to the loss of control over the operation of the Internet. No country can switch off the Internet. Doing so would be theoretically possible, but it would mean a civilisational collapse, and even then such a switch-off might still be incomplete. We may soon face a similar situation with a globally networked AI controlling billions of sensors and millions of robots. We can safely say that desktop computer power will increase 1,000 times between 2014 and 2030, reaching the intelligence level of an average human if measured by the number of neurons, and vastly exceeding our memory and processing power (Ray Kurzweil's reasoning). And that does not include potential progress in neuromorphic neurons, quantum computing and several other related areas, which will immensely increase the capabilities of such an intelligence.
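
As a quick, purely illustrative check of the growth rate this claim implies: a 1,000-fold increase over the 16 years from 2014 to 2030 corresponds to computer power doubling roughly every 1.6 years, close to the classic Moore's-law cadence.

```python
# Doubling time implied by "1,000x more desktop computer power
# between 2014 and 2030" (an illustrative calculation only).
import math

factor = 1_000
years = 2030 - 2014
doubling_time = years / math.log2(factor)   # 16 / ~9.97

print(f"one doubling every ~{doubling_time:.1f} years")  # ~1.6 years
```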

Therefore, I have taken 2030 as the most likely date by which humans may lose effective control over AI, which I would call an Immature Superintelligence. This is AI's tipping point, likely to happen at roughly the same time as the climate-change tipping point. Such an AI may have the intelligence of an ant but immense destructive powers, which it may apply either erroneously or in a purposefully malicious way. There may be several such agents by the end of this decade, which might even fight each other, especially if deployed by psychopathic dictators hoping to achieve AI supremacy and use it to conquer the world.

However, what matters is not so much who specifies a concrete date but that such a date is widely publicised and supported by eminent AI scientists. There is a saying, 'What is not measured is not done', illustrated by the fact that despite many attempts to fight climate change, no real progress was made until very recently. It was always argued that the potential impact of climate change was far away. Only when a firm target of a maximum 1.5°C temperature rise was set, at the Paris conference in 2015 and at COP26 in Glasgow in 2021, did we start to see concrete global action. COP26 also specified a pivotal date, 2030, as a tipping point beyond which we may lose the battle for controlling climate change[10]. Importantly, both indicators, 1.5°C and 2030, are just best guesses, as any date for losing human control over AI would be. Notwithstanding that, global AI control is urgently needed and should be measured against some critical thresholds. Additionally, no advanced AI system should be released without being primed with the Universal Values of Humanity and its long-term goals. The warning signs that humans are losing control over AI might be when one, or all, of the following events happen (a monitoring sketch follows the list):

  • The number of artificial neuromorphic neurons exceeds the number of neurons in a human brain
  • Incidents in which a globally connected network of AI robots goes out of control, leading to widespread chaos
  • AI processing speed, measured in FLOPS, exceeds the performance of a human brain
  • The first simple cognitive AI agent emerges
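
The list above lends itself to a simple monitoring sketch, shown below in Python. It is purely illustrative: the threshold constants echo figures used in this article (86 billion neurons, plus one commonly cited and disputed 10^16 FLOPS estimate for the brain), while the AIStatus readings are hypothetical placeholders, since no real-world feed of such numbers exists.

```python
# Hypothetical sketch of a threshold monitor for the warning signs above.
# The constants are rough figures from the text; the readings passed in
# are made-up placeholders, not real measurements.
from dataclasses import dataclass

HUMAN_BRAIN_NEURONS = 86e9   # neurons in a human brain
HUMAN_BRAIN_FLOPS = 1e16     # one commonly cited (and disputed) estimate

@dataclass
class AIStatus:
    artificial_neurons: float        # deployed neuromorphic neurons
    peak_system_flops: float         # fastest single AI system
    out_of_control_incidents: int    # reported networked-robot incidents
    cognitive_agent_exists: bool     # first simple cognitive agent

def warning_signs(s: AIStatus) -> list[str]:
    """Return the warning signs from the list above that have triggered."""
    signs = []
    if s.artificial_neurons > HUMAN_BRAIN_NEURONS:
        signs.append("artificial neurons exceed human brain neuron count")
    if s.out_of_control_incidents > 0:
        signs.append("out-of-control networked-robot incidents reported")
    if s.peak_system_flops > HUMAN_BRAIN_FLOPS:
        signs.append("AI processing speed exceeds human-brain estimate")
    if s.cognitive_agent_exists:
        signs.append("first simple cognitive AI agent has emerged")
    return signs

# Example with made-up readings: only the FLOPS threshold is crossed.
print(warning_signs(AIStatus(1e9, 1e18, 0, False)))
```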

Realistically, it will be more difficult and more dangerous for humans when these AI thresholds are surpassed than when the global temperature rises above 1.5°C. These warning signs of humans losing control over AI may start the process of the human species' evolution, or its extinction. For humans, harnessing AI may be like climbing a big mountain: we may perish during the endeavour, but if we are properly equipped, we will succeed in delivering a friendly AI and reach a world of unimaginable abundance and opportunities.

Options for implementing a continuous, global control of a maturing Superintelligence

If we are to successfully control AI development, then the IPCC and the COP26 conference should become a template for setting up a similar controlling approach. Currently, conferences on controlling AI are concerned with threats that are trivial in comparison, such as face recognition's impact on our privacy. This has, of course, some importance. However, focusing on these issues hides the real dangers to which we may be exposed. Revealing them would require putting stricter controls on the large companies developing AI, reducing their profits, as is happening now in the carbon economy. But AI has a much wider and more imminent impact than climate change, covering every domain of human life from peaceful use to military applications. Deep-seated interests in protecting national industries make the creation of such a global AI control agency even more doubtful. Therefore, I propose three options, showing the different impacts of AI on humans depending on the scope of control.

Option 1: No Global AI Control

The first option ends in 2030, which I consider a tipping point for controlling AI. It assumes that there will be hardly any global AI control. In that case, humans will come progressively under the greater control of a maturing Superintelligence. If it becomes hostile to humans, it may trigger an early extinction of the human species.

Option 2: Global AI Governance Agency (GAIGA) controlling Superintelligence?

The second option assumes that we may have a global agency that would control the emerging Superintelligence. I propose to create an agency modelled on the International Atomic Energy Agency (IAEA) in Vienna. The IAEA was created in 1957 in response to deep fears, but also hopes, regarding the use of nuclear technology. It was set up as one of the UN-family organizations, with a mandate to work with Member States and multiple partners to promote the safe, secure, and peaceful use of nuclear technologies.

Such an agency should be responsible for specifying key parameters and target dates for implementing critical AI control measures, like the Intergovernmental Panel on Climate Change (IPCC). Key measures might include the number of global networks controlled by a single AI agent, the number of sensors and robots ultimately controlled by a single AI centre, the processing power of a desktop computer exceeding that of a human brain, etc. The tipping point for a potential loss of control over AI might be reached when AI performance reaches these target measures.

The agency should enable comprehensive global control over AI by about 2025, since a maturing AI may reach its tipping point by about 2030, although, as with climate change, that is only an approximate date. This still assumes that AI development runs at the current pace and that no significant invention accelerates the maturing of AI even further. In any case, for such control to be effective we cannot wait until 2030; we must start much earlier. We need to maintain continuous and comprehensive control over AI chips, weaponized AI, robots, neural networks, brain implants, etc.

So, who would set up such an Agency? By default, it should of course be the United Nations. But its Security Council, as the current war in Ukraine shows, is unable to implement any global order. That does not mean the UN has not been trying its best in many vital areas, such as nuclear disarmament, biological weapons, health, education, or culture. It has also initiated some ground-breaking research into AI control and put forward some interesting proposals at various UN events under the auspices of its Interregional Crime and Justice Research Institute (UNICRI). For example, in July 2018 it organised in Singapore, together with INTERPOL, the first global meeting on AI and robotics for law enforcement. In April 2019 a joint UNICRI-INTERPOL report on 'AI and Robotics' was published. The problem is that this proposal has remained just that: a proposal.

Therefore, realistically we cannot count on the UN being capable of creating such an Agency, for the reasons explained earlier, although in the turbulent times of a Ukrainian post-war period and a potential political transformation of Russia, the UN's role cannot be completely ruled out. But we would still have China sitting on the Security Council, and it is highly unlikely that it would allow such an Agency unfettered access to all scientific labs developing the most advanced AI systems, as the WHO's Covid inspection in China has proved.

Additionally, considering that it took 23 years to get from the Rio conference to the Paris COP21 conference, we may have to wait well over a decade before such an Agency starts operating in earnest, and that assumes the Security Council votes unanimously to establish it in the first place. That would be far too late, which unfortunately is rather a typical modus operandi for governments and the UN. In such circumstances we would have to select an organization that would provide de facto (i.e., not completely global) control over AI. That means excluding China and Russia. Despite all its problems, the European Union is probably the most experienced organization that might take up this challenge. When the European Union becomes a Federation, which in the circumstances created by the war in Ukraine may finally happen, one of its agencies should become a de facto Global AI Governance Agency (GAIGA), in the absence of genuine global control.

How, then, could GAIGA control the process of a maturing Superintelligence? Can we control it effectively at all? Without going into details, Nick Bostrom argued in his seminal book 'Superintelligence' that none of the capability-control or motivation-selection methods can guarantee effective AI control. Even Stuart Russell's recently proposed human-preference method does not guarantee 100% control either. So, what further options do we have? In my recent book, 'Becoming a Butterfly'[11], I have presented a framework for a maturing AI in which some of these AI controls are applied simultaneously.

It starts with instilling the Universal Values of Humanity as an ethics code in the form of a Master chip controlling the agent. Such a chip would be distributed under licence and implanted in the 'brains' of all advanced AI agents. The human values must be derived from an updated version of the UN Declaration of Human Rights, combined with the European Convention on Human Rights and perhaps other relevant, more recent legal documents in this area. Irrespective of which existing international agreements are used as input, the final, new Declaration of Human Rights would have to be universally approved if these values are to be truly universal. But even if such values do not become genuinely universal (China, Russia, North Korea, Iran, etc. will not accept them), they should nevertheless be binding. That means the European Union should make decisions in this area, and progressively in other areas, as it may become a de facto World Government, especially when it becomes a Federation.

The second step in the Framework is nurturing AI like a child in a real environment. There are already some good examples of this kind of AI control, such as the introduction of humanoid robots into Japanese care homes. The top AI companies are already collecting AI experiences of human preferences. For instance, some legal firms can use the GPT-3 agent to prepare cases for presentation in court[12]. Tesla has been routinely gathering the 'experiences' of its cars and then updating them with the preferred actions for avoiding collisions[13]. AI-controlled robots are used in laboratories, offices, and factories. Such a Maturing Framework, monitored by a Global AI Governance Agency, may increase the chances of delivering a benevolent Superintelligence.

So, GAIGA might control AI development, although these controls alone cannot ensure a failsafe result. Still, that would be far better than no control at all. Unfortunately, as mentioned earlier, we need such an agency right now, whereas the probability of it appearing on time is quite low. Therefore, what we need is an ad hoc organization that would provide interim, continuous, though imperfect, control until such a global Agency is fully operational. That is covered in detail in option three.

Option 3: How could Transhumans Control Superintelligence?

Elon Musk's quip “If you can't beat them, join them!”[14] perhaps best describes the third option of controlling Superintelligence: implanting Brain-Computer Interfaces (BCI) into humans and thus creating Transhumans. Our brain is often described as three brains (the reptilian brain, the limbic brain, and the neocortex) with nearly 70 distinct functional areas. We currently have at least three basic methods of reading the brain's electromagnetic activity.

The first reads changes in the Local Field Potential (LFP); an example is Elon Musk's Neuralink brain implant. The second method uses an electroencephalogram (EEG), embedded in a special helmet, to read brain activity. This well-tried method is being used to read people's thoughts and to instruct the brain how to manipulate things, such as typing by thought alone[15]. Finally, we can also use electrocorticography (ECoG), in which special neuromorphic chips are implanted as a digital interface on the surface of the brain.

Within the next few years all these methods will enable the creation of cognitive BCI devices for the first Transhumans. There are already about 10,000 people in Sweden alone who have had chips implanted in their hands, allowing them to verify their identity instead of using passwords for banking, purchases, or passing security gates. But the BCI devices that will make a real difference will not be those controlling limbs or organs or curing diseases. Those will of course immensely help millions of people, but the real difference will come from BCI devices that enhance humans' mental capabilities: memory, processing speed, decision making and sensory processing. Those who wonder how this could be possible should consider that we are already partly Transhumans. Our smartphones give us enormous extra intelligence, which we could not have dreamt of even a few years ago. The only difference between future Transhumans and us is that our extra intelligence is currently external. Since BCIs are digital devices, their capability will increase nearly exponentially. For example, electrode density should increase by about fifty times over this decade.

A Transhuman's brain, with wireless access to AI far more advanced than today's LaMDA, will be able to process and store in the cloud whatever its owner decides to remember, and then retrieve it instantly. Transhumans' cognitive capabilities will increase so much that they will become the most intelligent and capable people. They may thus become invaluable if selected by international bodies, such as the UN, to help resolve civilizational problems.

Initially, the first Transhumans will emerge from within the AI community, and from the outset they should be under the control of a global organization. However, as mentioned earlier, this will almost certainly not happen on time. Therefore, we need a transitional body, which should start operating within the next 2–3 years at the latest. To that end, I propose creating a Global AI Consortium (GAIC), modelled on the Internet's W3C Consortium. In nearly 30 years of operation, W3C has proved how well such a body, independent of governments, can function, maintaining global control over all key activities of the Internet[16].

GAIC would operate in a similar way to W3C. Its members would usually be large companies involved in AI development. GAIC members would select the GAIC committee, which in turn would approve the key research companies engaged in the most advanced AI development. These companies would propose top AI scientists, neuroscientists, philosophers, psychologists, and other key developers as their candidates for Transhuman Governors.

Once approved by GAIC, they would progressively fuse more of their brains' cognitive functions wirelessly with the maturing Superintelligence, controlling its main decision centre. Such a 'Master Switch' would be wirelessly controlled by all networked Transhuman Governors, and activating it for major decisions would require the consent of the majority of the connected Transhumans (a toy sketch of such a quorum gate follows below). Probably no verification of their decisions would be needed, since they would be able to read each other's thoughts in the parts of their brains integrated with the Master Switch. They may have to sacrifice their privacy; that would be their contribution to the increased safety of all humans. Other important functions of Superintelligence might also be controlled in this way.
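
To make the decision rule concrete, here is a toy sketch of such a majority-consent gate. Everything in it is hypothetical: in reality the 'votes' would come from networked brain-computer interfaces rather than function arguments, and the governor names are invented.

```python
# Toy sketch of the majority-consent 'Master Switch' described above.
# Purely illustrative; votes and governor names are hypothetical.
def master_switch_approves(votes: dict[str, bool]) -> bool:
    """Approve a major decision only if a strict majority of the
    connected Transhuman Governors consents."""
    if not votes:
        return False  # fail safe: no governors connected, no action
    return sum(votes.values()) > len(votes) / 2

# Example: five governors connected, three consent -> approved.
print(master_switch_approves({
    "governor_1": True, "governor_2": True, "governor_3": True,
    "governor_4": False, "governor_5": False,
}))  # True
```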

Within a few decades, the bodies of Transhumans will become increasingly non-biological and their brains more digitally integrated with the emerging Superintelligence. By the end of this century, the whole brain of a willing Transhuman may be digitized and fully fused with a purely digital Superintelligence. Unless there are physical obstacles, e.g. related to porting consciousness onto digital chips, an entirely new, non-organic species will emerge, which might then be called Posthumans. Incidentally, such progressive mind uploading through BCI may be more realistic than an all-at-once copying of the human mind into its digital equivalent.

What might be the consequences of giving Transhuman Governors so much power? There are at least two negative ones. The first is their sheer intellectual superiority, since these selected people will have significantly extended cognitive capabilities within just a few years. Their memory, processing power and speed of decision making might be perhaps a thousand times greater than those of the top human experts. They will exceed anyone's capabilities in any area of science or knowledge. In relative terms, they will be almost omniscient.

But the political implications may be even more critical. Since it will be impossible to control all BCI implants (they may be produced not just in the USA but also in China or Russia), very rich individuals and some political leaders will almost certainly get them. Thus, potentially, any dictator may become a Transhuman himself, although he would not be connected to the 'approved' Superintelligence. This could happen within a few years, and people might not even notice.

However, there are also some positive consequences. The first is the ability to control a maturing Superintelligence from the inside, at a hardware level, which might be the most resilient method of AI control. Secondly, a Superintelligence controlled by Transhuman Governors will deliver unimaginable benefits to humans, creating a world of abundance.

It is difficult to say for how long GAIC would be the sole controller of the Transhuman Governors, or to which organization it would pass that control. In an ideal situation, it would be the United Nations, playing the role of a World Government. But, as mentioned earlier, the UN is unlikely to be capable of effective control. If, however, the UN somehow acquires real global governing powers in the next decade, it would of course be the natural choice for selecting the Transhuman Governors.

By about 2030 we may finally have a de facto Human Federation, created from an expanding European Federation. At that stage, in the absence of effective control by a UN agency, control of the Transhuman Governors would most likely pass from GAIC to the earlier-mentioned Global AI Governance Agency (GAIGA). This would, at least in principle, democratize the decisions made by Transhuman Governors on behalf of all humans. In practice, however, such control of the Transhuman Governors would be largely irrelevant, since their decisions might be far better than those made by the most capable humans; it would be in our own interest to let them decide what is best for us. The only meaningful decisions of the Human Federation would be the selection and deselection of Transhuman Governors. However, that should not be based on political grounds. Therefore, it should be GAIGA, rather than the Human Federation, that takes the ultimate decisions on selecting Transhuman Governors. To reduce the risk of biased decisions, the number of Transhuman Governors would probably grow to thousands, each having the same rights.

It is also possible that our civilisation will not be able to form a Human Federation and its executive arm, the World Government, at all. In that case, Transhuman Governors, gradually more tightly fused with Superintelligence, would play the role of an actual World Government anyway. Any attempt to remove such Transhumans by force would only make matters worse, since Superintelligence would then be out of any control.

A Transhuman Government that is selected rather than elected would pose a big dilemma for our civilisation. Democracy as we know it would only be relevant at a lower level of decision making, perhaps until the middle of this century. From about 2050, the Human Federation would be there to implement the decisions of the Transhuman Government, over which it may no longer have any control. The fate of the human species will be in the hands of Transhuman Governors, who by the end of this century may become completely digitized, becoming Superintelligence themselves. This will pave the way for more humans to merge with Superintelligence, thus starting the evolution of the human species on a grand scale.

Superintelligence will deliver unimaginable benefits to all people. As it matures, the first change people may notice in the next decade, if this scenario comes to fruition, is that there will simply be no wars. That alone will increase the growth of wealth. Productivity will soar, perhaps doubling the current growth rate of the world's annual GDP. That may pay for rebalancing average incomes worldwide and for regenerative medicine, which may very quickly extend the healthy life span, perhaps by decades. Superintelligence will enable individualized, AI-assisted education and facilitate personal fulfilment. People will be able to accomplish most of their wishes, such as developing skills in the arts, music or literature, climbing mountains, and doing whatever else interests them. All existential risks, including climate change, will be minimized or eliminated by the maturing Superintelligence.

And then, Ad Astra Per Aspera: “To the Stars Through Hardships.”

P.S. You can find more detailed information on the subjects covered in this article on Sustensis website: www.sustensis.co.uk, which is also supported by dozens of videos.

References:

[1] BBC News, https://www.bbc.co.uk/news/technology-30290540, 2/12/2014

[2] Allan Dafoe and Stuart Russell, Technology Review, https://www.technologyreview.com/2016/11/02/156285/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/, 2/11/2016

[3] Camila Domonoske, Elon Musk Warns Governors: Artificial Intelligence Poses ‘Existential Risk’, 17/7/2017

[4] D. Jablonski, Nature, Volume 427, Issue 6975, p. 589, 2004

[5] Howard Gardner: ‘Multiple intelligences and related educational topics’ 2013, https://howardgardner01.files.wordpress.com/2012/06/faq_march2013.pdf

[6] Ray Kurzweil, in an interview with ‘Futurism’, https://futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045, 10/05/2017

[7] The seventh conference on innovative applications of artificial intelligence, https://aaai.org/Press/Proceedings/iaai95.php, 21/8/1995

[8] Cem Dilmegani, When will singularity happen? 995 experts’ opinions on AGI, https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/, 3/2/2022

[9] Ray Kurzweil, in an interview with NBC: https://www.nbcnews.com/tech/innovation/top-google-engineer-says-computers-will-be-humans-2029-n128926, 11/6/2014

[10] COP26 — Together for our planet: https://www.un.org/en/climatechange/cop26

[11] Tony Czarnecki, “Becoming a Butterfly: Extinction or Evolution? Will Humans Survive Beyond 2050?”, London, 2021.

[12] GPT-3 in legal tech, https://www.jdsupra.com/legalnews/gpt-3-in-legal-tech-insights-from-the-3183642/, 15/12/2021

[13] Vikram Singh: Tesla: A data driven future, https://digital.hbs.edu/platform-digit/submission/tesla-a-data-driven-future/, 23/3/2021

[14] Elon Musk, Twitter, @elonmusk, 9/7/2020

[15] Shelley Fan, A New Brain Implant Turns Thoughts Into Text, https://singularityhub.com/2021/05/18/a-new-brain-implant-turns-thoughts-into-text-with-90-percent-accuracy/ 18/5/2021

[16] About W3C — https://www.w3.org/Consortium/


Tony Czarnecki

The founder and Managing Partner of Sustensis, sustensis.co.uk, a think tank for a civilisational transition to coexistence with Superintelligence.