Tony Czarnecki
6 min read · Nov 29, 2023

Could we create a morally good AI, one that would never threaten us?

Image generated by DALL-E

To mitigate the risk of Superintelligence — which I perceive as a single, global, most advanced AI system — acting against our interests or even becoming outright malevolent, we must exercise control over its development early, while it is still becoming more intelligent than us. That requires a global and immediate implementation of mechanisms for controlling Superintelligence, drawing on diverse approaches that may collectively better control its evolving "mind".

One such innovative approach is proposed by Yann LeCun, Chief AI Scientist at Meta. His views on controlling AI are optimistic, including on solving the so-called alignment problem, i.e., aligning AI's goals and motives with human values and preferences. He maintained this view in an October 2023 interview with the Financial Times, where he suggested that "several 'conceptual breakthroughs' were still needed before AI systems approached human-level intelligence. But even then, they could be controlled by encoding 'moral character' into these systems in the same way as people enact laws to govern human behaviour." This is broadly in line with the opinion of another optimistic AI scientist, Gary Marcus. It contrasts with the prevailing view among AI researchers, who maintain that controlling a superintelligent AI might be impossible, just as it would be impossible for a monkey trying to control a…


Written by Tony Czarnecki

Founder and Managing Partner of Sustensis (sustensis.co.uk), a think tank for a civilisational transition to coexistence with Superintelligence.
