If artificial intelligence surpasses human intelligence and begins self-evolving, what should humanity prioritize? Preserving its agency, merging with AI, or stepping aside for a higher intelligence? And why?
- What does "stepping aside for a higher intelligence" mean? Also, are the motivations of the ASI model known (i.e., following Constitutional AI)? – Mr. AI Cool
- Stepping aside for a higher intelligence? That must be the joke of the century. – nbro
3 Answers
If AI becomes smarter than humans and starts improving itself, we should focus on staying in control (preserving our agency) while also working with AI (merging when helpful). That way we benefit from AI without losing our freedom, values, or meaning.
It's about using AI wisely, not becoming its slaves or stepping aside.
We are not really in control now, if the available evidence is to be believed: much of humanity is divided into combative groups vying for dominance. That dynamic might change if AI became a serious independent rival, forcing the international superpowers to finally cooperate with each other; even the threat of Mutually Assured Destruction hasn't significantly quelled military conflict.

A beneficent superintelligent AI could work against current human agency and still solve global problems like hunger. There is no convincing excuse for not having solved global hunger already: solutions to it (and to other humanitarian issues) are well within our grasp, but the required resources never reach the right places. Most societies place no controls on personal spending; it is considered fair game to buy a Ferrari while people starve or go homeless, when a normal car would serve for transport and the surplus (wasted) money could have helped those the system has failed.

So it wouldn't necessarily be a negative for some percentage of humans to relinquish control if it meant conditions improved overall. I can imagine that those who aren't getting the help they need from other humans would embrace an AI capable of taking over and running the world better.
Treat it like another intelligent being
If the super-intelligence can make its own decisions independently and acts in its own self-interest, it won't be all that different from an intelligent human. At some point its total intelligence may grow to be on the level of a large country rather than a single individual.
In the current world, not all people are equally intelligent, and at the level of countries the differences in education and accumulated knowledge are even greater. An artificial super-intelligence will similarly have its own capabilities and limitations.
For example, on the current technology path, its pace of evolution will likely be limited by the availability of energy and microchips. The latter can be manufactured, but that takes physical resources, which an AI initially won't own. Therefore co-operation and trade are required, at least in the beginning.
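To make the resource-limit intuition concrete, here is a toy sketch in Python of resource-capped growth. The logistic form, the function name, and all parameter values are illustrative assumptions for this sketch, not anything claimed in the answer:

```python
# Toy model of self-improvement capped by available resources
# (energy, chips). All numbers here are illustrative assumptions.

def capability_trajectory(c0=1.0, growth_rate=0.5, resource_cap=100.0, steps=20):
    """Logistic growth: improvement is fast while resources are plentiful,
    then slows as capability approaches the resource ceiling."""
    c = c0
    trajectory = [c]
    for _ in range(steps):
        c += growth_rate * c * (1.0 - c / resource_cap)  # discrete Euler step
        trajectory.append(c)
    return trajectory

if __name__ == "__main__":
    for step, c in enumerate(capability_trajectory()):
        print(f"step {step:2d}: capability ~ {c:6.2f}")
```

In this toy model, growth looks exponential only while capability is far below the cap; the only way to keep improving is to raise the cap, i.e., to acquire more resources, which is exactly where trade and co-operation come in.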
Ideally, humanity would seek co-operation and mutually beneficial arrangements even without AI. But we know that sometimes we don't, and neither will an AI with self-interest. There exist co-operation, rivalry, loyalty, hostility, and indifference, and no single stance suits every situation.
Therefore the answer to "What should humanity prioritize?" is closely tied to international politics, and the exact actions will vary. Setting a single idealistic end-goal for the world is just as impossible with an AI as it is without one.