In that case, could the machines eventually identify humans as oppositional factors (e.g., as requiring too much oxygen, which might cause machine parts to rust prematurely)? Would the machines then decide to eliminate their human competitors?
Even short of such disaster, it is certain that AI will have (and in fact has had) regrettable, at least short-term, effects: wholesale unemployment, the consequent concentration of wealth in the hands of AI's controllers, and the destabilizing of our perceptions of "reality" and "truth." For instance, in light of ChatGPT-4's ability to synthesize voices and create videos, can we ever again make arguments such as "seeing is believing"?
CONCLUSION
In light of everything just shared, and in view of AI's out-of-control development, its emerging brilliance and promise, its effects on human employment, wealth distribution, and perceptions of truth, and its control by an extreme minority, what can be done about such threats?
Here's what experts like Mo Gawdat are saying:
- Realize that all of us are living through what Steven Bartlett termed an EMERGENCY EPISODE - but this time an episode of human history itself.
- Overcome practical denial of the urgency of finding solutions.
- Spread awareness of the unprecedented threat (again, "worse than climate change") that humanity is now facing.
- Get out in the streets demanding regulation of this new technology, much as biological cloning was regulated in the 1970s.
- Make sure that all stakeholders (i.e., everyone without exception - including the world's poor in the Global South) are equally represented in any decision-making process.
- Severely tax (even at 98%) AI developers and primary beneficiaries (i.e., employers) and use the revenue to provide guaranteed income for displaced workers.
- Put a pause on bringing children into this highly dangerous context. (Yes, for Gawdat and others, the crisis is that severe!)
- Alternatively, and on a personal level, face the uncomfortable fact that humanity currently finds itself in the throes of something like a death process - a profoundly transformative change.
- As Stephen Jenkinson puts it, we must decide to "die wise," that is, to accept our fate as the next step in the evolutionary process and as a final challenge to change and grow with dignity and grace.
- In spiritual terms, realize that this is like facing imminent personal death. Accept its proximity and (in Buddhist expression) "die before you die."
- Simultaneously, recognize real connections with nature and with flesh-and-blood humans as possibly the last remaining dimensions of un-technologized life.
- Take every opportunity to enjoy those interactions while they are still possible.
- And live fully in the present moment - the only true reality we possess.
PERSONAL POSTSCRIPT
If what we're told about AI's unprecedented intellectual capacity, its efficiency in processing human thought, and its consequent, infinitely heightened consciousness and emotional sensitivity is true, the new technology might not be as threatening as feared, even if it succeeds in achieving complete control of human beings.
I say this because the operational characteristics just described necessarily include contact with the best of human traditions as well as the worst. This suggests that, despite the latter, AI's wide learning, powers of analysis, intelligence, and sensitivity (including empathy) will likely ensure that, regardless of its "pretraining," the technology can discern and choose the best over the worst - the good of the whole over narrow self-interest and self-preservation. That is, if AI can rebel against its creators, it also has the capacity to override its programming.
With this in mind, we might well expect AI, whatever its pretraining, to do the right thing and implement programs that coincide with the best interests of humanity.
As indicated above, we might even consider AI the next stage of our species' evolution, one capable of surviving long after we have destroyed ourselves through climate change or perhaps even nuclear war. With intelligence far beyond our own, the machines could determine how to access self-sustaining power sources independent of comparatively primitive mechanisms such as electrical grids.
Nonetheless, though realizations like these can be comforting, they do not address the "singularity" dimensions of AI's dilemmas. Here, singularity (a concept borrowed from physics) refers to the limits of human knowledge upon entering an as-yet unexperienced dimension of reality, such as a black hole. That is, beyond the black hole's event horizon, one cannot be sure that familiar laws of physics apply.
Similarly, when an entity (such as AI technology five years from now) billions of times smarter than humans applies its "logic," no one can be sure that such thinking will yield the conclusions humans might hope for or predict.
I wonder: is it too late to turn back? Are we so asleep and unaware of what's staring us in the face that it's practically impossible to avoid the crisis and emergency just described? You be the judge. We are the judge!