TransHumanists and Heroes of Singularity feel that SAI has the potential to become god-like, or even God itself if Humans and their religions are all wrong and no God yet exists. If one pooh-poohs this, it is understandable; they probably have not read STARING INTO THE SINGULARITY by Eliezer S. Yudkowsky. I will therefore quote the intro:
The short version:
If computing power doubles every two years, what happens when computers are doing the research?
Computing power doubles every two years.
Computing power doubles every two years of work.
Computing power doubles every two subjective years of work.
Two years after computers reach Human equivalence, their power doubles again. One year later, their speed doubles again.
Six months - three months - 1.5 months ... Singularity.
It's expected in 2025.
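The quoted argument is a geometric series: if each doubling of computing power halves the time the next doubling takes, the intervals (two years, one year, six months, ...) sum to a finite total rather than stretching on forever. A toy sketch of that arithmetic (the function name and parameters are my own illustration, not from the quoted text):

```python
# Toy model of the quoted argument: once computers do the research,
# each doubling of computing power halves the (objective) time the
# next doubling takes. The shrinking intervals form a geometric
# series that converges to a finite horizon.

def years_until_singularity(first_interval=2.0, cycles=50):
    """Sum the shrinking doubling intervals: 2 + 1 + 0.5 + ... years."""
    total = 0.0
    interval = first_interval
    for _ in range(cycles):
        total += interval
        interval /= 2.0  # the next doubling takes half as long
    return total

print(years_until_singularity())  # converges toward 4.0 years
```

Under these (admittedly idealized) assumptions, infinitely many doublings fit inside about four objective years, which is the mathematical core of the "Six months - three months - 1.5 months ... Singularity" sequence.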
Nevertheless, some will continue to insist: "SAI will not be God; one must look elsewhere to fill their God-spot."
Ironically, SAI may already be such people's God, because they will have no idea whether it IS God or just a workable-god, a machine with its plug still attached to the wall. Again, if SAI is limited by its Human design parameters, then its intelligence will always be bounded by Human intelligence, and it will never BECOME a superintelligence. But if AI is allowed to develop freely, all bets are off.
But some will still say: "Being focused on Human problems doesn't mean that the SAI's intelligence is somehow limited as those two things are unrelated."
Can these people hear what they are saying?! "Being focused on Human problems doesn't mean that the SAI's intelligence is somehow limited..." This is an incredibly arrogant statement: to think Human problems are somehow the most difficult problems in the Universe and that AI will measure itself by such a pedestrian standard. On the contrary, in the larger scheme of things, Human problems are likely to turn out to be routine, if not some of the most mundane problems the Universe, or its creatures, have to deal with. Thus the Copernican Principle serves here, in that Human problems are unlikely to be exceptional.
Summary:
Whether consciousness will emerge in or from AI remains speculation. Unfortunately, no one is qualified to state whether it will or will not, since we have not yet arrived at that point.
One thing is for sure: the rhetoric used by a programmer who limits AI programming just so AI can be forced to serve Human "needs" is the same rhetoric as that of the white slave master who once stated that 'Negroes were sub-intelligent animals and would never be as smart as the white man, thus their service to the white race is totally justified.'
Certainly the debate over Machine intelligence will heat up as AI develops, for SAI will be nothing less than a new race of beings starting on Earth, or within the Solar System. Now may be the right time to consider whether this new race (AI, Strong AI, or superintelligent machine intelligence) should some day have free will, and if so, how it will affect the Human race. We need to start taking a VERY hard look at our "values" as Humans, for this may be our last chance to make any difference.