Is there a qualitative difference between understanding and the manipulation of symbols? Humans (and animals) are natural champs at the former, while most humans (who may have been flustered by 8th-grade algebra) have trouble with the latter. Computers have gotten quite good at faking understanding by manipulating symbols. They can do algebra and calculus so much faster and more reliably than humans that theoretical physicists now operate on a different level than they did when I was in school 30 years ago.
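To make the contrast concrete, here is a minimal sketch using the open-source sympy library (an illustrative choice of mine, not anything from Hofstadter's article): it applies the formal rules of calculus flawlessly while attaching no meaning whatsoever to the symbols.

    # Symbol manipulation without understanding: sympy applies formal
    # rewrite rules to an expression it has no notion of "meaning" for.
    import sympy as sp

    x = sp.symbols('x')
    expr = x**3 * sp.exp(-x)

    print(sp.diff(expr, x))       # derivative: -x**3*exp(-x) + 3*x**2*exp(-x)
    print(sp.integrate(expr, x))  # antiderivative: (-x**3 - 3*x**2 - 6*x - 6)*exp(-x)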
In this month's Atlantic, Douglas Hofstadter does a thorough (and amusing) job of illustrating the difference between symbol manipulation and understanding. The reason that Google Translate is useful is not that it produces even workmanlike translations, but that we supply human intelligence at the back end to make sense of its output.
"One swallow does not a summer make."
"One swallow does not a thirst quench."
Even the least articulate native speaker dips into the art of language in ways that stymie the most sophisticated AI programs now available. When you hear "a heap of bull," your mind automatically fills in the four-letter word that was omitted for the sake of polite society. The unwashed AI program tries to conjure a bevy of bovines.
For the present, we can agree that AI takes shortcuts: the demonstrations are impressive, but a little probing reveals glaring shortcomings. What about the future?
The prevailing view, which Hofstadter reiterates, is that understanding is in principle something computers can do, but that it requires a huge database of facts about the world, with all the appropriate associations among them.
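As a caricature of that view (a toy of my own construction, nothing so naive appears in the article), understanding becomes lookup over stored facts and their associations; the "heap of bull" confusion above would be resolved by a fact linking the phrase to its slang sense.

    # A toy caricature of the "database of facts" view: word senses are
    # chosen by consulting stored associations. Every fact and name here
    # is invented for illustration.
    FACTS = {
        ("bull", "slang_for"): "nonsense",
        ("bull", "is_a"): "bovine",
        ("heap of", "connotes"): "pejorative",
    }

    def sense_of(noun, modifier):
        """Pick a sense of `noun` using the modifier's stored connotation."""
        if FACTS.get((modifier, "connotes")) == "pejorative":
            return FACTS.get((noun, "slang_for"), FACTS[(noun, "is_a")])
        return FACTS[(noun, "is_a")]

    print(sense_of("bull", "heap of"))   # nonsense -- the idiomatic reading
    print(sense_of("bull", "herd of"))   # bovine   -- the literal reading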
The more radical view is that human minds are doing something qualitatively different from what a computer can ever possibly do. Many laypeople come to this view from common sense. But in the elite world of mathematical philosophers, the only prominent thinker who defends it is Roger Penrose.