Forbes recently published a nice collection of articles on AI and its current status:
One problem with many critics of AI is that they overestimate "human understanding". And they are not fair to computers: after all, we humans also sometimes fail to understand a word, a joke, or a CAPTCHA graphic (intended to inexpensively differentiate humans from simple computers).
We have to see that some applications (e.g. numeric computing, storage, and storage-based applications such as Google's search) benefit directly from Moore's law and grow exponentially in power, while others benefit only much more weakly and grow only linearly (e.g. speech recognition and natural language understanding).
Therefore, computers can often solve problems directly by "brute force", e.g. in translation by learning from all accessible Chinese-English sentence pairs. In this Forbes series, Peter Norvig of Google describes this as "unsupervised learning": given a sufficiently large base of material, e.g. master chess games or spoken English, the computer solution becomes excellent. In many cases the computer is better by orders of magnitude at pattern finding and association detection!
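The kind of statistical pattern finding described here can be sketched in miniature: simply counting which word most often follows another in a corpus already yields a crude predictive model, and the same counting idea scales to the billions of sentences real systems learn from. The corpus and function names below are my own illustration, not anything from the Forbes articles.

```python
from collections import Counter, defaultdict

# Toy corpus; real systems learn from billions of sentences.
corpus = [
    "the cat sat on the mat",
    "the cat ate the fish",
    "the dog sat on the rug",
]

# For each word, count which word follows it (a bigram model).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        following[a][b] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often here
```

No grammar or "understanding" is programmed in; the regularity emerges purely from counting, which is exactly where more data directly buys more capability.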
But orthogonal to this success is improvement by method, i.e. in software:
Every software programmer knows that a naive approach and a well-designed one can differ in performance by many orders of magnitude, even for elementary tasks!
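A classic illustration of this (my own example, not one from the articles) is computing Fibonacci numbers: the naive recursive method takes exponential time, while an iterative method computing the same values takes linear time. Around n = 40 the naive version already needs minutes where the iterative one needs microseconds.

```python
def fib_naive(n):
    # Exponential time: recomputes the same subproblems over and over.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_fast(n):
    # Linear time: the same mathematics, a better method.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Both agree on the result; only the cost differs.
assert fib_naive(25) == fib_fast(25) == 75025
```

Hardware gives everyone the same exponential boost; a better algorithm is a multiplier on top of it, which is why progress in method matters as much as Moore's law.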
The Turing Test will meet the same fate as other negative predictions, e.g. "computers will never be able to drive a car" (I remember heated discussions about this): the issue will simply become meaningless and uninteresting because it will be obvious that computers can. Then the next human task will come into focus, e.g. (some degree of) creativity or biomimetic personal robots, as Kevin Warwick explains in the parallel article. And finally, there will be no "natural" and "artificial" intelligence, just intelligence: if you use paper and pencil to make notes and support your thoughts, you are not ashamed of that either.