Weather at the Frozen North
Wednesday - October 05, 2005 at 02:59 PM in Strong AI
A couple of skeptical articles about Kurzweil's new Singularity book (one here, the other here) reminded me of a point I've been meaning to write about for a long time: the flaw in Strong AI.
I'm not going to take the time to develop my thesis in great depth today (no time), but I'll sketch it. Someday I'll come back to it in more detail.

For those not familiar with the concept, "Strong AI" refers to the idea--common in science fiction and among some of the more speculative researchers--that a sufficiently advanced computer can achieve self-awareness to the same extent as a person. This notion has its passionate adherents, and a sizable contingent of people who think it is utter bunk. I'm in the Utter Bunk category, for lots of reasons.

To begin with, we don't understand what makes something or someone sentient (self-aware). Is sentience an emergent property of hypercomplicated networks? Is it inherent (to a greater or lesser degree) in everything in the universe? Does it require some sort of quantum mechanical entanglement? Is it just an illusion? If sentience is just an illusion, it's a damn convincing one. [Sadly, while that particular counterargument is pithy, it leads to the kinds of discussions you have in a freshman dorm after about four beers. A better argument is that the presence of an illusion presupposes sentience, so claiming that sentience is an illusion is paradoxical.]

Without any understanding of why we experience the world in this glorious 3-D cinematic surround-sound we call "life," it seems strange to claim (without evidence or proof) that the same property can be achieved merely through complicated algorithms.

But the deep flaw in the Strong AI hypothesis is the unstated assumption that a simulation in a Turing Machine is the same as the reality being simulated. For those not familiar with the concept, a Turing Machine is a mathematical abstraction of a digital computer, and all digital computers (as built with current technology) are essentially Turing Machines. Turing Machines have some neat mathematical properties, but they are inherently more limited than an arbitrary system in the real world.
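To make the abstraction concrete, here's a minimal sketch of a Turing Machine in Python. The machine, its transition-table format, and the example program are my own illustration, not anything from the literature: just a state, a tape, a head, and a lookup table of rules. Everything a digital computer does reduces to something like this.

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=1000):
    """Run a Turing machine.

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right). The machine halts when no
    rule applies, or when max_steps is exceeded.
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break  # halt: no rule for this configuration
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# Example program: flip every bit, moving right until the blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
print(run_tm(flip, "1011"))  # -> 0100
```

The point of the sketch is how little is there: a finite rule table rewriting symbols on a tape. Whatever a simulation running on such a device produces, it is still only symbols on a tape.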
Computers can do a very good job of simulating the real world (Stephen Wolfram has an unproven but plausible conjecture that a sufficiently powerful computer can simulate the real world to an arbitrary degree of precision), but a simulation is not reality. Or, quoting Lee Gomes in the above-referenced Wall Street Journal column: "We have increasingly powerful computer models of the weather. But you can run one of them in your backyard until the cows come home and you're not going to make any rain."

We may build computers capable of simulating certain aspects of human intelligence, but that does not make the computers sentient, any more than simulating a thunderstorm will relieve a drought in Oklahoma. To think otherwise smacks of mysticism and magical thinking, the same sort of thinking that leads to cargo cults. Strong AI proponents are guilty of confusing the abstract representation of a thing with the thing itself.