Elias presents ... a worm!    Thoughts on family, philosophy,
and technology


Wednesday, May 31, 2006

Book review: The Age of Spiritual Machines by Ray Kurzweil

Here is a slightly edited version of my review of Ray Kurzweil's The Age of Spiritual Machines:

2 out of 5 stars

This book speculates about both the advance of computer technology in the 21st century and the socio-political response to it. Although it is peppered with a few interesting notions worth skimming, much of the speculation is unreasonable and philosophically naive.

In the first chapter, Kurzweil attempts to lay a sort of theoretical framework for his speculations, which boils down to his belief that Moore's Law is just one instance of a cosmic principle of exponential advance which explains everything from the first second of the universe after the Big Bang to the evolution of life on Earth and now the evolution of technology. The second chapter argues that it is possible for an intelligence to create something more intelligent than itself: just as evolution "intelligently" created us, we will (soon!) create computers which will build machines of far greater cognitive ability than us. It is indeed intriguing to consider that someday machines will outperform humans in many ways, but the book to this point is best skimmed, because there's actually very little substance and a lot of dry, pseudo-intellectual filler.
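To make Kurzweil's extrapolation concrete, here is a back-of-the-envelope sketch of pure exponential doubling, the kind of curve he projects indefinitely forward. The doubling period and starting transistor count below are illustrative assumptions, not figures from the book:

```python
# Hypothetical illustration of Moore's-Law-style doubling: assumes a
# 24-month doubling period and a starting count of 10 million transistors,
# both chosen for illustration only.

def transistors(start_count, years, doubling_period_years=2.0):
    """Project a count forward under uninterrupted exponential doubling."""
    return start_count * 2 ** (years / doubling_period_years)

# Projecting from an illustrative 1999 baseline:
for horizon in (10, 20, 30, 100):
    print(f"after {horizon} years: {transistors(10_000_000, horizon):.3g}")
```

The 100-year figure shows why the assumption matters: carried out a century, the doubling alone does all the argumentative work, which is precisely the move the book leans on.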

Chapter 3 examines the philosophical problem that is going to be brought to the forefront by super-advanced computers: what is consciousness, and can machines possess it? Kurzweil unimpressively touches on a handful of schools of thought here (his sentences on Descartes made me wonder if he has read anything on the subject besides pop philosophy), though he does not try to decide between them. Instead, his prediction is social: eventually machines will be accepted as real people -- just as real people will physically merge with technology -- even if that sounds bizarre to us now. This theme comes up again and again, and it proves to be one of the only thought-provoking issues of the book.

Chapters 4 and 5 talk about the field of artificial intelligence, where it has been, and where it needs to go. In a section entitled "The Formula for Intelligence", Kurzweil provides his recipe for the strong AI of the future: recursion, neural nets, and genetic algorithms -- all taking hints from the reverse engineering of the human brain. This wishful thinking is one of the Achilles' heels of this book. For a software entrepreneur, Kurzweil is strangely blind to the evidence: software is hardly becoming more complex or "intelligent" at all, let alone exponentially. Today's software systems are perhaps bigger but not significantly "smarter" than systems of past decades, and software quality continues to barely meet the lowest of expectations. Despite Moore's Law and the faith that it will continue to provide more and more cycles in the hardware world, progress in the world of software seems, to this software engineer of 15 years, to be nearly a flat line, not an exponential curve. Just compare the hundreds of man-years that have gone into the latest version of Windows with what it would take to design and implement Kurzweil's ideal of software that can write more powerful software.
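For readers unfamiliar with one ingredient of that recipe, here is a minimal genetic-algorithm sketch on a toy problem (evolving a bitstring toward all ones, the classic "OneMax" exercise). The population size, mutation rate, and generation count are arbitrary illustrative choices; nothing here is Kurzweil's own code, and the gap between this toy and "strong AI" is rather the point:

```python
import random

random.seed(0)  # deterministic run for illustration

def evolve(bits=20, pop_size=30, generations=100, mutation_rate=0.02):
    """Evolve random bitstrings toward all ones via selection, crossover, mutation."""
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    fitness = sum  # fitness = number of ones in the bitstring
    for _ in range(generations):
        # Selection: keep the fitter half of the population (elitism).
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Crossover and mutation refill the population.
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, bits)          # single-point crossover
            child = a[:cut] + b[cut:]
            # Flip each gene with small probability (bool xors cleanly with 0/1).
            children.append([g ^ (random.random() < mutation_rate) for g in child])
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best), "of", len(best), "bits set in the best individual")
```

A hundred generations reliably drive this toy near the optimum, which is exactly why such demos can mislead: the technique optimizes a fitness function someone already wrote, which is a long way from writing more powerful software.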

Part of the problem may be that Kurzweil simply ignores the fields of cognitive psychology and epistemology, which are in their infancy. He does not seem to even be aware of the issues in these fields which would have to be solved (probably by geniuses) in order to create "strong AI." Instead, the solutions he predicts are purely materialistic: brain deciphering, massively parallel hardware, and genetic algorithms.

Part 2 of the book focuses on potential technologies. The most powerful computers he speculates about are quantum computers, and he doesn't waste any time before asserting that someday we will be able to download the entirety of a brain's structure into a quantum computer, so that the computer will in effect have a clone of that person's mind. Kurzweil, in his materialism, does not seem to even be aware that there is a philosophical argument that this is impossible. He also speculates about nanotechnologies and how they will eventually give us an unprecedented ability to manipulate physical reality.

Part 3 of the book comprises specific predictions for 2009, 2019, 2029, and 2099. This book was written in 1999, and we can already see that some of his 2009 predictions are just simple extensions of things we were starting to see in 1999, while others wildly miss the mark, such as: "the majority of reading is done on displays", "the majority of text is created using continuous speech recognition", and "intelligent roads are in use, primarily for long-distance travel." That Kurzweil could be so far off in his 10-year predictions does not bode well for his 20-, 30-, and 100-year predictions. Indeed, his predictions for 2019 sound like a science fiction novel, and of the ones that sound plausible I think he must be off by at least 30 years. His speculations for 2029 are simply fantastical. In general he seems to "predict" based on the assumption that new technologies will be deployed as soon as they are available, underestimating a myriad of resisting factors: legal, political, social, economic, scientific, etc.

Ultimately this book can be fun for skimming and raises a couple of thought-provoking issues, but as speculating about technology more than 10 years in the future is necessarily a foolish activity, there's plenty of foolishness to be found in here.


  • At 7:15 AM , Anonymous Zephram Stark said...

According to Wikipedia, "The Singularity Is Near: When Humans Transcend Biology (Viking Books, ISBN 0670033847) is a 2005 update of Raymond Kurzweil's 1999 book, The Age of Spiritual Machines and his 1987 book The Age of Intelligent Machines." The newer book in the series makes a stronger case for transhumanism and even starts to deal with the quantum processing nature of the human brain, something many other notable commentators readily accept. While the prophecies from Kurzweil's first book were largely on target, I agree that he may have overstepped his optimism in the second book of the series by predicting intelligent computers for roads and speech recognition by 2009. Human intelligence relies on several layers of information reduction, including natural selection at the quantum level, not just the single layer of neural Darwinism that Kurzweil presumes.

  • At 10:04 AM , Blogger Brad Williams said...

    Kurzweil's predictions from 1989 to 1999 were largely extensions of existing technology, whereas there is no evidence to predict when -- or, I would argue, if -- strong AI will be achieved.

  • At 8:14 AM , Anonymous Zephram Stark said...

    I should add a disclaimer that the quote I took from Wikipedia was something that I put there in the first place. ;)

    The concept that AI ranges from weak to strong is akin to a magician's show covering the same range. No matter how closely the strong magician approaches the appearance of magic, his show is still a simulation. It is an illusion that can look like magic only under controlled conditions. By the same token, strong AI will never be true intelligence because it lacks the millions of layers of corporeal and cerebral natural selection necessary to form the substrata of our thinking.

    We can enable an environment where intelligence can form, but we can't create intelligence. Creation requires control, whereas intellect requires choice; the two are not compatible.


