The Naïveté of "Exponential" Growth


In 2005, author Ray Kurzweil published The Singularity Is Near: When Humans Transcend Biology, a reworking of his two previous books on what he perceived to be the approaching technological singularity. Kurzweil's central argument hinges on the observation that technological growth up until now has very closely followed a mathematical exponential curve. Gordon Moore predicted in 1965 that the number of transistors on an integrated circuit would double every two years, a prediction that has proven strikingly accurate for the last forty years. Kurzweil claims that the degree of correlation between real-world trends such as the one described by Moore's Law and a theoretical exponential curve gives us the ability to predict how fast technology will continue to grow throughout the twenty-first century.
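The arithmetic behind the trend is simple to sketch. As a minimal illustration (the starting count below is the roughly 2,300 transistors of Intel's first microprocessor, the 4004, used here only as an example), doubling every two years for forty years means twenty doublings, a factor of over a million:

```python
# A minimal sketch of the exponential trend Moore described: if transistor
# counts double every two years, the count after t years is n0 * 2**(t / 2).
def moores_law(n0: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a count forward assuming pure exponential doubling."""
    return n0 * 2 ** (years / doubling_period)

# Forty years at one doubling per two years is 20 doublings: a factor of 2**20.
print(moores_law(2300, 40))  # ~2.4 billion, starting from the 4004's ~2,300
```

It is exactly this kind of extrapolation that Kurzweil extends decades into the future, and that the critics below call into question.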

Kurzweil's singularity argument embodies the more general belief that the technological singularity is quickly approaching because of the trend of exponentially improving technology. One crucial flaw in this claim is that indefinite exponential growth has no grounding in the physical world. In his article "The Singularity Myth," physicist Theodore Modis illustrates that in countless real-world examples, growth which initially appears to be exponential actually follows a logistic "S-curve." World human population and oil production are both trends that experienced explosive growth, followed by a dramatic slowdown as they approached their predicted ceilings. In fact, in a 2005 interview, Moore himself stated that his prediction of exponential growth would not hold indefinitely, simply due to "the nature of exponentials."
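The danger Modis points to is that the two curves are nearly indistinguishable early on. A minimal sketch, with illustrative parameters rather than fitted data, shows an unbounded exponential and a logistic curve that share the same early shape but diverge sharply once the logistic nears its ceiling:

```python
import math

def exponential(t: float, n0: float, rate: float) -> float:
    """Unbounded exponential growth: n0 * e^(rate * t)."""
    return n0 * math.exp(rate * t)

def logistic(t: float, ceiling: float, rate: float, midpoint: float) -> float:
    """Logistic growth: levels off at `ceiling` instead of growing forever."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Choose the midpoint so both curves start near 1.0 at t = 0.
CEILING, RATE = 1000.0, 0.5
MIDPOINT = math.log(CEILING) / RATE

for t in (0, 5, 10, 15, 20):
    e = exponential(t, 1.0, RATE)
    s = logistic(t, CEILING, RATE, MIDPOINT)
    print(f"t={t:2d}  exponential={e:10.1f}  logistic={s:7.1f}")
```

At small t the two are almost identical, which is why early data cannot distinguish them; by t = 20 the exponential has blown far past the logistic's ceiling. Extrapolating an exponential from the early, ambiguous stretch of an S-curve is precisely the mistake Modis attributes to Kurzweil.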

Graph showing "S-curve" shape of world population growth, courtesy of "The Singularity Myth"

If the growth of technology can be more accurately modeled as a logistic curve where improvements eventually taper off as artificial intelligence hits its theoretical ceiling, there must be specific explanations for the existence of this ceiling. Various critics of technological singularity have offered different reasons why improvements in technology will never reach the explosive "runaway" point at which singularity occurs. Jeff Hawkins, founder of Palm Computing, argues that belief in the existence of the singularity is based on a naïve interpretation of intelligence. Instead of an unbounded "intelligence explosion," we are much more likely to experience consistent improvements in technology limited by time, experience, and other finite resources. Hawkins predicts, "There will be no single godlike intelligent machine. Like today's computers, intelligent machines will come in many shapes and sizes and be applied to many different types of problems."

Artificial intelligence researcher Ben Goertzel expands upon Hawkins' prediction by drawing a distinction between "narrow artificial intelligence" and "artificial general intelligence." The former best describes the current state of research in artificial intelligence: a focus on "the solution of important, fascinating, but very narrowly-defined technical problems," such as Deep Blue in the realm of chess or Watson in the realm of Jeopardy. On the other hand, artificial general intelligence (AGI) encompasses the idea of a machine intelligent enough in the general sense to parallel the intelligence of a human being. Technological singularity rests on the idea that intelligent machines falling into this second category will someday be able to replace humans. However, the loose and oftentimes nonexistent connection between progress in narrow artificial intelligence and progress in AGI suggests that singularity is unlikely to occur in the near future, if at all.

Watson: a one-trick pony

In a 2008 IEEE Spectrum article, Gordon Moore offered his own view on the plausibility of technological singularity. Moore argues that it is unlikely ever to occur because of the complexity with which the human brain operates. Like Hawkins, he recognizes that designing an "intelligent machine" at the level of a human, capable of recursively improving upon its own capabilities, requires much more than "just the intellectual capability." Most crucially, it is naïve to treat intelligence as a one-dimensional, quantifiable characteristic of humans or computers. Hawkins and Goertzel both observe that machine intelligence has so far succeeded only on narrowly defined technical challenges, a far cry from the notion of a super-intelligent computer brain.

Critics of the technological singularity idea speak through a common thread of arguments: unbounded explosive growth in artificial intelligence is not technically feasible as Kurzweil claims in The Singularity Is Near. Rather, the rate at which technology improves will face limiting factors, be they physical constraints, the narrowness of current artificial intelligence research, or the complexity of the human mind and experience.
