
An Arc Through AI Space

Here are some quotes that reflect the past — and perhaps the future — of AI. I first put them on the Web for the old AI Expert Newsletter: they're revised here, mostly to correct dead links.

Many smart people have been thinking about the AI problem for a long time. There have been many ideas that have been pursued by sophisticated research teams which turned out to be dead ends. This includes all of the obvious ideas. Most grand solutions proposed have been seen before (about 70% seem to be recapitulations of Minsky proposals).
From an answer to the claim "I have the idea for an AI Project that will solve all of AI..." in part 1/6 of the comp.ai FAQ by Mark Kantrowitz, Amit Dubey and Ric Crabbe, 1992-2004.

There has been a long-standing opposition within AI between "neats" and "scruffies" (I think the terms were first invented in the late 70s by Roger Schank and/or Bob Abelson at Yale University). The neats regard it as a disgrace that many AI programs are complex, ill-structured, and so hard to understand that it is not possible to explain or predict their behaviour, let alone prove that they do what they are intended to do. John McCarthy in a televised debate in 1972 once complained about the "Look Ma no hands!" approach.
From Must Intelligent Systems Be Scruffy? by Aaron Sloman, 1990.

I.e. the neat/scruffy distinction may be much older than the labels.

E.g. during most of the 1970s there was an evident and conscious difference of approach (neat vs scruffy) between most of the work done at two of the leading AI labs: people at Stanford University (and SRI?) (inspired by McCarthy and Nilsson, among others?) tended to make a lot of use of logic, theorem provers and general purpose methods (e.g. logic-based planners), whereas work on AI at MIT (led by Minsky and Papert in those days) tended to be characterised by the notion that clean and general methods of representation and general-purpose algorithms could not work, so that a lot of domain-specific knowledge and know-how and representational apparatus was required.

Aaron Sloman in article 35861 of the comp.ai newsgroup, replying to a question about neats versus scruffies, 29 January 1996.

One mathematical consideration that influenced LISP was to express programs as applicative expressions built up from variables and constants using functions. I considered it important to make these expressions obey the usual mathematical laws allowing replacement of expressions by expressions giving the same value. The motive was to allow proofs of properties of programs using ordinary mathematical methods. This is only possible to the extent that side-effects can be avoided. Unfortunately, side-effects are often a great convenience when computational efficiency is important, and "functions" with side-effects are present in LISP. However, the so-called pure LISP is free of side-effects, and (Cartwright 1976) and (Cartwright and McCarthy 1978) show how to represent pure LISP programs by sentences and schemata in first order logic and prove their properties. This is an additional vindication of the striving for mathematical neatness, because it is now easier to prove that pure LISP programs meet their specifications than it is for any other programming language in extensive use. (Fans of other programming languages are challenged to write a program to concatenate lists and prove that the operation is associative).
The implementation of LISP, in History of Lisp, by John McCarthy, 1979.
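
McCarthy's closing challenge is straightforward to meet today in a proof assistant. Below is a minimal sketch in Lean 4 (my illustration, not McCarthy's notation): a pure, side-effect-free concatenation function in the style of Lisp's append, and a proof by structural induction that it is associative.

    -- A pure concatenation function in the style of Lisp's append.
    def cat {α : Type} : List α → List α → List α
      | [],      ys => ys
      | x :: xs, ys => x :: cat xs ys

    -- Associativity, by structural induction on the first list: exactly the
    -- "ordinary mathematical methods" McCarthy has in mind.
    theorem cat_assoc {α : Type} (xs ys zs : List α) :
        cat (cat xs ys) zs = cat xs (cat ys zs) := by
      induction xs with
      | nil => rfl
      | cons x xs ih => simp only [cat, ih]

Lean's own library proves the same fact about its built-in append as List.append_assoc; the proof works only because, as McCarthy says, the function is free of side-effects.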

... elegance is necessarily _unnatural_, only achievable at great expense. if you just do something, it won't be elegant, but if you do it and then see what might be more elegant, and do it again, you might, after an unknown number of iterations, get something that is very elegant.
Discussion in comp.lang.lisp on Filk, puns, and other time wasting, by Erik Naggum, 1998.

Programming in Lisp is like playing with the primordial forces of the universe. It feels like lightning between your fingertips. No other language even feels close.
Glenn Ehrlich, The Road to Lisp Survey Highlight Film.

((What ((is) with (all)) of (the) ()s?) Hmmm?)
From a Slashdot interview with Lisp and Scheme implementor Kent Pitman, November 8th, 2001.

As the release of AutoCAD 2.1 loomed closer, we were somewhat diffident about unleashing Lisp as our application language. This was at the very peak of the hype-train about expert systems, artificial intelligence, and Lisp machines, and while we didn't mind the free publicity we'd gain from the choice of Lisp, we were afraid that what was, in fact, a very simple macro language embedded within AutoCAD would be perceived as requiring arcane and specialised knowledge and thus frighten off the very application developers for whom we implemented it. In fact, when we first shipped AutoCAD 2.1, we didn't use the word "Lisp" at all — we called it the "variables and expressions feature". Only in release 2.18, in which we provided the full functional and iterative capabilities of Lisp, did we introduce the term "AutoLisp".
AutoCAD Applications Interface: Lisp Language Interface Marketing Strategy Position Paper, by John Walker, 1985.

"AI winter" is the term first used in 1988 to describe the unfortunate commercial fate of AI. From the late 1970.s and until the mid-1980.s, artificial intelligence was an important part of the computer business — many companies were started with the then-abundant venture capital available for high-tech start-ups. By 1988 it became clear to business analysts that AI would not experience meteoric growth, and there was a backlash against AI and, with it, Lisp as a commercial concern. AI companies started to have substantial financial difficulties, and so did the Lisp companies.
From The Evolution of Lisp by Guy Steele and Richard Gabriel.

The scruffies regard messy complexity as inevitable in intelligent systems and point to the failure so far of all attempts to find workable clear and general mechanisms, or mathematical solutions to any important AI problems. There are nice ideas in the General Problem Solver, logical theorem provers, and suchlike but when confronted with non-toy problems they normally get bogged down in combinatorial explosions. Messy complexity, according to scruffies, lies in the nature of problem domains (e.g. our physical environment) and only by using large numbers of ad-hoc special-purpose rules or heuristics, and specially tailored representational devices can problems be solved in a reasonable time.
From Aaron Sloman's Must Intelligent Systems Be Scruffy?

In rule-based, or expert systems, the programmer enters a large number of rules. The problem here is that you cannot anticipate every possible input. It is extremely tricky to be sure you have rules that will cover everything. Thus these systems often break down when some problems are presented; they are very "brittle". Connectionists use learning rules in big networks of simple components — loosely inspired by nerves in a brain. Connectionists take pride in not understanding how a network solves a problem.
Marvin Minsky, from Scientist on the Set: An Interview with Marvin Minsky, in Hal's Legacy, edited by David Stork, 1996. Quoted on the AAAI Reasoning page.
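
As a toy illustration of the connectionist half of that contrast (my sketch, not Minsky's), here is the classic perceptron learning rule applied to a single threshold unit, the simplest of the "simple components": the unit learns logical AND from examples rather than from entered rules, and nothing in the final weights explains how.

    # A single threshold unit trained with the perceptron learning rule.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b, rate = [0.0, 0.0], 0.0, 0.1

    for _ in range(20):                        # a few passes over the data
        for (x1, x2), target in examples:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - out                 # no error, no weight change
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            b    += rate * err

    print(w, b)   # weights that now implement AND; the numbers explain nothing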

Despite all the progress in neural networks the technology is still brittle and sometimes difficult to apply.
From the abstract to Initialization and Optimization of Multilayered Perceptrons, by Wlodzislaw Duch, Rafal Adamczak, and Norbert Jankowski, 1997.

It would be best to start with ready software packages. I recommend our own ones, because they are error-free and involve all our know-how; on the contrary, many commercial packages are of no use.
Neural-network researcher Teuvo Kohonen, replying to the question "What tips would you give to programmers wanting to create self-organizing neural networks?" in an interview with generation5, 2000.

All too soon, however, the hopes kindled by AI's second age dimmed as well. Using chips and computer programs, scientists built artificial neural nets that mimicked the information-processing techniques of the brain. Some of these networks could learn to recognise patterns, like words and faces. But the goal of a broader, more comprehensive intelligence remained far out of reach.

And so dawned the third age of AI. Its boosters abandoned hopes of designing the information-processing protocols of intelligence, and tried to evolve them instead. No one wrote the program which controls the walking of Aibo, a $1,500 robotic dog made by Sony. Aibo's genetic algorithms were grown — evolved through many generations of ancestral code in a Sony laboratory.

From 2001: a disappointment?, an Economist feature on evolutionary AI, Dec 20th, 2001.

GAs [Genetic Algorithms] are a terrific approach to searching large, ill-defined spaces, in this case the space of "nice" melodic ideas. There is also an analogy to the "population" of licks that most jazz players have in their heads. These licks come and go over time in a manner similar to evolution; ideas that were cool in the past become overused or cliched, so I stop playing them.
John Al Biles in a 1998 interview with generation5 about his work on the GenJam Genetic Jammer interactive jazz improviser, probably the only evolutionary computation system that is also a working musician.
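
The GA loop Biles describes is easy to sketch. This is my toy code, not GenJam's: in GenJam the fitness comes from a human listener, where here a made-up target lick stands in for "niceness".

    import random

    TARGET = [60, 62, 64, 65, 67, 69, 71, 72]   # a toy "nice" lick, as MIDI notes

    def fitness(lick):                           # stand-in for the human listener
        return -sum(abs(a - b) for a, b in zip(lick, TARGET))

    def crossover(a, b):                         # splice two parent licks
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(lick):                            # replace one note at random
        i = random.randrange(len(lick))
        return lick[:i] + [random.randint(55, 80)] + lick[i+1:]

    pop = [[random.randint(55, 80) for _ in range(8)] for _ in range(30)]
    for generation in range(200):
        pop.sort(key=fitness, reverse=True)      # cliched licks drop out...
        parents = pop[:10]
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(20)]     # ...new ideas breed in

    print(max(pop, key=fitness))                 # the best lick found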

Dealing with ES [Evolution Strategies] is sometimes seen as "strong tobacco", for it takes a decent amount of probability theory and applied statistics to understand the inner workings of an ES, while it navigates through the hyperspace of the usually n-dimensional problem space, by throwing hyperellipses into the deep...
From an account of the Technical University of Berlin's work on Evolution Strategies, one of many detailed descriptions of evolutionary algorithms in Q1.3 of the comp.ai.genetic FAQ by Jörg Heitkötter and David Beasley, 1993-2001.
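
Stripped of the probability theory, the simplest ES is short. Here is a sketch of the (1+1)-ES with Rechenberg's 1/5 success rule (my toy code, not the Berlin group's; the isotropic Gaussian mutation below is a hypersphere, and the quote's "hyperellipses" come from giving each dimension its own adapted step size).

    import random

    def sphere(x):                          # toy objective: minimise sum of squares
        return sum(v * v for v in x)

    x = [random.uniform(-5, 5) for _ in range(10)]    # parent in n = 10 dimensions
    sigma, successes = 1.0, 0

    for step in range(1, 2001):
        child = [v + random.gauss(0, sigma) for v in x]   # Gaussian mutation
        if sphere(child) < sphere(x):       # "+" selection: keep the better of the two
            x, successes = child, successes + 1
        if step % 100 == 0:                 # 1/5 rule: adapt the step size so that
            sigma *= 1.5 if successes > 20 else 0.5       # ~1 in 5 mutations succeeds
            successes = 0

    print(sphere(x))                        # near 0 after 2000 steps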

It is raining instructions out there; it's raining programs; it's raining tree-growing, fluff-spreading, algorithms. That is not a metaphor, it is the plain truth. It couldn't be any plainer if it were raining floppy discs.
Richard Dawkins in The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe Without Design, 1986.

My optimism about the future of intelligent machines is based partly on the evolutionary record. Nature holds the patents on high intelligence. It invented it not once, but several times, as if to demonstrate how easy it was.

The vertebrate retina has been studied extensively. Its 20 million neurons take signals from a million light sensors and combine them in a series of simple operations to detect things like edges, curvature and motion. The image thus processed goes on to the much bigger visual cortex in the brain.

Assuming the visual cortex does as much computing for its size as the retina, we can estimate the total capability of the system. The optic nerve has a million signal carrying fibers and the optical cortex is a thousand times deeper than the neurons which do a basic retinal operation. The eye can process ten images a second, so the cortex handles the equivalent of 10,000 simple retinal operations a second, or 36 million an hour.

Roboticist Hans Moravec in The Endless Frontier and The Thinking Machine, 1978.
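
The arithmetic behind that estimate is worth making explicit (my restatement of the numbers quoted above):

    # Moravec's back-of-envelope estimate, restated from the quote above.
    depth_ratio       = 1000     # cortex ~1000x deeper than one retinal stage
    images_per_second = 10       # images the eye processes each second

    ops_per_second = depth_ratio * images_per_second   # 10,000 retinal ops/s
    ops_per_hour   = ops_per_second * 60 * 60          # 36,000,000 per hour
    print(ops_per_second, ops_per_hour)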

It sometimes seems to me that the brain is actually a very shitty computer. So why would you want to build a computer out of slimy, wet, broken, slow, hungry, tired neurons? ...

I always say, if I wanted to build a computer from scratch, the very last material I would choose to work with is meat. I'll take transistors over meat any day. Human intelligence may even be a poor kludge of the intelligence algorithm on an organ that is basically a glorified animal eyeball.

Richard Wallace, creator of the Alicebot chatbot, in a Slashdot interview, July 26th, 2002.

... I claim that the soul, spirit, or consciousness may exist, but for most people, most of the time, it is almost infinitesimally small, compared with the robotic machinery responsible for most of our thought and action. ...

I say this with such confidence because of my experience building robot brains over the past seven years. Almost everything people ever say to our robot falls into one of about 45,000 categories. Considering the astronomical number of things people could say, if every sentence was an original line of poetry, 45,000 is a very, very small number.

Richard Wallace in the above interview.

Asp, a Swedish researcher who once majored in industrial design, volunteered for the fMRI probe. The scanner revealed a personality quite at odds with her own sense of self.

She searched the scanner's images for the excited neurons in her prefrontal cortex that would reflect her enthusiasm for Prada and other high-fashion goods. Instead, the scanner detected the agitation in brain areas associated with anxiety and pain, suggesting she found it embarrassing to be seen in something insufficiently stylish.

It was fear, not admiration, that motivated her fashion sense.

Robert Lee Hotz on the neurology of consumerism: Mapping the mind: searching for the why of buy, Los Angeles Times, February 27, 2005.

AI is much more likely to be a boon than a threat to humans. In many ways one can best describe AI technology as the development of what my colleague Ken Ford calls "cognitive prostheses": systems that people can use to amplify their own intellectual capacities. Such tools empower people and aid in removing social barriers. To dramatize the point: about a hundred years ago, rapid mental arithmetic was considered an impressive intellectual talent, and people who could do it received academic honors. Nowadays a high-school dropout at a supermarket checkout can tell the customer the total charge in a fraction of a second. A barcode scanner and a computer read-out act as a mental amplifier enabling someone to perform a task that, without it, would require greater mental capacity than he could deploy unaided. True, we don't usually say that the supermarket checkout clerk is using this machinery to think with; but ask yourself: who is earning the wages, the human or the computer?
Naïve Physics researcher Pat Hayes replying in the AAAI FAQ Annex to a student asking about the threat posed by AI.

A creature that was built de novo might possibly be a much more benign entity than one with a kernel based on fang and talon.
SF writer Vernor Vinge, in The Coming Technological Singularity: How to Survive in the Post-Human Era.

In Asimov's robot novels, the Frankenstein Complex is a major problem for roboticists and robot manufacturers. They do all they can to calm the public and show that robots are harmless, sometimes even hiding the truth because the public would misunderstand it and take it to the extreme.
Discussion of Isaac Asimov's Three Laws of Robotics in the Wikipedia entry for Frankenstein complex.

Artificial intelligence is the study of how to make real computers act like the ones in the movies.
Anonymous quote in Port 2000 Newsletter, The Information Technology Newsletter for Port Washington Educators, taken from Stottler Henke's Artificial Intelligence Quotations.

Yes, now there is a God.
The computer from Fredric Brown's short story Answer, quoted in the Wikipedia List of fictional computers.

By the way, I am collecting anecdotes about the history of AI. I'd love to have anything you can tell me: please mail popx@j-paine.org .