AI - The art and science of making computers do interesting things that are not in their nature.
It's a light issue this month, because of competing demands from a forthcoming software demo. So here's a selection of AI miscellanea: an assortment of news items (it really is a pity about Aibo), research papers, advice for researchers, and a very nifty applet; a poem, some songs, and stories which demonstrate the need for common sense; not to mention vacuum cleaners on Mars. Next month, I'll have a feature on some unusual machine-learning techniques.
Edward Ordman's poem about a lady who "applied her intellect keen / To capture the soul of a new machine". Based on Gilbert and Sullivan's Darwinian Man, and brought to you by the site of IRAS, The Institute for Religion in an Age of Science.
I discovered this 400-page thesis through AI Buzz news. It gives, says AI Buzz, a thorough look at the COG robot, and describes a variety of machine-learning methods by which the robot learns about actions, objects, scenes, and people from its caregivers. Unfortunately, the size of the PDF seems to hang my browser, but you may have better luck.
Other research papers are linked from the COG research page: there's one on giving COG a theory of mind, and another on enabling it to sense the energy consumed in moving its limbs so that it can move more humanly. COG is a long-term project; it's interesting to look in from time to time and see how it's getting on.
What is a theory of mind? On the first page, Gloria Origgi, University of Bologna, explains. It has been proposed that people with autism lack a theory of mind: this idea is briefly explained on Origgi's page, and amplified in the second, An interview with: Professor Uta Frith at in-cites. Frith describes brain-imaging studies on normal and autistic people, and the differences they reveal in the brain regions that ascribe mental states to other individuals.
I searched to see whether theories of mind are mentioned in the well-known autism-related novel by Mark Haddon, The Curious Incident of the Dog in the Night-Time. Indeed they are, according to Polly Morrice's New York Times review Autism as Metaphor. She explains what to her, as mother of an autistic child, is the least believable aspect of Haddon's book.
James Meehan wrote the story-generating program Tale-Spin for his dissertation The Metanovel: Writing Stories by Computer. My link is to Masoud Yazdani's introductory paper on Computational Story Writing which gives several examples of Tale-Spin output. In AESOP-FABLE GENERATOR mode, Tale-Spin would ask for two characters and try to tell the Aesop "Never trust flatterers" fable about them. Yazdani explains how. For him as for me, the most interesting stories are the Mis-Spun tales, in which Tale-Spin turns out to lack some vital piece of knowledge and produces a quirkily flawed tale.
Tale-Spin was one of several programs reconstructed and simplified for Roger Schank and Christopher Riesbeck's 1981 book Inside Computer Understanding: Five Programs Plus Miniatures. I found a page at the Electronic Literature Organization which says that the program has been translated into Common Lisp (by Warren Sack in 1992). The source is available at www.eliterature.org/images/microtalespin.txt. If you plan to use it, search the Web: I found at least two other versions near the top of a Google search for "talespin meehan".
Below, I quote a mis-spun Tale-Spin tale, found in the .sig dictionary of Nick Nicholas, a Lojbanist whom I mentioned in my March feature on that language. It goes:
Henry Squirrel was thirsty. He walked over to the river bank where his good friend Bill Bird was sitting. Henry slipped and fell in the river. Gravity drowned.

Nicholas explains the logic: since Gravity is pulling Henry into the river, and Gravity has no mates, arms, or legs to extricate it from the river, Gravity is doomed to a watery grave. Commonsense reasoning is essential if AI programs are to avoid such mistakes. My link above is to the MIT Media Lab, which houses one of many teams working on this: take a look at their projects, including the Open Mind Common Sense Web site for friendly knowledge capture. Closely related is Open Mind Experiences, an attempt to gather commonsense story knowledge from the public.
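To see how little knowledge the fix needs, here is a minimal sketch (invented for this column, not Meehan's actual code) encoding the one fact Tale-Spin lacked - that only animate, embodied things can drown - and using it to filter candidate story events:

```python
# Toy illustration of a commonsense precondition check.
# All names and structures here are made up for the example.

ANIMATE = {"Henry Squirrel", "Bill Bird"}   # creatures with bodies
# Gravity is an abstract force: no arms, legs, or lungs.

def plausible(event):
    """Reject events whose commonsense preconditions fail."""
    actor, verb = event
    if verb == "drowned":
        return actor in ANIMATE             # Gravity fails this test
    return True

story = [("Henry Squirrel", "fell in the river"),
         ("Gravity", "drowned")]            # the mis-spun ending

checked = [e for e in story if plausible(e)]
print(checked)   # the "Gravity drowned" event is filtered out
```

A real system needs vastly more than one such rule, of course - which is precisely the knowledge the Open Mind projects are trying to collect from the public.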
Look up some of the 260 Bongard problems - analogy puzzles - in this link. How many can you solve? The page's author, Harry Foundalis, is writing a program to do so. I've not found much information about it; but the problems themselves are an interesting challenge in computational cognition.
"The Aibo lived seven years - or 49 if you count robotic dog years". Mercury News report on Sony's disappointing decision to discontinue robotic toys, including Aibo and the humanoid Qrio. The report quotes David Calkins, a robotics professor at San Francisco State University on how most people don't know of Aibo's many features such as the abilities to recognise its owner and to let them keep an eye on the home through Aibo's head-mounted camera:
"I talk to people all the time and they say 'who wants to spend $2,000 on a dumb little toy'... It didn't have to die. They just never really marketed it to bring their costs down," said Calkins.
Here's research by R. Téllez, C. Angulo and D. Pardo on using a distributed neural architecture to implement Central Pattern Generators (neural oscillators) that control Aibo's gait. It's a shame there will be no more Aibos, because you can download the authors' open-source software from this page and run it on your own.
NASA Sends Roomba, Saves Billions, by Alan Graham. Faux-news story about NASA's deployment of Roomba robotic vacuum cleaners as unmanned Mars explorers.
It's the jockeys that are the robots here, not the camels. This is The Peninsula, "Qatar's leading English daily", on how robot camel jockeys are to be used instead of children in the dangerous sport of camel racing. I picked this one up from AAAI's AI NewsToons by way of their March 2005 archive of AI news articles. Wired's Robots of Arabia has more on the story, including pictures. Could there be a market for robot camels too?
No, not a new coinage for relations between foreman and shop-floor worker. This paper on Insect Societies and Manufacturing by Vincent Cicirello and Stephen Smith at the CMU Robotics Institute describes examples of behaviour in social insects such as wasps and ants, and how these have inspired solutions to optimisation and scheduling problems. The way wasps allocate themselves to tasks such as foraging and brood care is one example; the authors explain how it was imitated in allocating jobs between factory floor machines.
An essay by Henry G. Baker, author of several of the famous MIT HAKMEM memos. Baker says in his introduction that: "Computer scientists should have a knowledge of abstract statistical thermodynamics. First, computer systems are dynamical systems, much like physical systems, and therefore an important first step in their characterization is in finding properties and parameters that are constant over time (i.e., constants of motion). Second, statistical thermodynamics successfully reduces macroscopic properties of a system to the statistical behavior of large numbers of microscopic processes. As computer systems become large assemblages of small components, an explanation of their macroscopic behavior may also be obtained as the aggregate statistical behavior of its component parts". This is a good - and unusual - introduction to some concepts of dynamical systems.
The maths department at Warwick is well-known for its research on dynamical systems, topology, and catastrophe theory, not least because of the leadership of Christopher Zeeman. Zeeman has proposed modelling the brain as a dynamical system in a space of high dimension (as "flow on a manifold"). In this article, Tall applies the notion to the learning of mathematical concepts, showing how learning one concept might be blocked by a conflicting earlier version of it. Tall's other papers, on this topic and on his later work in mathematical education, are referenced at his publications page.
Alex Champandard's AI Depot essay on the film Memento, about a man who loses the ability to lay down new memories. What has this to do with AI? Champandard links it to the topics of reactive behaviour, deliberative planning, and emergent intelligence; and he says that he is using these concepts in his work on robot navigation, about which he also has an introductory feature at AI Depot.
Another piece by Henry Baker, in which he tells the real story of the Ada Project. The title comes from The Wizard of Oz: Baker ends his story with the song If I Only Had Ada, based on the Ozian If I only had a Brain.
It's a shame if you can't read French, because I don't think the page linked here has been translated. It explains how to run this excellent little game in which you build Biobloc creatures and let them learn to walk via a genetic algorithm. The site containing the game itself, biobloc.epfl.ch/, is available in English as well as French. It leads you to a pleasingly efficient and easy-to-use applet in which you can snap together pieces of creature, rotate them and stretch them, and then run the genetic algorithm.
An interesting post and discussion on Linux users' site KernelTrap, about Jake Moilanen's genetic algorithm for automatically tuning the kernel. The second link is to an updated version of the algorithm: search the page for "warm" and "fuzzy" to get a cynical view of why a genetic algorithm was used. Slashdot also has views on the topic: look for "Earlybird (56426)" for the wise words that genetic algorithms are not mystical or magical: they're just a search method, whose properties make them appropriate for some particular tasks.
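Earlybird's point - that a genetic algorithm is nothing magical, just a population-based search method - is easy to see in miniature. Here is an illustrative sketch (not Moilanen's kernel code; the "benchmark" and its optimum are made up) that evolves a vector of tuning parameters towards whatever setting scores best:

```python
import random

random.seed(0)

# Illustrative genetic algorithm: search for a vector of pretend
# "tuning parameters" that maximises a toy benchmark score.
TARGET = [3, 1, 4, 1, 5]                      # made-up optimum settings

def fitness(genome):
    # Higher is better: negative distance from the (unknown) optimum.
    return -sum(abs(g - t) for g, t in zip(genome, TARGET))

def mutate(genome):
    g = genome[:]
    i = random.randrange(len(g))
    g[i] += random.choice([-1, 1])            # nudge one parameter
    return g

def crossover(a, b):
    cut = random.randrange(1, len(a))         # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 9) for _ in range(5)] for _ in range(20)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                        # keep the fitter half
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(10)]

best = max(pop, key=fitness)
print(best, fitness(best))
```

Selection keeps the fitter half, crossover and mutation generate new candidates; nothing more mystical than hill-climbing with a population, which is why it suits rugged search spaces like kernel-parameter tuning.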
I didn't know until reading the last set of posts that the open-source database Postgres uses genetic algorithms as a query optimiser. This page briefly explains how.
I suspect Alan Bundy, Ben du Boulay, Jim Howe, and Gordon Plotkin would not agree with Blackwell. In this guide for those doing thesis research, they advise on overcoming the psychological hurdles of Fear of Exposure, Theorem Envy, and Research Impotence, as well as the standard pitfalls of Solving the World, Yet Another Language, and Ambitious Paralysis. Oh, and Computer Bum. Recommended to every research student; indeed, to all researchers, including those in commercial R & D.
Incidentally, the guide reminds me of one pitfall that I referred to in my September issue. In the comp.ai FAQ, the FAQ's authors Mark Kantrowitz, Amit Dubey and Ric Crabbe answer an assortment of questions. In reply to the claim "I have the idea for an AI Project that will solve all of AI", they say:
Many smart people have been thinking about the AI problem for a long time. There have been many ideas that have been pursued by sophisticated research teams which turned out to be dead ends. This includes all of the obvious ideas. Most grand solutions proposed have been seen before (about 70% seem to be recapitulations of Minsky proposals).

[ Jocelyn Ireson-Paine's Home Page ]