
Neural Network Urban Legends

Researchers at the Chalmers University humanoid-robot research project have reported exasperation at their latest robot's continuing failure to master somatosensory localisation. It still can't tell its arse from its elbow. This week, I've been pondering neural net anecdotes. Lenn Redman, in his book How to Draw Caricatures, tells the following little story:

Life magazine once ran a two-page spread of about 40 photographs of different persons. Half of them were of college professors, scientists, and esteemed businessmen. The other half were criminals ranging from thieves to rapists to murderers. The magazine feature was a fun contest for the reader to see if he could tell the good citizens from the criminals. My wife and I tried it. My score was about 30 percent right; her score was 100 percent right. Did she have special insight? Yes, but not about faces. She observed that half the photographs had the same draped background and deduced correctly that the criminals were photographed at the same locale.

This reminded me of the isomorphic and infamous anecdote about the military neural net trained to detect camouflaged tanks. As Neil Fraser tells it in Neural Network Follies, the net was trained on 100 battlefield scenes, each containing either a tree with a tank hiding behind it, or a tree but no tank. When the trained net was tested on another 100 such scenes, it had indeed, with a superb zero error rate, learnt to distinguish tank photos from non-tank photos. But not, researchers later discovered, because of the tanks. The non-tank photos had all been taken on a sunny day; the photos with tanks, on a cloudy day.
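
The failure is easy enough to reproduce on your own machine. Here is a tiny, purely illustrative sketch in Python (toy "photographs" and invented numbers, nothing to do with Fraser's original data): an ordinary logistic-regression classifier is trained on scenes in which a confound, overall brightness, predicts the label perfectly, just as cloud cover did for the tanks. It scores perfectly on a test set that shares the confound, and collapses the moment the tanks are photographed on sunny days instead.

    import numpy as np

    rng = np.random.default_rng(0)
    SIZE = 16 * 16   # each toy "photograph" is a flattened 16x16 patch

    def make_scene(has_tank, sunny):
        img = rng.normal(0.0, 0.1, SIZE)
        img += 0.8 if sunny else 0.2   # the confound: sunny scenes are brighter overall
        if has_tank:
            img[:8] += 0.3             # a faint "tank" occupying a few pixels
        return img

    def make_set(n, tanks_on_cloudy_days):
        X, y = [], []
        for _ in range(n):
            has_tank = rng.random() < 0.5
            sunny = (not has_tank) if tanks_on_cloudy_days else has_tank
            X.append(make_scene(has_tank, sunny))
            y.append(float(has_tank))
        return np.array(X), np.array(y)

    # Training set and first test set share the confound: tanks appear only on cloudy days.
    X_train, y_train = make_set(100, tanks_on_cloudy_days=True)
    X_same, y_same = make_set(100, tanks_on_cloudy_days=True)
    # The second test set breaks the confound: these tanks were photographed on sunny days.
    X_broken, y_broken = make_set(100, tanks_on_cloudy_days=False)

    # Centre the pixels on the training mean, as one normally would.
    mu = X_train.mean(axis=0)
    X_train, X_same, X_broken = X_train - mu, X_same - mu, X_broken - mu

    # Plain logistic regression fitted by gradient descent.
    w, b = np.zeros(SIZE), 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))
        w -= 0.01 * X_train.T @ (p - y_train) / len(y_train)
        b -= 0.01 * (p - y_train).mean()

    def accuracy(X, y):
        return ((X @ w + b > 0).astype(float) == y).mean()

    print("confound intact:", accuracy(X_same, y_same))      # close to 1.0
    print("confound broken:", accuracy(X_broken, y_broken))  # close to 0.0: it learnt the weather

Nothing in the training data rewards the classifier for looking at the tank pixels rather than the sky, so it takes the cheaper cue, exactly as the story's apocryphal net did.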

I've heard this one several times: always, it seems, from a lecturer who had actually been taught by a supervisor who once worked with a colleague who met a man at a conference who really knew the administrator down the corridor who personally sacked the assistant responsible for taking the training photos.

No doubt other disciplines have their own anecdotes. Credit-risk researchers might talk about the credit-scoring net that a large bank was developing one year-end. It correctly predicted both low-risk and high-risk customers in its training data, but only because the low-risk data had been amassed in November and the high-risk in January. November accounts were still bloated with pre-Christmas savings; by January, these had been spent. Natural-language researchers reminisce about the neural net trained to vocalise fruit names. When shown a grape, it let out a little whine.

But what I want to know about the tanks anecdote is: why do neural networks always get the blame? No-one ever tells such a story about genetic algorithms or symbolic learning techniques.