
As I suggested in Chapter 1, the large, highly evolved sensory and motor portions of the brain seem to be the hidden powerhouse behind human thought. By virtue of the great efficiency of these billion-year-old structures, they may embody one million times the effective computational power of the conscious part of our minds. While novice performance can be achieved using conscious thought alone, master-level expertise draws on the enormous hidden resources of these old and specialized areas. Sometimes some of that power can be harnessed by finding and developing a useful mapping between the problem and a sensory intuition.

Although some individuals, through lucky combinations of inheritance and opportunity have developed expert intuitions in certain fields, most of us are amateurs at most things. What we need to improve our performance is explicit external metaphors that can tap our instinctive skills in a direct and repeatable way. Graphs, rules of thumb, physical models illustrating relationships, and other devices are widely and effectively used to enhance comprehension and retention. More recently, interactive pictorial computer interfaces such as those used in the Macintosh have greatly accelerated learning in novices and eased machine use for the experienced. The full sensory involvement possible with magic glasses may enable us to go much further in this direction. Finding the best metaphors will be the work of a generation: for now, we can amuse ourselves by guessing.

Hans Moravec, in Mind Children [Moravec 1988].

This is an essay on why I believe category theory is important to computer science, and should therefore be promoted; and on how we might do so. While writing it, I discovered the passage I've quoted above. My ideas are closely related, and since there's nothing more pleasing than being supported by such an authority, I've quoted it here.

Category theory has been around since the 1940s, and was invented to unify different treatments of homology theory, a branch of algebraic topology [Marquis 2004; Natural transformation]. It's from there that many of the examples used in teaching category theory to mathematicians come. That's a shame, because algebraic topology is advanced: probably postgraduate level. Examples based on it are of little use below that level, and of little use to non-mathematicians. The same applies to much of the other maths to which category theory has been applied.

It matters because category theory is a great source of unifying concepts and organising principles, as Joseph Goguen points out in his "Categorical Manifesto" [Goguen 1991]. If these could be taught in the right way, they could help many researchers unify and formalise existing concepts. More: we should *encourage* researchers to do so.

I shall give four main examples.

This example is from my own work. In the 1960s, Joseph Goguen produced a formalisation of what it means for something to be an object, and what it means to build a system out of interacting objects. This is described in his paper "Sheaf Semantics for Concurrent Interacting Objects" [Goguen 1992].

Using a categorical construction called "limit", Goguen showed that the behaviour of a collection of interacting objects is the "limit", in a precisely definable sense, of the behaviours of the individual objects and their interactions. By "object", he didn't mean object in the sense of object-oriented programming, but in its everyday sense: an economic object, an electronic object, a social object, and so on. This was an extremely general result, which he saw as an important contribution to General Systems Theory. In essence, he'd explained what happens when you put together parts to make a system.

I discovered this work by accident (I was visiting a friend doing a D.Phil. with Goguen, and found the paper on his desk), and realised I could apply it to spreadsheets. As a result, I now have what is probably the world's first software for building spreadsheets from modules that can be independently coded, debugged and documented, then put together to form a complete spreadsheet: see "Excelsior: bringing the benefits of modularisation to Excel" [Ireson-Paine 2005]. Moreover, I also have what would seem to be the first method for delivering spreadsheet modules over the Web so that Excel users can "glue" them into their own spreadsheets: see "Less Excel, More Components: presentation to EuSpRIG 2008" [Ireson-Paine 2008]. By conferring the benefits of prefabrication and modularity, this will surely cut spreadsheet programming costs and reduce errors.

Goguen's work also, I discovered by reading "The semantics of Clear, a specification language" [Burstall and Goguen 1980], inspired the research on modular specification languages, which led to the OBJ family of languages [Goguen 2005], and eventually to Casl, the Common Algebraic Specification Language [Casl].

Recently, John Baez and Mike Stay published a paper entitled "Physics, Topology, Logic and Computation: A Rosetta Stone" [Baez and Stay 2008]. This is a quite different approach to a general theory of systems.

The original Rosetta Stone contained three versions of a single passage: one in hieroglyphic script, one in Demotic, and one in Greek. It provided an invaluable key to deciphering hieroglyphs. Now, say Baez and Stay, computer scientists, mathematicians, and physicists badly need another key:

By now there is an extensive network of interlocking analogies between physics, topology, logic and computer science. They suggest that research in the area of common overlap is actually trying to build a new science: a general science of systems and processes. Building this science will be very difficult. There are good reasons for this, but also bad ones. One bad reason is that different fields use different terminology and notation.

It's this key that their paper provides, by giving a formalism into which key concepts from physics, topology, logic and computation can all be translated. Near the end of the paper, they draw a simple diagram of overlapping strings. This, they explain, can be interpreted as a computation, or as a mathematical entity called a "tangle", or as a quantum process. This is thrilling, because a "tangle" can be implemented as a quantum process by regarding it as a recipe for moving around "anyons". These are particle-like excitations that can be created in two-dimensional systems such as superconducting thin films in intense magnetic fields. This should give us a very direct method for implementing computations as quantum processes.

In "Categorical Manifesto", Goguen asserts several "dogmas", each of which suggests a use for a particular categorical construction. The dogma for the section on "Colimits" reads:

Given a species of structure, say widgets, then the result of interconnecting a system of widgets to form a super-widget corresponds to taking the colimit of the diagram of widgets in which the morphisms show how they are interconnected.

Roughly speaking, what this means is that category theory prescribes a way in which one can depict each widget as a dot, and the relations that describe how they interconnect as arrows. The resulting diagram has a precise interpretation, and by applying the categorical operation called "colimit" to it, one can derive a description of the "super-widget" thus formed.
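To make the dogma concrete, here is a minimal sketch in Python. It is my own toy construction, not Goguen's formalism: it computes the pushout, the simplest interesting colimit, for two finite sets of "widget" elements glued along a shared interface.

```python
# A toy colimit: two finite sets A and B, glued along an interface
# that identifies some elements of A with elements of B. The pushout
# is the disjoint union of A and B, quotiented by those
# identifications; we compute it with union-find over tagged elements.

def pushout(A, B, interface_pairs):
    """Pushout of A <- I -> B for finite sets.
    interface_pairs lists (a, b) with a in A, b in B that the
    interface identifies. Returns the glued set as a set of
    frozensets of tagged elements (the equivalence classes)."""
    elems = [('A', a) for a in A] + [('B', b) for b in B]
    parent = {e: e for e in elems}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    for a, b in interface_pairs:
        union(('A', a), ('B', b))

    classes = {}
    for e in elems:
        classes.setdefault(find(e), set()).add(e)
    return {frozenset(c) for c in classes.values()}

# Two "widgets" sharing one interface element: A's port 'p' is
# identified with B's port 'q'.
glued = pushout({'x', 'p'}, {'q', 'y'}, [('p', 'q')])
print(len(glued))  # 3 elements: x, y, and the merged p=q
```

The "super-widget" is the glued set; the morphisms of the dogma are the two tagging maps from A and B into it.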

Many computer scientists have used this idea to give a precise semantics to specification languages: to describe exactly how combining specifications of parts of a system results in a specification of the entire system. This is what I referred to at the end of Section 3.1.

However, most (all?) specification languages are based on logic, and the colimit dogma goes well beyond that. For example, in "Category Theory and Higher Dimensional Algebra: potential descriptive tools in neuroscience" [Brown and Porter 2008], Ronald Brown and Timothy Porter suggest that colimits may be useful in explaining brain activity, by giving modellers a unified way to "put together" component neurons and neural structures into a coherent whole. This deserves to be much better known in the biological sciences.

And in "Category Theory Applied to Neural Modeling and Graphical Representations" [Healy 2000], Michael Healy shows how to build compound neural nets from simpler components using colimits. (He also shows how to combine several neural representations of the same concept into one by using "natural transformations".) Such a principled approach is, I believe, almost unknown in neural network research. However, Healy's neural nets are unlike most neural nets studied in the field, because they are "localist", using one node per concept. It would be very good to see similar work attempted on other kinds of net.

I would like researchers to know that category theory can provide unifying concepts like these, and that it's worth trying to use them and to extend their scope. In the spirit of this quote from science-fiction writer (and mathematician) Greg Egan's recent novel "Incandescence" [Egan 2008]:

'Interesting Truths' referred to a kind of theorem which captured subtle unifying insights between broad classes of mathematical structures. In between strict isomorphism — where the same structure recurred exactly in different guises — and the loosest of poetic analogies, Interesting Truths gathered together a panoply of apparently disparate systems by showing them all to be reflections of each other, albeit in a suitably warped mirror. Knowing, for example, that multiplying two positive integers was really the same as adding their logarithms revealed an exact correspondence between two algebraic systems that was useful, but not very deep. Seeing how a more sophisticated version of the same could be established for a vast array of more complex systems — from rotations in space to the symmetries of subatomic particles — unified great tracts of physics and mathematics, without collapsing them all into mere copies of a single example.

So here are some suggestions for research areas that I think worth attacking.

I've long been impressed by Douglas Hofstadter's Copycat program for solving letter-analogy problems. But Hofstadter never gives a concise "denotational" account of its semantics; he always explains it in terms of how the program operates. And this is complicated, being built on the idea of enzyme-like agents that dock onto parts of concepts, swimming around in a kind of protoplasm of concepts.

I believe this, and related programs such as Robert French's Tabletop, could be given a decent semantics using Goguen's categorical approach to conceptual blending described in "Mathematical Models of Cognitive Space and Time" [Goguen n.d.].

"Unification" is a kind
of pattern-matching used by computer
scientists in automating logical
inference. (It has no connection with
the phrase "unifying concepts" that I
used above.) In essence, it's gap-filling.
You've got a pattern with gaps in it,
and you try to match it against some
data in a way that fills the
gaps as well as possible. Some
computer scientists, Goguen
included, have worked out categorical
formulations of this: see
"A categorical unification
algorithm" [Rydeheard and Burstall 1986];
*Computational
Category Theory* [Rydeheard and Burstall 1988];
and
"What is Unification? A
Categorical View of Substitution, Equation and Solution" [Goguen
1991].
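To show the gap-filling in action, here is a toy first-order unifier in Python. The term representation (tuples for compound terms, '?'-prefixed strings for variables) is my own invention for the sketch, and the occurs check is omitted for brevity.

```python
# A toy first-order unification ("gap-filling") sketch. Variables are
# strings starting with '?'; compound terms are tuples (functor, args...);
# other strings are constants. No occurs check, for brevity.

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    # Follow variable bindings until we reach a non-variable
    # or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Return a substitution making t1 and t2 equal, or None."""
    subst = {} if subst is None else subst
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return {**subst, t1: t2}
    if is_var(t2):
        return {**subst, t2: t1}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# Fill the gaps in plus(?x, succ(?y)) against plus(0, succ(succ(0))):
print(unify(('plus', '?x', ('succ', '?y')),
            ('plus', '0', ('succ', ('succ', '0')))))
# {'?x': '0', '?y': ('succ', '0')}
```

The categorical formulations cited above treat such substitutions as morphisms, and the most general unifier as a universal construction; this sketch shows only the algorithmic surface.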

Many miles from that work, other researchers have invented some nifty implementations of analogical reasoning, collectively known as "Holographic Reduced Representations". These use operations on high-dimensional vectors: see for example "Dual Role of Analogy in the Design of a Cognitive Computer" [Kanerva 1998].
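As an illustration of the vector operations involved, here is a sketch of circular-convolution binding in the style of Tony Plate's Holographic Reduced Representations. The dimension, seeding, and exact-deconvolution unbinding are my choices for the demo, not taken from Kanerva's paper.

```python
import numpy as np

# Holographic Reduced Representations (Plate's scheme): roles and
# fillers are random high-dimensional vectors, bound together by
# circular convolution, which we compute via the FFT.

rng = np.random.default_rng(0)
n = 1024
role = rng.normal(0, 1 / np.sqrt(n), n)
filler = rng.normal(0, 1 / np.sqrt(n), n)

def bind(a, b):
    # Circular convolution of a and b.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, b):
    # Exact deconvolution. (Plate's approximate inverse uses an
    # involution of b instead; this exact version is cleaner for a demo.)
    return np.real(np.fft.ifft(np.fft.fft(c) / np.fft.fft(b)))

trace = bind(role, filler)        # a "memory trace" encoding the pair
recovered = unbind(trace, role)   # querying the trace with the role
print(np.allclose(recovered, filler))  # True
```

Note how different this looks from symbolic unification, even though both are, in some sense, filling a gap in a structured representation.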

Such operations appear very different mathematically from logical inference. But at a deeper level, they may be closely related, in that they too are doing a kind of unification. Perhaps applying the categorical formulation mentioned above would show whether or not there is a connection, and if so, what. It is interesting to note that in *Categorical Unification* [Galán 2004], María Ángeles Galán García extends the categorical treatment of unification to describe fuzzy matching of terms.

Category theory has an operation called "adjunction", a key part of the subject. I have suggested that generalisation in machine learning can be viewed as an adjunction, and that this might apply to fields as disparate as neural nets, curve-fitting, and logical induction: see "Generalisation is an adjunction" [Ireson-Paine 2000].

The essence of this is that when creating a concept to describe several examples, you want to be able to infer each example back from the concept, while inferring as little else as possible. In other words, the concept should contain as much of the examples as possible, but as little of anything else. However, the language you have for describing concepts might not be able to express this concept precisely. In that case, the generalisation has to do the best it can, getting as close to the concept as possible. This idea of doing the best you can, but as little else as possible, is what adjunctions capture.
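One very concrete instance of this "as much of the examples, as little else" idea is least general generalisation (anti-unification). A toy sketch follows; it is my own, not the adjunction formalism of the paper cited above.

```python
# Least general generalisation (anti-unification) of two terms: keep
# everything the examples share, and introduce a variable only where
# they differ. Compound terms are tuples (functor, args...); variables
# are strings starting with '?'.

def lgg(t1, t2, fresh=None):
    fresh = {} if fresh is None else fresh
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg(a, b, fresh)
                                for a, b in zip(t1[1:], t2[1:]))
    # Differing subterms: reuse the same variable for the same pair,
    # so lgg(f(a,a), f(b,b)) = f(?0, ?0), not f(?0, ?1).
    if (t1, t2) not in fresh:
        fresh[(t1, t2)] = '?%d' % len(fresh)
    return fresh[(t1, t2)]

print(lgg(('f', 'a', 'b'), ('f', 'a', 'c')))  # ('f', 'a', '?0')
print(lgg(('f', 'a', 'a'), ('f', 'b', 'b')))  # ('f', '?0', '?0')
```

The result contains everything common to both examples, and a variable, and nothing else: a small-scale version of what the adjunction is meant to describe in general.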

I also suspect that the mathematical device called a "sheaf" could unify much of machine learning. Sheaves originated in topology, and are used to weld "local" representations of small parts of topological spaces into a "global" representation of the entire space. (They have other uses too.)

I'll justify this by a historical argument! Sheaves replace an older way of representing the local-to-global transition, known as "charts and atlases". The name means just what it suggests: an atlas represents the entire world as a collection of overlapping charts, each showing part of it.
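The sheaf condition can be shown in miniature: local data on overlapping pieces glues to global data exactly when the pieces agree on the overlaps. Here is a toy Python sketch; representing sections as plain dicts is my simplification, and real sheaves carry far more structure.

```python
# The sheaf condition, in miniature. "Charts" are dicts from points to
# values; they glue to a single global "atlas" if and only if they
# agree wherever they overlap.

def glue(sections):
    """sections: list of dicts (point -> value). Returns the glued
    global section, or None if two charts disagree on an overlap."""
    glued = {}
    for s in sections:
        for point, value in s.items():
            if point in glued and glued[point] != value:
                return None  # the sheaf condition fails here
            glued[point] = value
    return glued

# Two overlapping charts that agree on the overlap {2}:
print(glue([{1: 'a', 2: 'b'}, {2: 'b', 3: 'c'}]))  # {1: 'a', 2: 'b', 3: 'c'}
# Charts that disagree on the overlap cannot be glued:
print(glue([{1: 'a', 2: 'b'}, {2: 'X'}]))          # None
```

If machine learning's local models could be cast as sections in this sense, the sheaf machinery would say exactly when, and how, they combine into a global model.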

I suspect that because of the way category theory is usually taught, some who do know it get lost in the formalisms — the syntax — and haven't developed the "intuitive" mental representations that would enable them to apply it more easily. Perhaps, however, by crafting appropriate visual and verbal metaphors, we could teach it in a way that hooks into the brain's existing visual and kinaesthetic knowledge, thus making it easier to use. This is why I quoted Moravec in my Introduction. In the following sections, I say more about this and other suggestions.

[Crafting metaphors. Metaphors to engage with previous knowledge (possibly everyday knowledge, such as that of pipes and machines). Metaphors to run efficiently on the brain's visual and kinaesthetic "virtual machine", using our primitive abilities to walk, hunt, explore, etc. Casting knowledge into stories: see "Tell Us a Story" in "How to Teach Stuff" [Baez 2006]. Whether the metaphors are "primitive" ones such as exploring paths, or not, using notions of quality of mapping (semiotic morphisms) to ensure that the metaphor is as faithful as possible.]

My justification for this proposal is largely anecdotal. For example: the mathematician Paul Erdős claimed that flapping his hands as he walked helped him think; I recall an Oxford topology lecturer who would talk about "upstairs" for the codomain of functions and "downstairs" for their domains, or "upstairs" for topological spaces and "downstairs" for their open sets; a group-theorist friend of mine working in Essen said that the "earthy" German words for group operations helped him think by giving him a gut feel that he was "putting" and "turning" one group inside another. He gave as an example the German word "Darstellung", which means (I think) "representation" (technical term for a way of characterising groups), but which literally means "there-putting".

In support, I note that others are trying the same with other areas of mathematics: see for example "Teaching To See Like a Mathematician" [Whitely 2002]. To quote:

The van Hiele model of learning geometry presents visuals as the basic, essential experience that all students must move through. Along with the kinesthetic, it is the layer which students must have access to in order not to become lost. The standard version is that students (and mathematicians) move beyond this to higher, non-visual levels of reasoning. What actually happens in my experience (see also de Villiers) is that the visual just ceases to be noticed since it is not recorded or expressed in words. It is, in fact, fully capable of sustaining the practice of mathematics at the highest levels. The visual is an important route for 'mathematical intuition' — pointing to something we currently do not teach.

Mathematicians describe categories in terms of objects and arrows. The objects, usually depicted as dots, represent mathematical entities, or perhaps computational objects such as Java classes, or perhaps "real-world" entities. The arrows, usually depicted as arrows, represent transformations or relations between objects.

Now, "well-behaved" categories contain an "initial object". This has one, and only one, arrow leading to every object. Here's a possible verbal metaphor to describe the fact that an arrow leads from it to every object:

Imagine the objects to be made of Plasticine, with the arrows sticking into them. Pull the initial object. Then all the other objects will move with it.

By involving our primitive notions of grabbing and pulling, this seems to make more impact, at least to me, than the bald statement that an arrow leads from the initial object to every object.
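The Plasticine picture can even be made executable. Here is a sketch, with my own simplification that arrows are recorded only as source/target pairs, with composition taken on trust, that finds the initial objects of a small category drawn as dots and arrows:

```python
# A finite category as dots and arrows. We record each arrow as a
# (source, target) pair, identities included, and leave composition
# implicit -- a simplification. An initial object is one with exactly
# one arrow to every object, itself included.

from collections import Counter

def initial_objects(objects, arrows):
    """arrows: list of (source, target) pairs, identities included."""
    found = []
    for x in objects:
        outgoing = Counter(tgt for src, tgt in arrows if src == x)
        if all(outgoing[y] == 1 for y in objects):
            found.append(x)
    return found

# A three-object category: 0 is initial, with one arrow to each object.
objects = ['0', 'A', 'B']
arrows = [('0', '0'), ('0', 'A'), ('0', 'B'),  # from the initial object
          ('A', 'A'), ('B', 'B'),              # identities
          ('A', 'B')]
print(initial_objects(objects, arrows))  # ['0']
```

Pulling the dot '0' drags every other dot with it, because every other dot hangs off exactly one of its arrows.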

Along different lines, I want to invite the student to think of "standpoints" rather than "objects". Instead of viewing a category from above, depicted as a collection of dots joined with arrows, imagine that the dots depicting objects have been inflated to discs the size of Bonn Square, and that you are standing on one. Some categorists argue that the only important thing about an object is the relations it takes part in, and that these are what define it. Imagining oneself standing on an object, one's view dominated by the arrows entering and leaving it, might encourage such a relativistic attitude. See for example this quote by David Corfield [Corfield 2003], which I have taken from John Baez's "Quantum Quandaries: A Category-Theoretic Perspective" [Baez 2004]:

Category theory allows you to work on structures without the need first to pulverise them into set theoretic dust. To give an example from the field of architecture, when studying Notre Dame cathedral in Paris, you try to understand how the building relates to other cathedrals of the day, and then to earlier and later cathedrals, and other kinds of ecclesiastical building. What you don't do is begin by imagining it reduced to a pile of mineral fragments.

The above seems closely related to one of the objectives of Modelling4All, a project to design software by which non-programmers can collaboratively build and analyse computer models. To quote from the project's "Welcome to Modelling4All project wiki" page [Modelling4All]:

The third objective is to provide a means of experiencing the execution of a model as an individual or observer inside the model. We want model makers and users to have an alternative to the "god's eye" view of a model. Being inside a model, perhaps having a game-like goal of influencing the way it unfolds, can be a powerful way of understanding. Running a model inside a shared virtual space, such as Second Life, provides the opportunity for social learning and understanding. It can also be a strong motivation to build models.

Here, I shall just mention Toby Bartels's "Quantum Gravity Seminar" [Bartels 2000], and his depiction of arrows in a category as "pipes with machines in".

One thing I want to do here is to make students think of the parts of a category as "graspable". In many categories, an arrow represents a function. Now, functions can be applied to things: they "do something". So let's depict the arrow as something you can zoom in on, pick up, and click on to apply: as a thing that is graspable and manipulable, but that also "does things".

I suspect this may be useful when teaching about categorical concepts such as "functors" and "natural transformations". To a category theorist, these are functions, but functions with a complex and subtle internal structure. My hunch is that making them seem like objects that can be picked up and handled as a whole may help prevent students getting lost in details of their internal structure. I'm not sure why I believe this; perhaps (in my mind, at least) the action of picking up and grasping the mouse becomes associated with the idea of picking up and grasping whatever the mouse is pointing to on screen. This makes the latter seem more real, more "object-like".

This is related, I think, to Rydeheard and Burstall's approach in *Computational Category Theory* [Rydeheard and Burstall 1988]. This makes the categorical constructions it describes "concrete", by implementing them as functions. The fact that these functions run and "produce something" somehow helps make them real to us.

Perhaps so does the fact that they are implemented in data structures. Good programmers rapidly develop a "feel" for data structures as things "out there in the world", like real-world objects. So perhaps embodying mathematical entities as data structures would in turn transfer this feeling of reality to them.
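In that spirit, though in Python rather than the ML of Rydeheard and Burstall's book, here is one categorical construction made "concrete" as a function that runs: the product of two finite sets, with its projections and its mediating morphism.

```python
# The product of two finite sets as a construction that "produces
# something": the set of pairs, the two projections, and a mediating
# morphism <f, g> built from any competing pair of maps out of some
# third set C.

def product(A, B):
    prod = {(a, b) for a in A for b in B}
    fst = lambda p: p[0]  # projection to A
    snd = lambda p: p[1]  # projection to B
    def mediate(f, g):
        # The unique map <f, g> : C -> A x B
        # satisfying fst(<f, g>(c)) = f(c) and snd(<f, g>(c)) = g(c).
        return lambda c: (f(c), g(c))
    return prod, fst, snd, mediate

A, B = {1, 2}, {'x', 'y'}
prod, fst, snd, mediate = product(A, B)
h = mediate(lambda c: 1, lambda c: 'y')  # some pair of maps from a set C
print(fst(h('anything')), snd(h('anything')))  # 1 y
```

The universal property becomes a pair of equations you can actually test by running the projections on the mediating map's output.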

Category theory is a hugely recursive and self-referential subject, in which an entity can be represented in many different ways. It is easy to get confused when changing representation. Perhaps by displaying such changes visually, we can help the student progress from a stage where such changes require effortful conscious thought to one where they are largely automatic, requiring little conscious monitoring.

We might say that our goal here is to increase the student's n-category number, following Dan Freed's humorous definition that your n-category number is the largest n such that you can think about n-categories for a half hour without getting a splitting headache. (See "This Week's Finds in Mathematical Physics (Week 255)" [Baez 2007b]).

Here, I'm thinking of such things as the advice given to language learners: when you learn a new word, always say it. The "muscle memory" thus learnt is an additional source of redundancy, and may help if you forget the word's sound or spelling. I've been learning Chinese, and I find that writing the characters over and over again definitely seems to "bed in" some kind of motion memory which operates even if I forget what a character looks like.

So: can we help mathematicians visualise categories by asking them to imagine objects and arrows laid out in space, and to gesture and point at these? Would this store some of the information in kinaesthetic memory, reducing the load on other areas of cognition?

In this section, I've just got a mixed bag of suggestions on how to improve teaching. Many were inspired by interesting references. Most apply to any branch of maths, and indeed any subject.

It is important to know the history of a subject. Why did researchers emphasise particular concepts? What tasks did they intend them to perform? As I mentioned in my introduction, category theory was invented to unify different treatments of homology theory, a branch of algebraic topology [Marquis 2004; Natural transformation]. To quote from the latter reference:

Saunders Mac Lane, one of the founders of category theory, is said to have remarked, "I didn't invent categories to study functors; I invented them to study natural transformations." Just as the study of groups is not complete without a study of homomorphisms, so the study of categories is not complete without the study of functors. The reason for Mac Lane's comment is that the study of functors is itself not complete without the study of natural transformations.

The context of Mac Lane's remark was the axiomatic theory of homology. Different ways of constructing homology could be shown to coincide: for example in the case of a simplicial complex the groups defined directly, and those of the singular theory, would be isomorphic. What cannot easily be expressed without the language of natural transformations is how homology groups are compatible with morphisms between objects, and how two equivalent homology theories not only have the same homology groups, but also the same morphisms between those groups.

Unfortunately, homology theory is probably unknown to all but post-graduate mathematicians. This makes it very difficult to explain to anyone else the original point of category theory. To do so, we need to construct examples that have the same structure as those mentioned in the quote, but that are more intelligible to those who don't know homology theory.

Note: could perhaps use the nice explanation in the section entitled "Eilenberg–Mac Lane (1945)" in "A Prehistory of n-Categorical Physics" [Baez and Lauda 2008].

Mathematicians sometimes say that category theory is "abstract nonsense", meaning that it is a formalism that does nothing useful: an overemphasis on abstraction for its own sake. For an example of this view, see this review of a book by B. R. Tennison on sheaf theory [Hoobler 1977]. Can we counter such views? How? Perhaps by pointing out category theory's power as a unifying mechanism: for example, this very brief obituary of Max Kelly, www-groups.dcs.st-and.ac.uk/~history/Biographies/Kelly_Max.html [O'Connor and Robertson 2007]:

Among the honours he received for his major contributions are election to a fellowship in the Australian Academy of Science and a Centenary Medal "for services to Australian society and science in mathematics" received from the Australian Government in 2001. In the interview reported in [2], Kelly tried to explain the importance of category theory in a way that non-mathematicians would understand. He said: "Category theory sheds light on the relations between various aspects of mathematics and in doing so it brings unity and simplicity. It lights the way for the next lot of advances."

This is a tautology. But I want to emphasise that this is not easy. See this "Interview of John Baez and Urs Schreiber" [Bruce Bartlett, Urs Schreiber and John Baez 2007]:

Urs: But maybe I'm just not explaining it well. That's the problem.

John: Well, you're being a bit harsh on yourself. I've spent a whole bunch of time trying to figure out how to explain things. And you know, it takes a lot of work to convey ideas in a way that people can easily pick them up. If someone wants to learn about something, they can go after it, and find out about it. But if they're just sitting there, and you tell them something that they're not expecting, it will almost always just bounce right back off them, unless you work really hard to get them interested.

Bruce: Okay. So you're an expert, in the art of explaining things?

John: I'm trying to become an expert in that. But I don't do too much of it on the n-Category Café, actually! When I was at sci.physics.research I put a lot of work into making my posts very fun: pretending to be fictional characters, telling stories, and things like that. That's because I was trying very hard to get as many people involved as possible.

This is even more tautological. But I definitely want to point the reader at "How to Teach Stuff" [Baez 2006]. Points I particularly liked: Teaching is Like Acting; Tell Us a Story.

I also want to point the reader at the advice on lecturing in "Advice for the Young Scientist" [Baez 2007a].

Unless you enjoy playing with the ideas you're trying to explain, would you come up with nice explanations such as this one of gerbes [Schreiber 2007]?

No. You need the attitude expressed in "Re: Gerbes in The Guardian" [Trimble 2007]:

See, John, by making mathematics seem accessible and fun, you're ruining everything.

The idea is supposed to be that math is 'cool', in a way - at least we use a bunch of cool-sounding and mysterious words - but please let's keep it at arm's length, because you have to be a genius (or weirdo) or something to actually understand it. It's hard to find the idea in mass culture that anyone does math because it's fun.

I hope I can develop teaching aids that will explain how category theory can describe analogies (including jokes, seen as "twisted" analogies) and analogical problem solving; and how it can describe generalisations (as I noted above). Making the categorical descriptions of these "graspable" will, I believe, help in teaching. We can implement them using Rydeheard and Burstall's approach, and then depict the implementation as a collection of manipulable entities. Pick up a categorically-represented analogy; turn it over and over; rub your hand along the arrows poking out from the objects; lay parallel arrows side by side, and see how they perform parallel transformations, one on each term in the analogy.

Mathematicians acquire metaphors by accident. Instead, let's engineer them by design. I want to see category theory with force-feedback gloves, William Lawvere on an Xbox.

John Baez and James Dolan 2000. "From Finite Sets to Feynman Diagrams". To appear in *Mathematics Unlimited - 2001 and Beyond*, edited by Björn Engquist and Wilfried Schmid. arxiv.org/PS_cache/math/pdf/0004/0004133v1.pdf

John Baez 2004. "Quantum Quandaries: A Category-Theoretic Perspective". Published in *Structural Foundations of Quantum Gravity*, edited by Steven French, Dean Rickles and Juha Saatsi, Oxford U. Press, 2006. The page cited is math.ucr.edu/home/baez/quantum/node5.html

John Baez 2006. "How to Teach Stuff". math.ucr.edu/home/baez/teaching.html

John Baez 2007a. "Advice for the Young Scientist". math.ucr.edu/home/baez/advice.html

John Baez 2007b. "This Week's Finds in Mathematical Physics (Week 255)". math.ucr.edu/home/baez/week255.html

John Baez and Aaron Lauda 2008. "A prehistory of n-categorical physics". math.ucr.edu/home/baez/history.pdf

John Baez and Mike Stay 2008. "Physics, Topology, Logic and Computation: A Rosetta Stone". math.ucr.edu/home/baez/rosetta.pdf

Toby Bartels 2000. "Quantum Gravity Seminar Week 1, Track 1". math.ucr.edu/home/baez/qg-fall2000/qg1.1.html

Bruce Bartlett, Urs Schreiber and John Baez 2007. "Interview of John Baez and Urs Schreiber", January 13, 2007. math.ucr.edu/home/baez/interview2.html

Ronald Brown and Timothy Porter 2008. "Category Theory and Higher Dimensional Algebra: potential descriptive tools in neuroscience". arxiv.org/PS_cache/math/pdf/0306/0306223v1.pdf

Rod Burstall and Joseph Goguen 1980. "The semantics of Clear, a specification language". In *Abstract Software Specifications*, edited by D. Bjorner. LNCS, Volume 86, Pages 292-332. Springer, 1980.

Pierre Cartier 2001. "A Mad Day's Work: from Grothendieck to Connes and Kontsevich. The Evolution of Concepts of Space and Symmetry". Bulletin (New Series) of the American Mathematical Society, Volume 38, Number 4. www.msri.org/publications/books/sga/from_grothendieck.pdf

Casl. Main Web page for information about Casl, the Common Algebraic Specification Language. www.informatik.uni-bremen.de/cofi/wiki/index.php/CASL

David Corfield 2003. *Towards a Philosophy of Real Mathematics*, Cambridge University Press, 2003.

Greg Egan 2008. *Incandescence*. Publication details, synopsis, and related material are at gregegan.customer.netspace.net.au/INCANDESCENCE/Incandescence.html.

María Ángeles Galán García 2004. *Categorical Unification*. Ph.D. Thesis, Department of Computing Science, Umeå University. www.diva-portal.org/diva/getDocument?urn_nbn_se_umu_diva-245-1__fulltext.pdf

Joseph Goguen 1989. "What is Unification? A Categorical View of Substitution, Equation and Solution". In *Resolution of Equations in Algebraic Structures, Volume 1: Algebraic Techniques*, pp. 217-261. Edited by Maurice Nivat and Hassan Aït-Kaci, Academic Press, 1989. citeseer.ist.psu.edu/166243.html

Joseph Goguen 1991. "A Categorical Manifesto". *Mathematical Structures in Computer Science*, Vol. 1, No. 1 (1991), pp. 49-67. citeseer.ist.psu.edu/goguen91categorical.html

Joseph Goguen 1992. "Sheaf Semantics for Concurrent Interacting Objects". *Mathematical Structures in Computer Science*, Vol. 2, No. 2 (1992), pp. 159-191. citeseer.ist.psu.edu/goguen92sheaf.html

Joseph Goguen and D. Fox Harrell 2003. "Information Visualization and Semiotic Morphisms". www-cse.ucsd.edu/~goguen/papers/sm/vzln.html

Joseph Goguen 2005. "The OBJ Family". www.cs.ucsd.edu/~goguen/sys/obj.html

Joseph Goguen n.d. "Mathematical Models of Cognitive Space and Time". www.cs.ucsd.edu/~goguen/pps/taspm.pdf

Michael Healy 2000. "Category Theory Applied to Neural Modeling and Graphical Representations". Minor revision of Paper NN0648 in *Proceedings of the International Joint Conference on Neural Networks* (IJCNN 2000). citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.2635

Hoobler 1977. Book review of *Sheaf theory* by B. R. Tennison. *Bulletin of the American Mathematical Society*, Volume 83, Number 4, July 1977. projecteuclid.org/DPubS/Repository/1.0/Disseminate?handle=euclid.bams/1183538905&view=body&content-type=pdf_1

Pentti Kanerva 1998. "Dual Role of Analogy in the Design of a Cognitive Computer". In *Advances in Analogy Research: Integration of Theory and Data from the Cognitive, Computational, and Neural Sciences*. Workshop, Sofia, Bulgaria, July 17-20, 1998. faculty.cs.tamu.edu/choe/mirror/kanerva.ANALOGY98-kanerva.pdf

Marquis 2004. "Brief Historical Sketch" section in article on "Category Theory", Stanford Encyclopedia of Philosophy. www.seop.leeds.ac.uk/archives/sum2004/entries/category-theory/#Bri

Modelling4All. "Welcome to Modelling4All project wiki". Modelling4All wiki page. modelling4all.wikidot.com/

Hans Moravec 1988. *Mind Children: the Future of Robot and Human Intelligence*. Harvard University Press.

Natural Transformation. "Natural Transformation". Wikipedia page. en.wikipedia.org/wiki/Natural_transformation

J.J.O'Connor and E.F.Robertson 2007. Obituary for Gregory Maxwell Kelly, School of Mathematics and Statistics, University of St Andrews. www-groups.dcs.st-and.ac.uk/~history/Biographies/Kelly_Max.html

Jocelyn Ireson-Paine 2000. "Generalisation is an adjunction". www.j-paine.org/generalisation.html

Jocelyn Ireson-Paine 2005. "Excelsior: bringing the benefits of modularisation to Excel". In *Proceedings of EuSpRIG 2005*. www.j-paine.org/eusprig2005.html

Jocelyn Ireson-Paine 2008. "Less Excel, More Components: presentation to EuSpRIG 2008". www.j-paine.org/eusprig2008/index.html

Brian Rotman 2005. "Gesture in the Head: Mathematics and Mobility", Mathematics and Narrative conference, Mykonos, July 2005. www.thalesandfriends.org/en/papers/pdf/rotman_paper.pdf

David Rydeheard and Rod Burstall 1986. "A categorical unification algorithm". In *Proc. Category Theory and Computer Programming*, *Lecture Notes in Computer Science* 240, Springer, pp. 493-505.

David Rydeheard and Rod Burstall 1988. *Computational Category Theory*. Prentice-Hall. www.cs.man.ac.uk/~david/categories/book/book.pdf

Urs Schreiber 2007. "Re: Gerbes in The Guardian", "The n-Category Café" blog post, August 22, 2007 2:18 pm. golem.ph.utexas.edu/category/2007/08/gerbes_in_the_guardian.html#c011485

Todd Trimble 2007. "Re: Gerbes in The Guardian", "The n-Category Café" blog post, August 22, 2007 2:40 pm. golem.ph.utexas.edu/category/2007/08/gerbes_in_the_guardian.html#c011487

Walter Whitely 2002. "Teaching To See Like a Mathematician". www.math.yorku.ca/~whiteley/Teaching_to_see.pdf