As you might or might not know, the so-called semantic web is an idea for stringing together "resources" online using URLs. The notion is that a URL represents a unique thing, which might or might not exist. For instance:
http://somewebjoint.com/mychicken
might reference an actual chicken that I own. By associating that chicken URL with other bits of grammar, I can start to assert various facts about the chicken. For instance, I might have another resource which is the notion of "color" and another which is a specific color, like "brown".
By stringing these urls together, I can say:
http://somewebjoint.com/mychicken http://someonesgrammar/color http://someonesgrammar/brown
or "mychicken is brown". Such subject–predicate–object statements are the triples of the Resource Description Framework, or RDF, and apparently it's been proven that you can assert anything using them.
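The triple idea can be sketched in a few lines of plain Python, with no RDF library at all. This is just an illustration of the data model, using the made-up URLs from the example above:

```python
# A minimal sketch of the RDF triple model using plain Python tuples.
# These URLs are the hypothetical ones from the chicken example.
MYCHICKEN = "http://somewebjoint.com/mychicken"
COLOR = "http://someonesgrammar/color"
BROWN = "http://someonesgrammar/brown"

# An RDF statement is a (subject, predicate, object) triple;
# a "graph" is simply a set of such triples.
triples = {
    (MYCHICKEN, COLOR, BROWN),
}

# Querying the graph: what color is my chicken?
colors = [o for (s, p, o) in triples if s == MYCHICKEN and p == COLOR]
print(colors)
```

A real system would use an RDF store and a query language like SPARQL, but the underlying shape of the data is exactly this: a bag of three-part statements whose parts are URLs.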
One branch of the semantic web is concerned with the creation of ontologies, systems of assertions and logic which allow one to make very broad statements, generalizations, and even inferences about things. For instance, there are notions of "the same" and "different." If I know that a bird has wings, that a chicken is a bird, and that a sparrow is a bird, then I can infer that chickens and sparrows both have wings, even though a sparrow can fly and a chicken can't. There are concepts for defining properties of a chicken and constraints on those properties (a chicken has legs, but no more than two).
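The wings inference above can be sketched as a toy reasoner: walk up an is-a hierarchy and collect inherited properties. This is not a real OWL reasoner, and the class and property names are made up for illustration:

```python
# Toy inference over an is-a hierarchy (not a real ontology reasoner).
# Facts: chickens and sparrows are birds; birds have wings.
is_a = {"chicken": "bird", "sparrow": "bird"}
has_property = {"bird": {"wings"}}

def properties_of(thing):
    """Collect properties asserted on a thing and inherited up its is-a chain."""
    props = set()
    while thing is not None:
        props |= has_property.get(thing, set())
        thing = is_a.get(thing)  # climb to the parent class, if any
    return props

# Both inherit "wings" from "bird" without it being asserted on them directly.
print(properties_of("chicken"))  # {'wings'}
print(properties_of("sparrow"))  # {'wings'}
```

The "a sparrow can fly but a chicken can't" distinction would be handled differently: flight would be asserted (or denied) on the subclass itself, overriding nothing, since nothing about flight is asserted on "bird" here.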
One might spend a great deal of time nailing down and defining everything about a chicken. Perhaps someone else has defined a variety of facts about birds in general, and your chicken ontology might reference this bird ontology and vice versa. What's fancy about this is that by parsing these ontologies and writing software that uses them, one can actually begin to generalize the solving of real-world problems.
Of course, this all depends upon the ontologies being designed correctly. A simple task, no doubt. To make a long story short, what I find funny about the semantic web is that it reminds me very much of the deconstructionism about which my art school was so aflutter back in the late '80s and early '90s. In deconstruction (a French invention, arising out of the radicalized student revolts in 1960s Paris), the idea is that language is the creator of all meaning. Since language is arbitrary, and without any original, first meaning (one has to throw out God or any kind of divine inspiration), meaning is a mere shell game. Through a number of (ponderous) intellectual gymnastics, one can prove that anything is actually nothing at all.
A good example of the arbitrariness of language is to do the kid's exercise of looking up a word in the dictionary and then looking up each word in the definition. One repeats for each of those definitions, ad nauseam. The point of this is that nothing defines language except other language. It's a chicken-and-egg problem, except it can be proven that neither "chicken" nor "egg" exists except as words. In fact, there is no problem at all.
The deconstructivists have often been ridiculed for trying to prove that verifiable parts of reality are mere linguistic and societal constructions. One scientist successfully made fun of a deconstructivist journal by getting it to publish an apparently serious article proving (through their own logic) that science did not exist (I might be getting that part slightly wrong; I'll get back to you).
I guess what I'm getting at is this: ontologies rest on nothing but facts asserted elsewhere. Because every part of the semantic web points to every other part, the whole thing is inherently self-referential. Is this bad? I don't think so. It's just curious and ultimately inevitable, I suppose. I find it fitting that this relatively shiny new technology and the obscure criticism of my artistic education have collided in such a way.