I was fascinated by the link from elearningpost yesterday to a story about a new method of searching images. Previous attempts at image search have always relied on metadata. The utility of the results is at the mercy of the people who code the metadata.
A good example of this is the image search available at the William Blake Archive. The effort required to make a search like this useful is directly proportional to the amount of time and thought put into deciding just what constitutes a notable feature of the images. In Blake’s case, his use of rhetorical gesture does not map onto a neat continuum. If you want to find matches with particular features of Lavater’s physiognomy, for example, you could search for “head” or “face,” but then you’d still be faced with matching up hundreds of plates with hundreds of other plates. Perhaps Purdue’s new technology could be a huge leap forward here.
However, if you look at the photograph I posted yesterday, it seems clear to me that it has a “triangular” composition. Since a machine cannot easily discern the central points of interest in a photograph, it would probably fail to locate the visual triangle that a viewer’s eye constructs. Sophisticated algorithms might come close, but no triangle is to be found if you examine the image purely on the basis of contours or colors. The Purdue research is still exciting, though; it must have been a bear to make the pattern matching “fuzzy” enough to be useful.
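To give a sense of the kind of low-level matching a machine can do, here is a minimal sketch of one common content-based technique: comparing coarse color histograms. This is purely illustrative and not Purdue’s actual method; the function names, bin counts, and toy “images” (lists of RGB tuples) are my own assumptions.

```python
# Illustrative sketch of content-based image matching via color histograms.
# Not Purdue's method -- just one generic, "fuzzy" low-level feature.
from collections import Counter

def color_histogram(pixels, bins_per_channel=4):
    """Quantize each RGB channel into a few coarse bins and return
    a normalized histogram (bucket -> fraction of pixels)."""
    step = 256 // bins_per_channel
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = len(pixels)
    return {bucket: n / total for bucket, n in counts.items()}

def similarity(hist_a, hist_b):
    """Histogram intersection: 1.0 means identical color distributions,
    0.0 means no overlap at all."""
    return sum(min(hist_a.get(k, 0.0), hist_b.get(k, 0.0))
               for k in set(hist_a) | set(hist_b))

# Toy "images": two mostly-red ones should score high against each other,
# while red vs. blue should score near zero.
red   = [(200, 10, 10)] * 90 + [(30, 30, 30)] * 10
red2  = [(210, 20,  5)] * 85 + [(40, 40, 40)] * 15
blue  = [(10, 10, 200)] * 100

print(similarity(color_histogram(red), color_histogram(red2)))  # high
print(similarity(color_histogram(red), color_histogram(blue)))  # near zero
```

Note that this kind of measure is exactly why the triangle problem is hard: two photographs with similar color distributions score as “alike” even if their compositions are entirely different, and nothing in the histogram knows where a viewer’s eye will travel.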