Harvesting Image Databases from the Web
The objective of this work is to automatically generate a
large number of images for a specified object class. A multimodal approach
employing text, metadata, and visual features is used to gather many
high-quality images from the Web. Candidate images are obtained by a text-based
Web search querying on the object identifier (e.g., the word penguin). The
Webpages and the images they contain are downloaded. The task is then to remove
irrelevant images and rerank the remainder. First, the images are reranked
based on the text surrounding the image and metadata features. A number of
methods are compared for this reranking. Second, the top-ranked images are used
as (noisy) training data and an SVM visual classifier is learned to improve the
ranking further. We investigate the sensitivity of the cross-validation
procedure to this noisy training data. The principal novelty of the overall
method is in combining text/metadata and visual features in order to achieve a
completely automatic ranking of the images. Examples are given for a selection
of animals, vehicles, and other classes, totaling 18 classes. The results are
assessed by precision/recall curves on ground-truth annotated data and by
comparison to previous approaches, including those of Berg and Forsyth [5] and
Fergus et al.
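
As a rough illustration of the two-stage ranking described above, the sketch below first ranks candidates by a text/metadata score, treats the top-ranked images as noisy positives, trains a linear SVM on visual features against a set of background images, and reranks all candidates by the classifier score. The variable names (text_scores, visual_feats, background_feats) and the use of scikit-learn's LinearSVC are assumptions made for illustration, not the authors' implementation.

import numpy as np
from sklearn.svm import LinearSVC

def rerank_with_noisy_svm(text_scores, visual_feats, background_feats, top_k=100):
    # Stage 1: rank candidate images by their text/metadata score (best first).
    order = np.argsort(-text_scores)
    # Stage 2: take the top-ranked images as (noisy) positive training data.
    positives = visual_feats[order[:top_k]]
    X = np.vstack([positives, background_feats])
    y = np.concatenate([np.ones(len(positives)), np.zeros(len(background_feats))])
    # Train a linear SVM visual classifier and rerank all candidates by its score.
    clf = LinearSVC(C=1.0)
    clf.fit(X, y)
    visual_scores = clf.decision_function(visual_feats)
    return np.argsort(-visual_scores)  # candidate indices, best first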
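
The resulting ranking can then be scored against ground-truth annotations with a precision/recall curve, for example along the following lines (again a hedged sketch; the helper name and the use of scikit-learn metrics are illustrative).

from sklearn.metrics import precision_recall_curve, average_precision_score

def evaluate_ranking(scores, is_relevant):
    # scores: per-image ranking scores; is_relevant: 0/1 ground-truth labels.
    precision, recall, _ = precision_recall_curve(is_relevant, scores)
    return precision, recall, average_precision_score(is_relevant, scores)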