Reranking Methods for Visual Search
Winston H. Hsu, Lyndon Kennedy, and Shih-Fu Chang
We propose two novel reranking processes for image and video search, which automatically reorder the results of an initial text-only search based on visual features and content similarity. Both approaches exploit the recurrent patterns, at different granularities, commonly observed in large-scale image and video databases. The first, information bottleneck (IB) reranking, discovers salient visual patterns, favoring images or videos that recur frequently and rank highly in the initial search. The second, context reranking, uses the multimodal similarity between semantic video documents (e.g., stories) to improve the initial text query results through a random walk formulation. Evaluation on the TRECVID 2003-2005 data sets shows significant improvement over the text search baseline: IB reranking yields a 23% relative gain in Mean Average Precision, and context reranking improves performance by up to 32% at the story level. The proposed methods require no additional input from users (e.g., example images) and no complex search models for special queries (e.g., named person search), yet their performance is competitive with such alternative approaches.
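To illustrate the random-walk idea behind context reranking, the following is a minimal sketch, not the authors' exact formulation: it assumes a hypothetical pairwise multimodal `similarity` matrix between documents and the `text_scores` from the initial text search, and propagates relevance over the similarity graph with a restart toward the text scores (in the style of personalized PageRank).

```python
import numpy as np

def context_rerank(similarity, text_scores, alpha=0.85, tol=1e-9, max_iter=1000):
    """Rerank documents via a random walk with restart over a similarity graph.

    similarity  : (n, n) nonnegative pairwise similarity matrix (assumed input)
    text_scores : (n,) initial text-search relevance scores (assumed input)
    alpha       : probability of following a similarity edge vs. restarting
    Returns the stationary relevance scores; sort descending to rerank.
    """
    # Row-normalize similarities into transition probabilities.
    row_sums = similarity.sum(axis=1, keepdims=True)
    P = similarity / np.where(row_sums == 0, 1, row_sums)
    # Normalize text scores into a restart (personalization) vector.
    v = text_scores / text_scores.sum()
    x = v.copy()
    for _ in range(max_iter):
        # With prob. alpha walk along similarity edges; else restart at v.
        x_new = alpha * P.T @ x + (1 - alpha) * v
        if np.abs(x_new - x).sum() < tol:
            break
        x = x_new
    return x
```

In this toy setting, a document with a low initial text score but strong similarity to top-ranked documents receives a boosted stationary score, which is the intuition behind reranking by context.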
[IEEE Multimedia Magazine, July-September, 2007]