TRECVID
is an international competition to
evaluate multimedia research. We
consider the problem of concept
detection, where each instance is associated
with several concepts.
This is a complicated task, as one must do
multimedia pre-processing and generate
various types of features. To focus on the
machine learning aspects, we rely on a
platform such as mediamill, which has done
all the pre-processing and provides baseline models.
We plan to investigate two issues:
- Training time: if one can improve
the training speed, then more settings
can be tried in order to
improve the performance.
We will first identify how
much time is needed for the existing
settings and find the bottleneck.
We will then look for possible
improvements.
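A minimal sketch of the timing measurement we have in mind, using synthetic data and scikit-learn's SVC as a stand-in for the libsvm-based training in mediamill (the feature dimension, concept count, and data sizes below are placeholders, not the real mediamill settings):

```python
import time
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_features, n_concepts = 500, 120, 3  # placeholders, not mediamill's sizes
X = rng.normal(size=(n_samples, n_features))

# One binary SVM per concept, as in the per-concept detection setup;
# record wall-clock training time for each to locate the bottleneck.
times = {}
for concept in range(n_concepts):
    y = rng.integers(0, 2, size=n_samples)  # random labels stand in for concept annotations
    start = time.perf_counter()
    SVC(kernel="rbf", C=1.0).fit(X, y)
    times[concept] = time.perf_counter() - start

for concept, t in times.items():
    print(f"concept {concept}: {t:.3f} s")
```

The per-concept breakdown is what lets us see whether a few concepts dominate the total training cost.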
- Performance on TRECVID 2005: the mediamill
set splits the 2005 training data into
70% for training and 30% for testing.
We will combine both for training and
predict on the 2005 testing set. The goal
is to see how the baseline
model of mediamill would rank in the
competition.
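A sketch of the intended pipeline: concatenate the 70%/30% parts into one training set, predict scores on a held-out test set, and score with average precision, the usual TRECVID ranking measure. The arrays here are random placeholders for the mediamill features and labels:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(1)
# Placeholder arrays standing in for mediamill's 70% train / 30% test split
X_70, y_70 = rng.normal(size=(70, 20)), rng.integers(0, 2, size=70)
X_30, y_30 = rng.normal(size=(30, 20)), rng.integers(0, 2, size=30)

# Combine both parts into one larger training set
X_train = np.vstack([X_70, X_30])
y_train = np.concatenate([y_70, y_30])

# Predict decision scores on a separate (placeholder) 2005 test set
X_test, y_test = rng.normal(size=(40, 20)), rng.integers(0, 2, size=40)
clf = SVC(kernel="rbf").fit(X_train, y_train)
scores = clf.decision_function(X_test)

print(f"average precision: {average_precision_score(y_test, scores):.3f}")
```

With random labels the resulting score is meaningless; the point is only the shape of the combine-train-predict-score loop.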
Week 1: introduction to the TRECVID problem, evaluation
criteria, and the mediamill framework.
The mediamill sets will be available
at ??, so you do not need to download them.
Weeks 2-3: replicate the results in mediamill's
ACM MM 2006 paper. Investigate the time
needed for finding parameters. Show the time
distribution across different concepts.
How much time can we save if we use
a dense version of libsvm?
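To get a rough feel for the dense-vs-sparse question, one can time the same training on dense and scipy-sparse versions of the same matrix. This uses scikit-learn's SVC (which wraps libsvm) rather than an actual dense-modified libsvm, so it only approximates the comparison we want:

```python
import time
import numpy as np
import scipy.sparse as sp
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X_dense = rng.normal(size=(400, 100))   # placeholder feature matrix
y = rng.integers(0, 2, size=400)
X_sparse = sp.csr_matrix(X_dense)       # same data in a sparse container

def train_time(X):
    """Wall-clock time to fit one RBF SVM on X."""
    start = time.perf_counter()
    SVC(kernel="rbf").fit(X, y)
    return time.perf_counter() - start

t_dense, t_sparse = train_time(X_dense), train_time(X_sparse)
print(f"dense: {t_dense:.3f} s, sparse: {t_sparse:.3f} s")
```

Since the mediamill features are dense vectors, storing them in a sparse format wastes both memory and kernel-evaluation time; the measurement above makes that overhead visible.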
Week 4: prepare the TRECVID 2005 testing data.
Discuss the difficulties in using it.
Week 5: analyze the results. Are the optimal
parameters for 101 classes suitable for
the case of 39 (or 10) classes?
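The question can be probed with a small per-concept grid search: tune (C, gamma) on one subset of concepts and check whether the selected values transfer to another subset. Again the data is a synthetic stand-in, and the grid values are placeholders rather than mediamill's actual search range:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 30))
y = rng.integers(0, 2, size=120)  # stand-in labels for one concept

# Placeholder grid; a real search would follow libsvm's usual log-scale grid
grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=3).fit(X, y)

print("best parameters:", search.best_params_)
# Comparing best_params_ across concept subsets (101 vs. 39 or 10 classes)
# would show whether one set of parameters transfers.
```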
Last modified: Tue Feb 13 01:40:55 CST 2007