We have been working on semantic content analysis for media such as photos
and videos. Such analysis enables more efficient and effective utilization
of massive media collections, for example, in search, organization,
summarization, presentation, and retrieval.
We have been developing algorithms that automatically segment a home video
into event-based segments.
As a first attempt, we focused on church wedding videos. The initial
results were published in MIR 2007 in the paper "Semantic-Event Based
Analysis and Segmentation of Wedding Ceremony Videos."
A more complete version of this work was published as an IEEE TCSVT article in November 2008.
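The papers above describe the actual method; purely as an illustrative sketch (the event labels and the simple run-length grouping below are hypothetical, not the published algorithm), event-based segmentation can be reduced to grouping consecutive shots that share a predicted event label:

```python
# Hypothetical sketch: group consecutive shots with the same
# predicted event label into (label, start, end) segments,
# with end exclusive. Event labels are illustrative only.

def segment_by_event(shot_labels):
    """Run-length group a list of per-shot event labels."""
    segments = []
    start = 0
    for i in range(1, len(shot_labels) + 1):
        # Close the current run at the end of the list or on a label change.
        if i == len(shot_labels) or shot_labels[i] != shot_labels[start]:
            segments.append((shot_labels[start], start, i))
            start = i
    return segments

labels = ["procession", "procession", "vows", "vows", "vows", "recession"]
print(segment_by_event(labels))
# [('procession', 0, 2), ('vows', 2, 5), ('recession', 5, 6)]
```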
Recently, we have been collecting a benchmark of Region-of-Interest (ROI)
annotations for images using a collaborative game.
With the collected benchmark, we can compare existing ROI algorithms
quantitatively. The preliminary results were published in
a CVPR 2009 paper.
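The CVPR paper defines the actual benchmark and evaluation protocol; as a generic, hypothetical illustration (not necessarily the metric used in the paper), predicted ROI boxes can be scored against human-annotated ground truth with intersection-over-union:

```python
# Illustrative sketch only: compare an ROI detector's boxes to a
# human-annotated benchmark using intersection-over-union (IoU).
# Boxes are axis-aligned (x1, y1, x2, y2); names are assumptions.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (clamped to zero width/height if disjoint).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def mean_iou(predictions, ground_truth):
    """Average IoU of a detector's ROIs over a benchmark set."""
    scores = [iou(p, g) for p, g in zip(predictions, ground_truth)]
    return sum(scores) / len(scores)
```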
We have also developed methods for learning a landmark ontology from geotagged
Flickr images and Wikipedia articles. The resulting ontology could be
used in applications such as automatic tag suggestion and content-relevant
The paper was published in MMM 2010.
- Learning Landmarks by Exploiting Social Media, Yu-Ting Hsieh, Tien-Jung Chuang, Yin Wang, MMM 2010
- Benchmark for Region of Interest Detection Algorithms, IEEE CVPR 2009
- Semantic Analysis for Automatic Event Recognition and Segmentation of Wedding Ceremony Videos, IEEE Transactions on Circuits and Systems for Video Technology, November 2008
- Semantic-Event Based Analysis and Segmentation of Wedding Ceremony Videos, Proceedings of the 9th ACM SIGMM International Workshop on Multimedia Information Retrieval (MIR) 2007
This research is supported by:
cyy -a-t- csie.ntu.edu.tw