Segmentation and Saliency Detection with Fewer Labels
Overview
Image segmentation and saliency detection are core problems in computer vision. Although deep learning has made significant progress on both, the most successful methods require large amounts of labeled training data.
Collecting these labels takes considerable time and effort. To reduce this cost, we propose several methods that require fewer labels.
- Image segmentation.
In our IJCAI 2018 paper, we propose a CNN-based method for unsupervised object co-segmentation.
The method comprises two collaborative CNN modules, a feature extractor and a co-attention map generator.
The method achieves superior results, even outperforming state-of-the-art supervised methods.
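The co-attention idea can be illustrated with a minimal sketch. The function below is our illustration, not the paper's exact generator: each spatial position of an image is scored by its similarity to an average feature pooled from the other images in the set, so regions of the common object, which recur across the set, receive high attention.

```python
import numpy as np

def co_attention_maps(features):
    """Illustrative co-attention sketch (assumption, not the paper's module).

    `features` is a list of (H, W, D) feature maps, e.g. produced by a
    shared CNN feature extractor. Each position is scored against a
    prototype feature pooled from the *other* images in the set.
    """
    maps = []
    for i, f in enumerate(features):
        # prototype: mean feature over all other images in the set
        others = [g.reshape(-1, g.shape[-1]).mean(axis=0)
                  for j, g in enumerate(features) if j != i]
        proto = np.mean(others, axis=0)               # shape (D,)
        sim = f @ proto                               # (H, W) similarity scores
        sim = (sim - sim.min()) / (sim.max() - sim.min() + 1e-8)
        maps.append(sim)                              # attention map in [0, 1]
    return maps
```

In the actual method, the two modules are trained collaboratively, whereas this sketch only shows why cross-image feature agreement highlights the common object.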
In CVPR 2019, we propose a new task called instance co-segmentation. Given a set of images jointly covering object instances of a specific category, instance co-segmentation aims to identify all of these instances and segment each of them, i.e., generating one mask for each instance.
In this setting, an instance segmentation model of a class can be learned from the given set of images covering the class.
We also propose DeepCO3, the first solution to this problem.
Our NeurIPS 2019 paper proposes a weakly supervised instance segmentation method whose training data carry only tight bounding box annotations.
The major difficulty lies in the uncertain figure-ground separation within each bounding box since there is no supervisory signal about it.
We address this difficulty by formulating the problem as a multiple instance learning (MIL) task, generating positive and negative bags from the sweeping lines of each bounding box.
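The bag construction can be sketched as follows. This is an illustrative reading of the tightness prior, with hypothetical names: because the box is tight, every sweeping line (row or column) crossing it must contain at least one object pixel, so it forms a positive bag; a line lying entirely outside the box contains no pixel of this instance and forms a negative bag.

```python
def mil_bags(box, height, width):
    """Illustrative MIL bag construction from one tight bounding box.

    box: (x1, y1, x2, y2), inclusive pixel coordinates.
    Returns lists of positive and negative bags, each bag a list of
    (row, col) pixel coordinates.
    """
    x1, y1, x2, y2 = box
    positive, negative = [], []
    for r in range(height):            # horizontal sweeping lines
        if y1 <= r <= y2:              # crosses the tight box -> positive bag
            positive.append([(r, c) for c in range(x1, x2 + 1)])
        else:                          # entirely outside -> negative bag
            negative.append([(r, c) for c in range(width)])
    for c in range(width):             # vertical sweeping lines
        if x1 <= c <= x2:
            positive.append([(r, c) for r in range(y1, y2 + 1)])
        else:
            negative.append([(r, c) for r in range(height)])
    return positive, negative
```

A standard MIL loss then pushes the maximum foreground score within each positive bag up and all scores within negative bags down, yielding a figure-ground separation without pixel labels.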
- Saliency detection.
Top-down saliency detection aims to highlight the regions of a specific object category and typically relies on pixel-wise annotated training data. Our TIP 2019 paper addresses the high cost of collecting such data with a weakly supervised approach to salient object detection, where only image-level labels, indicating the presence or absence of a target object in an image, are available.
Our ECCV 2018 paper addresses co-saliency detection in a set of images jointly covering objects of a specific class with an unsupervised convolutional neural network (CNN). Our method requires no additional training data in the form of object masks. We decompose co-saliency detection into two sub-tasks, single-image saliency detection and cross-image co-occurrence region discovery, which are jointly optimized to generate high-quality co-saliency maps.
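The intuition behind the decomposition can be shown with a toy fusion rule. The function and the multiplicative combination below are our illustration (the paper uses graphical optimization to couple the two sub-tasks): a pixel is co-salient when it is both salient within its own image and part of a region that co-occurs across the image set.

```python
import numpy as np

def fuse_co_saliency(saliency_maps, cooccurrence_maps):
    """Hypothetical fusion of the two sub-task outputs (illustrative only).

    saliency_maps: per-image single-image saliency maps.
    cooccurrence_maps: per-image cross-image co-occurrence maps.
    Returns co-saliency maps rescaled to [0, 1].
    """
    fused = []
    for s, c in zip(saliency_maps, cooccurrence_maps):
        m = s * c                                        # both cues must agree
        m = (m - m.min()) / (m.max() - m.min() + 1e-8)   # rescale to [0, 1]
        fused.append(m)
    return fused
```

Joint optimization, as opposed to this one-shot fusion, lets each sub-task refine the other during training.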
Publications
- Weakly Supervised Instance Segmentation using the Bounding Box Tightness Prior
- Cheng-Chun Hsu, Kuang-Jui Hsu, Chung-Chi Tsai, Yen-Yu Lin, and Yung-Yu Chuang
- NeurIPS 2019
- DeepCO3: Deep Instance Co-segmentation by Co-peak Search and Co-saliency Detection
- Kuang-Jui Hsu, Yen-Yu Lin, and Yung-Yu Chuang
- CVPR 2019
- Weakly Supervised Salient Object Detection by Learning A Classifier-Driven Map Generator
- Kuang-Jui Hsu, Yen-Yu Lin, and Yung-Yu Chuang
- IEEE TIP 2019
- Unsupervised CNN-based Co-saliency Detection with Graphical Optimization
- Kuang-Jui Hsu, Chung-Chi Tsai, Yen-Yu Lin, Xiaoning Qian, and Yung-Yu Chuang
- ECCV 2018
- Co-attention CNNs for Unsupervised Object Co-segmentation
- Kuang-Jui Hsu, Yen-Yu Lin, and Yung-Yu Chuang
- IJCAI 2018
cyy -a-t- csie.ntu.edu.tw