Smart Scribbles for Image Matting

Xin Yang*, Yu Qiao*, Shaozhe Chen, Shengfeng He, Baocai Yin, Qiang Zhang, Xiaopeng Wei, Rynson W.H. Lau

(* Joint first authors       † corresponding author)

ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 2020

We propose a new framework that produces high-quality alpha mattes from limited scribbles. Users only need to draw scribbles on informative regions to indicate the foreground, background and unknown areas. More detailed comparisons and analysis can be found in our paper.

Abstract

Image matting is an ill-posed problem that usually requires additional user input, such as trimaps or scribbles. Drawing a fine trimap requires a large amount of user effort, while non-professional users can hardly obtain satisfactory alpha mattes with scribbles alone. Some recent deep learning based matting networks rely on large-scale composite datasets for training to improve performance, resulting in the occasional appearance of obvious artifacts when processing natural images. In this paper, we explore the intrinsic relationship between user input and alpha mattes, and strike a balance between user effort and the quality of alpha mattes. In particular, we propose an interactive framework, referred to as smart scribbles, to guide users to draw a few scribbles on the input images to produce high-quality alpha mattes. It first infers the most informative regions of an image for drawing scribbles to indicate different categories (foreground, background or unknown), then spreads these scribbles (i.e., the category labels) to the rest of the image via our well-designed two-phase propagation. Both neighboring low-level affinities and high-level semantic features are considered during the propagation process. Our method can be optimized without large-scale matting datasets, and exhibits more universality in real situations. Extensive experiments demonstrate that smart scribbles can produce more accurate alpha mattes with reduced additional input, compared to the state-of-the-art matting methods.

Method

The pipeline of the proposed method: The input image is first over-segmented into superpixels and then divided into regular rectangular regions of equal size. We compute the information content of each region, and the most informative region is automatically selected for the user to draw scribbles on, specifying the foreground (in red), background (in blue) and unknown areas (in green). These labels are then propagated to unlabeled regions to update the probability matrix (PrM) via two-phase propagation. During CNN propagation, we gather the bounding rectangles of all superpixels as input; the superpixels covered by scribbles are used for training, and the trained model predicts category labels for the remaining superpixels. After CNN propagation, we update the PrM to generate a refined trimap, and the final matte is produced by an embedded existing matting algorithm.
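To make the region-selection step concrete, below is a minimal Python sketch, not the authors' implementation: it over-segments the image with SLIC superpixels, splits it into a regular grid of rectangles, and scores each rectangle with a simple entropy-based measure as a stand-in for the paper's information-content criterion. The helper name select_informative_region, the grid size and the scoring formula are illustrative assumptions.

```python
# A minimal sketch (not the paper's code) of the region-selection step:
# SLIC superpixels + a regular grid, scored by a simple entropy-based
# stand-in for the paper's information-content measure.
import numpy as np
from skimage.color import rgb2gray
from skimage.segmentation import slic

def select_informative_region(image, n_segments=400, grid=(8, 8)):
    """Return the superpixel label map and the (y0, y1, x0, x1) bounds of
    the grid rectangle with the highest (assumed) information score."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    gray = rgb2gray(image)
    h, w = labels.shape

    best_score, best_box = -np.inf, None
    rows, cols = grid
    for i in range(rows):
        for j in range(cols):
            y0, y1 = i * h // rows, (i + 1) * h // rows
            x0, x1 = j * w // cols, (j + 1) * w // cols
            patch = gray[y0:y1, x0:x1]

            # Stand-in score: intensity entropy of the rectangle, weighted by
            # how many distinct superpixels it overlaps (more boundaries and
            # texture suggest a more informative region to scribble on).
            counts, _ = np.histogram(patch, bins=32, range=(0.0, 1.0))
            p = counts / max(counts.sum(), 1)
            p = p[p > 0]
            entropy = -np.sum(p * np.log(p))
            n_spx = len(np.unique(labels[y0:y1, x0:x1]))
            score = entropy * n_spx

            if score > best_score:
                best_score, best_box = score, (y0, y1, x0, x1)
    return labels, best_box
```

Once the refined trimap is available, any off-the-shelf matting algorithm can be embedded to produce the final alpha matte. As one example (an assumption, not necessarily the algorithm used in the paper), closed-form matting from the pymatting package:

```python
from pymatting import estimate_alpha_cf

# image: HxWx3 float array in [0, 1]; trimap: HxW float array where
# 0 = background, 1 = foreground, and values in between = unknown.
alpha = estimate_alpha_cf(image, trimap)
```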

Downloads

Source Code: Code

Paper: Paper


BibTex
@article{Yang2020Smart,
	author = {Yang, Xin and Qiao, Yu and Chen, Shaozhe and He, Shengfeng and Yin, Baocai and Zhang, Qiang and Wei, Xiaopeng and Lau, Rynson W. H.},
	title = {Smart Scribbles for Image Matting},
	journal = {ACM Transactions on Multimedia Computing, Communications, and Applications},
	year = {2020},
	volume = {16},
	number = {4},
}