FocalClick: Towards Practical Interactive Image Segmentation

Interactive segmentation allows users to extract target masks by making positive/negative clicks. Although explored by many previous works, there is still a gap between academic approaches and industrial needs: first, existing models are not efficient enough to run on low-power devices; second, they perform poorly when used to refine preexisting masks, since they cannot avoid destroying the parts that are already correct. FocalClick solves both issues at once by predicting and updating the mask only in localized areas. For higher efficiency, we decompose the slow prediction on the entire image into two fast inferences on small crops: a coarse segmentation on the Target Crop, and a local refinement on the Focus Crop. To make the model work with preexisting masks, we formulate a sub-task termed Interactive Mask Correction and propose Progressive Merge as the solution. Progressive Merge exploits morphological information to decide where to preserve and where to update, enabling users to refine any preexisting mask effectively. FocalClick achieves results competitive with SOTA methods at significantly smaller FLOPs, and shows a clear advantage when correcting preexisting masks. Code and data will be released at github.com/XavierCHEN34/ClickSEG
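The Progressive Merge idea described above can be illustrated with a small sketch. This is a conceptual toy, not the authors' implementation: it uses a flood fill with 4-connectivity on the difference map to update only the changed region connected to the latest click, preserving every other part of the preexisting mask. The function name and connectivity choice are assumptions for illustration.

```python
from collections import deque

import numpy as np


def progressive_merge(prev_mask, new_pred, click_yx):
    """Merge a new prediction into a preexisting mask around one click.

    Only the connected region of the difference map (prev XOR new) that
    contains the latest click is updated; all other pixels keep their
    value from prev_mask. Conceptual sketch with 4-connectivity.
    """
    diff = prev_mask ^ new_pred
    out = prev_mask.copy()
    if not diff[click_yx]:
        return out  # click landed where prediction already agrees

    h, w = diff.shape
    seen = np.zeros_like(diff, dtype=bool)
    queue = deque([click_yx])
    seen[click_yx] = True
    while queue:
        y, x = queue.popleft()
        out[y, x] = new_pred[y, x]  # adopt the new prediction here
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and diff[ny, nx] and not seen[ny, nx]:
                seen[ny, nx] = True
                queue.append((ny, nx))
    return out
```

A change far away from the click (for instance, a network hallucination that would erase a correct blob elsewhere) is disconnected from the click's region on the difference map, so it is never applied.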

CVPR 2022

Datasets


Introduced in the Paper:

DAVIS-585

Used in the Paper:

MS COCO, DAVIS, LVIS, SBD

Results from the Paper


Ranked #2 on Interactive Segmentation on DAVIS (using extra training data)

Task                     | Dataset   | Model            | Metric | Value | Global Rank
-------------------------|-----------|------------------|--------|-------|------------
Interactive Segmentation | Berkeley  | FocalClick-B3-S2 | NoC@90 | 1.48  | #2
Interactive Segmentation | DAVIS     | FocalClick-B3-S2 | NoC@85 | 2.92  | #1
Interactive Segmentation | DAVIS     | FocalClick-B3-S2 | NoC@90 | 4.52  | #2
Interactive Segmentation | DAVIS-585 | FocalClick-B3-S2 | NoC@90 | 2.76  | #2
Interactive Segmentation | SBD       | FocalClick       | NoC@85 | 3.53  | #8
Interactive Segmentation | SBD       | FocalClick       | NoC@90 | 5.59  | #7
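The NoC@T metric reported above is the mean number of clicks needed before the predicted mask first reaches IoU of at least T (e.g. NoC@90 for IoU ≥ 0.90). A minimal sketch of how it is computed from per-click IoU traces follows; the function name and the convention of charging the click budget when the threshold is never reached are assumptions, though the budget convention is common in interactive segmentation benchmarks.

```python
def noc_at(iou_traces, thresh=0.90, max_clicks=20):
    """Mean Number of Clicks (NoC@thresh) over a set of images.

    iou_traces: one list per image, giving the IoU after click 1, 2, ...
    If an image never reaches `thresh`, it is charged `max_clicks`
    (a common benchmark convention, assumed here).
    """
    counts = []
    for trace in iou_traces:
        for k, iou in enumerate(trace, start=1):
            if iou >= thresh:
                counts.append(k)
                break
        else:
            counts.append(max_clicks)  # threshold never reached
    return sum(counts) / len(counts)
```

For example, two images where the threshold is reached after 2 clicks and 1 click respectively give NoC@90 = 1.5.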
