[2111.02394] FAST: Searching for a Faster Arbitrarily-Shaped Text Detector with Minimalist Kernel Representation

Significance

Faster text detection with simpler architecture and parallel computing

Keypoints

  • Propose a faster text detection method based on neural architecture search and an efficient post-processing step
  • Demonstrate the detection performance of the proposed method with respect to inference speed through experiments

Review

Background

Scene text detection is one of the computer vision tasks to which deep neural networks have been successfully applied. Although a number of previous methods achieve excellent performance at a reasonable speed, there is still a need for faster methods that preserve detection performance for real-time applications on smaller devices. The authors address this issue by updating a previous neural-architecture-search-based method and proposing an efficient post-processing step to achieve faster and better text detection.

Keypoints

Propose a faster text detection method based on neural architecture search and an efficient post-processing step

211104-1 Schematic illustration of the proposed method

The main architecture of the proposed method largely inherits ProxylessNAS, but with a different combination of convolutional kernel shapes in the learnable block $L_{i}$. (The authors claim that combining vertical $3 \times 1$ and horizontal $1 \times 3$ kernels within the learnable block captures the features of extreme-aspect-ratio text lines, but I think this claim requires further theoretical/experimental verification.) The optimal model $m$ is chosen from the over-parameterized initial network with a reward function $\mathcal{R}(m)$ that incorporates both the detection performance metric IoU and the speed metric FPS: \begin{equation} \mathcal{R}(m) = \left(\text{IoU}_{k}(m) + \alpha \text{IoU}_{t}(m)\right) \times \left(\frac{\text{FPS}(m)}{T}\right)^{w}. \end{equation}
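The reward above can be sketched as a simple scalar function. This is a minimal illustration, not the paper's implementation; the values of $\alpha$, the target FPS $T$, and the exponent $w$ below are hypothetical placeholders.

```python
def reward(iou_k, iou_t, fps, alpha=0.5, target_fps=100.0, w=0.1):
    """Sketch of the NAS reward R(m).

    iou_k: IoU measured on the text kernel prediction.
    iou_t: IoU measured on the text region prediction.
    fps:   measured inference speed of candidate model m.
    alpha, target_fps (T), w: hypothetical hyperparameters, not the
    paper's values. The speed term rewards models faster than T and
    penalizes slower ones, with w controlling the trade-off strength.
    """
    return (iou_k + alpha * iou_t) * (fps / target_fps) ** w
```

Note that with a small $w$, the speed factor changes slowly, so accuracy dominates the reward unless a candidate is far from the target FPS.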

The feature map $F$ output by the model $m$ is used to predict the text kernel label rather than the coordinates of the bounding box.

211104-2 Generating region and kernel labels from the bounding box

The loss function is accordingly set to the DICE loss, which represents the degree of region overlap between the prediction and the label mask. One important point is that the eroded text kernel and the dilated text region can be bidirectionally mapped by an invertible and differentiable function, enabling end-to-end training and GPU computation for more efficient inference.
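The DICE loss on binary masks is $1 - 2|P \cap G| / (|P| + |G|)$. A minimal NumPy sketch (the smoothing constant `eps` is an assumption I add for numerical stability, not a detail from the paper):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Dice loss between a predicted probability mask and a binary
    ground-truth mask: 1 - 2*|intersection| / (|pred| + |target|).
    Returns 0 for a perfect overlap and approaches 1 for no overlap."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Because the loss is expressed in terms of region overlap rather than per-pixel counts, it is less sensitive to the foreground/background imbalance typical of text kernel masks.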

Demonstrate the detection performance of the proposed method with respect to inference speed through experiments

The proposed method is extensively compared with previous methods on various datasets in terms of detection performance with respect to inference speed. Baseline methods include TextSnake, TextField, CRAFT, LOMO, SPCNet, and PSENet for non-real-time methods, and EAST, DB-R50, DB-R18, PAN, and PAN++ for real-time methods. Comparison of precision, recall, and F-score with respect to FPS demonstrates the solid performance/speed trade-off of the proposed method on the Total-Text, CTW1500, ICDAR2015, and MSRA-TD500 datasets.

211104-3 Performance curve of F-measure with respect to FPS

211104-5 Quantitative performance of the proposed method for the Total-Text dataset

211104-6 Quantitative performance of the proposed method for the CTW1500 dataset

211104-7 Qualitative performance of the proposed method

Quantitative results of the proposed method on ICDAR2015 and MSRA-TD500 can be found in the original paper.


#computer-vision #object-detection #scene-text-detection #neural-architecture-search