
Journal of Computers (indexed in EI, MEDLINE, Scopus)

Title: Focus on Specific-Video Objects: Learning Various Sample Representations for Visual Tracking
Volume/Issue: 31:5
Authors: Bo-Yan Zhang, Yong Zhong
Pages: 112-126
Keywords: clustering; deep learning; generative model; object tracking
Publication Date: October 2020
DOI: 10.3966/199115992020103105009

Abstract

Visual object tracking is one of the most challenging tasks in computer vision. Many trackers achieve impressive performance, but there is still room for improvement, especially in tough cases such as fast motion, blur, and rotation. Deep feature-based trackers have been widely adopted for their outstanding representational ability, but their performance suffers from over-fitting caused by a lack of sufficient labeled training data, as well as from similar distractors. Moreover, the categories of targets in the tracking task are diverse. In this paper, we introduce a positive data augmentation module (PDAM) in the offline phase to generate various positive samples. The generated samples, together with the original data, are clustered to form different classes of training data. Each class is used to train one of multiple deep tracking models with an identical structure. At the tracking stage, a selection module chooses the most suitable pretrained tracking model according to the target information in the given video sequence. We conducted experiments comparing our method with several state-of-the-art trackers on a standard benchmark. The results show that the proposed method achieves excellent tracking performance and robustness on videos involving deformation, scale variation, and motion blur.
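The offline cluster-then-train and online model-selection steps described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the feature vectors are random stand-ins for sample representations, k-means stands in for the unspecified clustering method, and `select_model` is a hypothetical selection module that picks the pretrained model whose training cluster is nearest to the target's feature.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature vectors for the augmented + original positive samples
# (random placeholders; the paper would use deep features instead).
rng = np.random.default_rng(0)
sample_features = rng.normal(size=(200, 16))

# Offline phase: cluster the sample pool into classes; each class would be
# used to train one of the structurally identical tracking models.
n_models = 4
kmeans = KMeans(n_clusters=n_models, n_init=10, random_state=0)
labels = kmeans.fit_predict(sample_features)

def select_model(target_feature, centroids):
    """Online phase: choose the pretrained model whose training cluster's
    centroid is closest to the target's feature from the given video."""
    dists = np.linalg.norm(centroids - target_feature, axis=1)
    return int(np.argmin(dists))

# Feature of the target in a new video sequence (placeholder).
target = rng.normal(size=16)
model_idx = select_model(target, kmeans.cluster_centers_)
```

In this sketch, `model_idx` indexes into the bank of pretrained trackers; only that model is then run on the sequence, which keeps per-video cost equal to running a single tracker.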
