Citation: Zhao Hongwei, He Jinsong. Saliency detection method fused depth information based on Bayesian framework[J]. Opto-Electronic Engineering, 2018, 45(2): 170341. doi: 10.12086/oee.2018.170341

Saliency detection method fused depth information based on Bayesian framework

Abstract
  • Against complex backgrounds, traditional saliency detection methods often suffer from unstable detection results and low accuracy. To address this problem, a saliency detection method that fuses depth information under a Bayesian framework is proposed. First, the color saliency map is obtained using a variety of contrast methods, which include global contrast, local contrast, and foreground-background contrast, and the depth saliency map is obtained using a depth-contrast method based on the anisotropic center-surround difference. Second, a Bayesian model is used to fuse the color-based saliency map and the depth-based saliency map; the core fusion formula is sketched below. Experimental results show that the proposed method can effectively detect salient targets against complex backgrounds and achieves higher detection accuracy on the published NLPR-RGBD and NJU-DS400 datasets.
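For concreteness, the Bayesian fusion step can be written out as follows. This is a minimal sketch using the standard two-map Bayesian fusion formulation; the exact form of the likelihood terms used in the paper is not specified on this page, so their estimation is an assumption here.

    % Treat the depth saliency S_d(z) at pixel z as the prior probability of
    % foreground F; the likelihoods of the color feature c(z) are estimated
    % from the color saliency map (estimation details are an assumption).
    \[
    P\bigl(F \mid c(z)\bigr)
      = \frac{S_d(z)\, p\bigl(c(z) \mid F\bigr)}
             {S_d(z)\, p\bigl(c(z) \mid F\bigr) + \bigl(1 - S_d(z)\bigr)\, p\bigl(c(z) \mid B\bigr)}
    \]
    % Exchanging the roles of the two maps (color saliency S_c(z) as the
    % prior, depth feature d(z) in the likelihood) yields a second posterior
    % P(F | d(z)); the final saliency map is the product of the two posteriors:
    \[
    S(z) = P\bigl(F \mid c(z)\bigr) \cdot P\bigl(F \mid d(z)\bigr)
    \]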
References

    [1] Li M, Chen K, Guo C M, et al. Abnormal crowd event detection by fusing saliency information and social force model[J]. Opto-Electronic Engineering, 2016, 43(12): 193-199. doi: 10.3969/j.issn.1003-501X.2016.12.029

    [2] Zhang X D, Wang H, Jiang M S, et al. Applications of saliency analysis in focus image fusion[J]. Opto-Electronic Engineering, 2017, 44(4): 435-441.

    [3] Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(11): 1254-1259. doi: 10.1109/34.730558

    [4] Achanta R, Hemami S, Estrada F, et al. Frequency-tuned salient region detection[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 2009: 1597-1604.

    [5] Hou X D, Zhang L Q. Saliency detection: a spectral residual approach[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 2007: 1-8.

    [6] Zhai Y, Shah M. Visual attention detection in video sequences using spatiotemporal cues[C]//Proceedings of the 14th ACM International Conference on Multimedia, Santa Barbara, CA, USA, 2006: 815-824.

    [7] Harel J, Koch C, Perona P. Graph-based visual saliency[C]//Proceedings of the 19th International Conference on Neural Information Processing Systems, Canada, 2006: 545-552.

    [8] Cheng M M, Zhang G X, Mitra N J, et al. Global contrast based salient region detection[C]//Proceedings of 2011 IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 2011: 409-416.

    [9] Cheng M M, Mitra N J, Huang X L, et al. Global contrast based salient region detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(3): 569-582. doi: 10.1109/TPAMI.2014.2345401

    [10] Lu H C, Li X H, Zhang L H, et al. Dense and sparse reconstruction error based saliency descriptor[J]. IEEE Transactions on Image Processing, 2016, 25(4): 1592-1603. doi: 10.1109/TIP.2016.2524198

    [11] Desingh K, Madhava Krishna K, Rajan D, et al. Depth really matters: improving visual salient region detection with depth[C]//Proceedings of the British Machine Vision Conference, 2013.

    [12] Peng H W, Li B, Xiong W H, et al. RGBD salient object detection: a benchmark and algorithms[C]//Proceedings of the 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 92-109.

    [13] Ren J Q, Gong X J, Yu L, et al. Exploiting global priors for RGB-D saliency detection[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Boston, MA, USA, 2015: 25-32.

    [14] Lin C, He B W, Dong S S. An indoor object fast detection method based on visual attention mechanism of fusion depth information in RGB image[J]. Chinese Journal of Lasers, 2014, 41(11): 1108005.

    [15] Zhang Y, Jiang G Y, Yu M, et al. Stereoscopic visual attention model for 3D video[C]//Proceedings of the 16th International Multimedia Modeling Conference on Advances in Multimedia Modeling, Chongqing, China, 2010: 314-324.

    [16] Shen X H, Wu Y. A unified approach to salient object detection via low rank matrix recovery[C]//Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 2012: 853-860.

    [17] Borji A, Itti L. Exploiting local and global patch rarities for saliency detection[C]//Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 2012: 478-485.

    [18] Perazzi F, Krähenbühl P, Pritch Y, et al. Saliency filters: contrast based filtering for salient region detection[C]//Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 2012: 733-740.

    [19] Achanta R, Shaji A, Smith K, et al. SLIC superpixels compared to state-of-the-art superpixel methods[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(11): 2274-2282. doi: 10.1109/TPAMI.2012.120

    [20] Wolfe J M, Horowitz T S. What attributes guide the deployment of visual attention and how do they do it?[J]. Nature Reviews Neuroscience, 2004, 5(6): 495-501. doi: 10.1038/nrn1411

    [21] Ju R, Ge L, Geng W J, et al. Depth saliency based on anisotropic center-surround difference[C]//Proceedings of IEEE International Conference on Image Processing, Paris, France, 2014: 1115-1119.

  • Overview: Saliency detection aims to detect salient objects in an image and filter out background noise by simulating the human visual attention mechanism. Most current saliency detection methods rely on the color difference between the salient object and the background while ignoring depth information, which has been proven to be important in the human cognitive system. This leads to unsatisfactory detection results, especially when the salient object appears in a low-contrast background with a confusing visual appearance. To address this problem, we present a saliency detection method that fuses depth information under a Bayesian framework. First, to reduce computational complexity, we apply the SLIC algorithm [19] to the RGB image and the depth image. Second, we extract the color and spatial information of each superpixel from the input RGB image and obtain the color-based saliency map using a variety of contrast methods, which include global contrast, local contrast, and foreground-background contrast; meanwhile, we extract the depth information of each superpixel from the input depth image and obtain the depth-based saliency map based on the anisotropic center-surround difference [21]. Third, an object-biased Gaussian model is applied to both the color-based and the depth-based saliency maps to further filter out background noise. Finally, we fuse the two maps under the Bayesian framework: the depth-based saliency map serves as the prior probability, the likelihood is computed from the color-based saliency map, and a posterior probability is obtained from the Bayes formula; exchanging the roles of the two maps yields another posterior probability, and the final saliency map is defined as the product of the two posteriors. The code sketches below illustrate the main steps of this pipeline. Our approach is evaluated on the published NLPR-RGBD and NJU-DS400 datasets, and the experimental results show that it can effectively detect the salient object in a low-contrast background with a confusing visual appearance by fusing color and depth information. Furthermore, measured by precision and F-measure scores, our approach outperforms four other prevailing methods.
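To make the first two steps concrete, here is a minimal Python sketch of the superpixel segmentation and a simplified global-contrast color saliency term. It uses scikit-image's SLIC implementation; the parameter values (n_segments, compactness, the spatial falloff 0.4) and the use of global contrast alone (the paper additionally combines local and foreground-background contrast) are illustrative assumptions, not the paper's exact implementation.

    # Minimal sketch: SLIC superpixels + a simplified global-contrast
    # color saliency term. Parameters are illustrative, not the paper's.
    import numpy as np
    from skimage import color
    from skimage.segmentation import slic

    def global_contrast_saliency(rgb, n_segments=200, compactness=20.0):
        """rgb: H x W x 3 uint8 array; returns a pixel-level saliency map in [0, 1]."""
        lab = color.rgb2lab(rgb)

        # Segment the image into superpixels (labels start at 0).
        labels = slic(rgb, n_segments=n_segments, compactness=compactness,
                      start_label=0)
        n = labels.max() + 1

        # Mean Lab color, mean normalized position, and size per superpixel.
        mean_lab = np.array([lab[labels == i].mean(axis=0) for i in range(n)])
        h, w = labels.shape
        ys, xs = np.mgrid[0:h, 0:w]
        pos = np.array([[ys[labels == i].mean() / h, xs[labels == i].mean() / w]
                        for i in range(n)])
        sizes = np.bincount(labels.ravel(), minlength=n).astype(float)

        # Global contrast: each superpixel's saliency is its size-weighted
        # color distance to all others, attenuated by spatial distance.
        sal = np.zeros(n)
        for i in range(n):
            d_color = np.linalg.norm(mean_lab - mean_lab[i], axis=1)
            d_pos = np.linalg.norm(pos - pos[i], axis=1)
            sal[i] = np.sum(sizes * d_color * np.exp(-d_pos / 0.4))
        sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

        # Map superpixel saliency back to a pixel-level map.
        return sal[labels]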
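The depth-based saliency follows the anisotropic center-surround difference (ACSD) idea of Ju et al. [21]: a point whose depth stands out from its surroundings along many scanning directions is likely salient. The sketch below is a heavily simplified, pixel-level reading of that idea; the eight fixed directions, the scan length, the sampling step, and the convention that larger depth values mean farther are all assumptions, and the original method uses multi-scale scanning paths rather than this uniform sampling.

    # Simplified sketch of an anisotropic center-surround depth term:
    # for each pixel, compare its depth with points sampled along eight
    # scanning directions and accumulate the clipped differences.
    import numpy as np

    DIRS = [(0, 1), (0, -1), (1, 0), (-1, 0),
            (1, 1), (1, -1), (-1, 1), (-1, -1)]

    def acsd_like_depth_saliency(depth, scan_len=40, step=4):
        """depth: 2-D array, larger value = farther (assumed convention)."""
        h, w = depth.shape
        depth = depth.astype(float)
        # Pad with edge values so rays near the border stay in range.
        padded = np.pad(depth, scan_len, mode='edge')
        sal = np.zeros((h, w))
        for dy, dx in DIRS:
            acc = np.zeros((h, w))
            n_samples = 0
            for r in range(step, scan_len + 1, step):
                # Sample the depth map shifted by r pixels along this ray.
                sampled = padded[scan_len + dy * r: scan_len + dy * r + h,
                                 scan_len + dx * r: scan_len + dx * r + w]
                # Pixels closer than their surround (smaller depth than the
                # sampled point) receive a positive response.
                acc += np.clip(sampled - depth, 0, None)
                n_samples += 1
            sal += acc / n_samples
        sal /= len(DIRS)
        return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)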
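Finally, a sketch of the object-biased Gaussian refinement and the Bayesian fusion itself. The Gaussian is centered on the saliency-weighted centroid rather than the image center (a common way to realize an object bias; the variance setting sigma is an assumption), and the likelihoods are estimated from histograms of one map inside and outside the foreground region suggested by the other, which is one standard way to instantiate the formula given after the abstract.

    # Sketch: object-biased Gaussian refinement + Bayesian fusion of the
    # color-based (s_c) and depth-based (s_d) saliency maps in [0, 1].
    # sigma and the histogram-based likelihoods are illustrative choices.
    import numpy as np

    def object_biased_gaussian(sal, sigma=0.35):
        h, w = sal.shape
        ys, xs = np.mgrid[0:h, 0:w]
        # Center the Gaussian on the saliency-weighted centroid.
        cy = (ys * sal).sum() / (sal.sum() + 1e-12)
        cx = (xs * sal).sum() / (sal.sum() + 1e-12)
        g = np.exp(-(((ys - cy) / (sigma * h)) ** 2
                     + ((xs - cx) / (sigma * w)) ** 2) / 2)
        return sal * g

    def bayes_posterior(prior, feat, bins=16):
        """Posterior P(F|feat): `prior` acts as p(F); likelihoods are histograms
        of `feat` inside/outside the prior's rough foreground region."""
        fg = prior >= prior.mean()
        edges = np.linspace(0.0, 1.0, bins + 1)
        idx = np.clip(np.digitize(feat, edges) - 1, 0, bins - 1)
        p_fg = np.bincount(idx[fg], minlength=bins) / max(fg.sum(), 1)
        p_bg = np.bincount(idx[~fg], minlength=bins) / max((~fg).sum(), 1)
        like_f, like_b = p_fg[idx], p_bg[idx]
        return prior * like_f / (prior * like_f + (1 - prior) * like_b + 1e-12)

    def fuse(s_c, s_d):
        s_c = object_biased_gaussian(s_c)
        s_d = object_biased_gaussian(s_d)
        # Two posteriors with exchanged roles; the final map is their product.
        post_cd = bayes_posterior(prior=s_d, feat=s_c)
        post_dc = bayes_posterior(prior=s_c, feat=s_d)
        s = post_cd * post_dc
        return (s - s.min()) / (s.max() - s.min() + 1e-12)

Taking the product of the two posteriors is a conservative fusion rule: a region survives only if both the color cue and the depth cue support it, which is what suppresses low-contrast background clutter that fools either cue alone.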
