Citation: Wang B, Bai Y Q, Zhu Z J, et al. No-reference light field image quality assessment based on joint spatial-angular information[J]. Opto-Electron Eng, 2024, 51(9): 240139. doi: 10.12086/oee.2024.240139
[1] Zuo C, Chen Q. Computational optical imaging: an overview[J]. Infrared Laser Eng, 2022, 51(2): 20220110. doi: 10.3788/IRLA20220110
[2] Xiang J J, Jiang G Y, Yu M, et al. No-reference light field image quality assessment using four-dimensional sparse transform[J]. IEEE Trans Multimedia, 2023, 25: 457−472. doi: 10.1109/TMM.2021.3127398
[3] Lv T Q, Wu Y C, Zhao X L. Light field image super-resolution network based on angular difference enhancement[J]. Opto-Electron Eng, 2023, 50(2): 220185. doi: 10.12086/oee.2023.220185
[4] Yu M, Liu C. Single exposure light field imaging based all-in-focus image reconstruction technology[J]. J Appl Opt, 2021, 42(1): 71−78. doi: 10.5768/JAO202142.0102004
[5] Tian Y, Zeng H Q, Xing L, et al. A multi-order derivative feature-based quality assessment model for light field image[J]. J Vis Commun Image Represent, 2018, 57: 212−217. doi: 10.1016/j.jvcir.2018.11.005
[6] Huang H L, Zeng H Q, Hou J H, et al. A spatial and geometry feature-based quality assessment model for the light field images[J]. IEEE Trans Image Process, 2022, 31: 3765−3779. doi: 10.1109/TIP.2022.3175619
[7] Min X K, Zhou J T, Zhai G T, et al. A metric for light field reconstruction, compression, and display quality evaluation[J]. IEEE Trans Image Process, 2020, 29: 3790−3804. doi: 10.1109/TIP.2020.2966081
[8] Shi L K, Zhou W, Chen Z B, et al. No-reference light field image quality assessment based on spatial-angular measurement[J]. IEEE Trans Circuits Syst Video Technol, 2020, 30(11): 4114−4128. doi: 10.1109/TCSVT.2019.2955011
[9] Luo Z Y, Zhou W, Shi L K, et al. No-reference light field image quality assessment based on micro-lens image[C]//2019 Picture Coding Symposium (PCS), 2019: 1–5. doi: 10.1109/PCS48520.2019.8954551
[10] Xiang J J, Yu M, Chen H, et al. VBLFI: visualization-based blind light field image quality assessment[C]//2020 IEEE International Conference on Multimedia and Expo (ICME), 2020: 1–6. doi: 10.1109/ICME46284.2020.9102963
[11] Lamichhane K, Battisti F, Paudyal P, et al. Exploiting saliency in quality assessment for light field images[C]//2021 Picture Coding Symposium (PCS), 2021: 1–5. doi: 10.1109/PCS50896.2021.9477451
[12] Zhou W, Shi L K, Chen Z B, et al. Tensor oriented no-reference light field image quality assessment[J]. IEEE Trans Image Process, 2020, 29: 4070−4084. doi: 10.1109/TIP.2020.2969777
[13] Xiang J J, Yu M, Jiang G Y, et al. Pseudo video and refocused images-based blind light field image quality assessment[J]. IEEE Trans Circuits Syst Video Technol, 2021, 31(7): 2575−2590. doi: 10.1109/TCSVT.2020.3030049
[14] Alamgeer S, Farias M C Q. No-reference light field image quality assessment method based on a long-short term memory neural network[C]//2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), 2022: 1–6. doi: 10.1109/ICMEW56448.2022.9859419
[15] Qu Q, Chen X M, Chung V, et al. Light field image quality assessment with auxiliary learning based on depthwise and anglewise separable convolutions[J]. IEEE Trans Broadcast, 2021, 67(4): 837−850. doi: 10.1109/TBC.2021.3099737
[16] Qu Q, Chen X M, Chung Y Y, et al. LFACon: introducing anglewise attention to no-reference quality assessment in light field space[J]. IEEE Trans Vis Comput Graph, 2023, 29(5): 2239−2248. doi: 10.1109/TVCG.2023.3247069
[17] Zhao P, Chen X M, Chung V, et al. DeLFIQE: a low-complexity deep learning-based light field image quality evaluator[J]. IEEE Trans Instrum Meas, 2021, 70: 5014811. doi: 10.1109/TIM.2021.3106113
[18] Zhang Z Y, Tian S S, Zou W B, et al. DeeBLiF: deep blind light field image quality assessment by extracting angular and spatial information[C]//2022 IEEE International Conference on Image Processing (ICIP), 2022: 2266–2270. doi: 10.1109/ICIP46576.2022.9897951
[19] He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 770–778. doi: 10.1109/CVPR.2016.90
[20] Lu S Y, Ding Y M, Liu M Z, et al. Multiscale feature extraction and fusion of image and text in VQA[J]. Int J Comput Intell Syst, 2023, 16(1): 54. doi: 10.1007/s44196-023-00233-6
[21] Shi L K, Zhao S Y, Zhou W, et al. Perceptual evaluation of light field image[C]//2018 25th IEEE International Conference on Image Processing (ICIP), 2018: 41–45. doi: 10.1109/ICIP.2018.8451077
[22] Huang Z J, Yu M, Jiang G Y, et al. Reconstruction distortion oriented light field image dataset for visual communication[C]//2019 International Symposium on Networks, Computers and Communications (ISNCC), 2019: 1–5. doi: 10.1109/ISNCC.2019.8909170
[23] Shan L, An P, Meng C L, et al. A no-reference image quality assessment metric by multiple characteristics of light field images[J]. IEEE Access, 2019, 7: 127217−127229. doi: 10.1109/ACCESS.2019.2940093
[24] Mittal A, Moorthy A K, Bovik A C. No-reference image quality assessment in the spatial domain[J]. IEEE Trans Image Process, 2012, 21(12): 4695−4708. doi: 10.1109/TIP.2012.2214050
[25] Li Q H, Lin W S, Fang Y M. No-reference quality assessment for multiply-distorted images in gradient domain[J]. IEEE Signal Process Lett, 2016, 23(4): 541−545. doi: 10.1109/LSP.2016.2537321
[26] Meng C L, An P, Huang X P, et al. Full reference light field image quality evaluation based on angular-spatial characteristic[J]. IEEE Signal Process Lett, 2020, 27: 525−529. doi: 10.1109/LSP.2020.2982060
[27] Shi L K, Zhao S Y, Chen Z B. BELIF: blind quality evaluator of light field image with tensor structure variation index[C]//2019 IEEE International Conference on Image Processing (ICIP), 2019: 3781–3785. doi: 10.1109/ICIP.2019.8803559
[28] Zhang Z Y, Tian S S, Zou W B, et al. Blind quality assessment of light field image based on spatio-angular textural variation[C]//2023 IEEE International Conference on Image Processing (ICIP), 2023: 2385–2389. doi: 10.1109/ICIP49359.2023.10222216
[29] Rerabek M, Ebrahimi T. New light field image dataset[C]//8th International Conference on Quality of Multimedia Experience (QoMEX), 2016: 1–2.
[30] VQEG. Final report from the Video Quality Experts Group on the validation of objective models of video quality assessment, 2000[EB/OL]. http://www.vqeg.org.
Light field imaging, as an emerging form of media communication, differs from traditional 2D and stereoscopic imaging in that it captures both the intensity of light in a scene and the directional information of light rays in free space. Owing to this rich spatial and angular information, light field imaging is widely used in depth estimation, refocusing, and 3D reconstruction. However, during acquisition, compression, transmission, and reconstruction, light field images inevitably suffer various distortions that degrade their quality, so light field image quality assessment (LFIQA) plays a crucial role in improving the quality of these images. Exploiting the characteristics of light field images, this paper proposes a deep-learning-based no-reference image quality assessment (NRIQA) scheme that combines spatial-angular information with epipolar plane image (EPI) information. Specifically, the approach estimates the overall quality of a distorted light field image from the perceptual quality of its image blocks. To simulate human visual perception, two multi-scale feature extraction methods are employed to establish fine-grained correlations between local and global features, thereby capturing spatial and angular distortion information. In view of the distinctive angular properties of light field images, a bidirectional EPI feature learning network is further designed to extract vertical and horizontal disparity information, strengthening the treatment of angular-consistency distortions. Finally, the three types of features are aggregated to predict the quality of the distorted image. Experimental results on three publicly available light field image quality assessment datasets demonstrate that the proposed method achieves higher consistency between objective quality predictions and subjective evaluations, exhibiting excellent predictive accuracy.
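As a rough illustration of the pipeline described above (patch-level spatial-angular and bidirectional EPI features fused into a single quality score), the following PyTorch sketch shows one way such a fusion regressor could be wired. The branch structure, channel widths, and patch sizes are assumptions for illustration only; this is not the authors' SAE-BLFI implementation.

```python
# Hypothetical sketch of a patch-level quality regressor that fuses spatial,
# angular, and bidirectional EPI features; all sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TinyBranch(nn.Module):
    """Small CNN mapping an image-like input to a fixed-length feature vector."""
    def __init__(self, in_ch, feat_dim=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),            # global pooling -> (B, feat_dim, 1, 1)
        )

    def forward(self, x):
        return self.body(x).flatten(1)          # (B, feat_dim)

class QualityRegressor(nn.Module):
    """Concatenates the branch features and regresses one quality score per patch."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.spatial = TinyBranch(3, feat_dim)  # sub-aperture image patch
        self.angular = TinyBranch(3, feat_dim)  # macro-pixel (MLI) patch
        self.epi_h = TinyBranch(3, feat_dim)    # horizontal EPI slice
        self.epi_v = TinyBranch(3, feat_dim)    # vertical EPI slice
        self.head = nn.Sequential(
            nn.Linear(4 * feat_dim, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 1),
        )

    def forward(self, sai, mli, epi_h, epi_v):
        f = torch.cat([self.spatial(sai), self.angular(mli),
                       self.epi_h(epi_h), self.epi_v(epi_v)], dim=1)
        return self.head(f).squeeze(1)          # (B,) per-patch quality estimates

scores = QualityRegressor()(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64),
                            torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
print(scores.shape)
```

Per-patch predictions would then be pooled (for example, averaged) over all patches of a distorted light field to obtain the image-level score, mirroring the block-based strategy described in the abstract.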
Different representations of a light field image. (a) Micro-lens image (MLI); (b) sub-aperture images (SAIs)
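For readers unfamiliar with the two representations named in the caption, the sketch below converts between a stack of sub-aperture images (SAIs) and a micro-lens image (MLI) whose macro-pixels gather all angular samples of one spatial location. The (U, V, H, W, C) array layout is an assumption, not necessarily the paper's data format.

```python
# Illustrative SAI <-> MLI rearrangement under an assumed (U, V, H, W, C) layout.
import numpy as np

def sais_to_mli(lf):
    U, V, H, W, C = lf.shape
    # (U, V, H, W, C) -> (H, U, W, V, C) -> (H*U, W*V, C): each U x V block is a macro-pixel.
    return lf.transpose(2, 0, 3, 1, 4).reshape(H * U, W * V, C)

def mli_to_sais(mli, U, V):
    H, W, C = mli.shape[0] // U, mli.shape[1] // V, mli.shape[2]
    return mli.reshape(H, U, W, V, C).transpose(1, 3, 0, 2, 4)  # back to (U, V, H, W, C)

lf = np.random.rand(9, 9, 32, 48, 3)             # e.g. 9 x 9 angular resolution
mli = sais_to_mli(lf)
assert np.allclose(mli_to_sais(mli, 9, 9), lf)   # round trip preserves the data
```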
Overall framework of SAE-BLFI
Schematic diagram of spatial-angular separation
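For context, spatial-angular separation is often implemented on the macro-pixel image with two complementary convolutions: an angular convolution whose kernel and stride equal the angular resolution (mixing views at one spatial location) and a spatial convolution dilated by that resolution (mixing spatial neighbors within each view). The sketch below shows this idea with assumed kernel sizes and channel counts; it does not reproduce the paper's network.

```python
# Hedged sketch of spatial-angular separation on a macro-pixel image (MLI).
import torch
import torch.nn as nn

A = 5  # assumed angular resolution (A x A views)

# Angular conv: one A x A macro-pixel -> one output sample, so it mixes views only.
angular_conv = nn.Conv2d(3, 16, kernel_size=A, stride=A)

# Spatial conv: dilation A makes each tap land on the same view in neighboring
# macro-pixels, mixing spatial neighbors while keeping views separate.
spatial_conv = nn.Conv2d(3, 16, kernel_size=3, stride=1, dilation=A, padding=A)

mli = torch.rand(1, 3, 64 * A, 64 * A)      # 64 x 64 spatial, A x A angular
ang_feat = angular_conv(mli)                # (1, 16, 64, 64): angular feature per spatial location
spa_feat = spatial_conv(mli)                # (1, 16, 64*A, 64*A): spatial feature per view
print(ang_feat.shape, spa_feat.shape)
```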
EPIs of two scenes under different distortion conditions
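To make the EPI concept concrete, the sketch below slices horizontal and vertical EPIs from a light field stored as an assumed (U, V, H, W, C) array; distortions that break angular consistency typically bend or blur the oriented line structures visible in these slices.

```python
# Illustrative EPI slicing under an assumed (U, V, H, W, C) layout
# (the paper's exact slicing convention is not reproduced here).
import numpy as np

def horizontal_epi(lf, u_center, y):
    # Fix the angular row and one spatial row; stack views along the horizontal baseline.
    return lf[u_center, :, y, :, :]          # (V, W, C): line slopes encode disparity

def vertical_epi(lf, v_center, x):
    # Fix the angular column and one spatial column; stack views along the vertical baseline.
    return lf[:, v_center, :, x, :]          # (U, H, C)

lf = np.random.rand(9, 9, 128, 192, 3)
epi_h = horizontal_epi(lf, u_center=4, y=64)   # (9, 192, 3)
epi_v = vertical_epi(lf, v_center=4, x=96)     # (9, 128, 3)
print(epi_h.shape, epi_v.shape)
```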
Boxplot of SROCC distribution in K-fold cross-validation on Win5-LID and NBU-LF1.0 datasets. (a) Win5-LID; (b) NBU-LF1.0
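The boxplot summarizes per-fold SROCC values from repeated train/test splits. A minimal sketch of such a protocol is given below, with a toy linear regressor and random data standing in for the actual model and datasets; only the evaluation loop is meant to be illustrative.

```python
# Toy K-fold protocol: per-fold SROCC between predictions and subjective scores.
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
mos = rng.uniform(1, 5, size=220)             # placeholder subjective scores
features = rng.normal(size=(220, 16))         # placeholder per-image features

def train_and_predict(train_idx, test_idx):
    # Placeholder "model": a least-squares linear regressor on the toy features.
    X, y = features[train_idx], mos[train_idx]
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return np.c_[features[test_idx], np.ones(len(test_idx))] @ w

srocc_per_fold = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(mos):
    pred = train_and_predict(train_idx, test_idx)
    rho, _ = spearmanr(pred, mos[test_idx])
    srocc_per_fold.append(rho)

print(np.median(srocc_per_fold), np.std(srocc_per_fold))
```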
F-test statistical significance analysis on Win5-LID and NBU-LF1.0 datasets. (a) Win5-LID; (b) NBU-LF1.0
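A variance-ratio F-test on prediction residuals (after mapping predictions to the MOS scale) is a common way to judge whether one quality model is statistically significantly better than another. The sketch below illustrates the test with placeholder residuals; it does not reproduce the paper's exact statistical setup.

```python
# Variance-ratio F-test on prediction residuals of two quality models (toy data).
import numpy as np
from scipy.stats import f

def f_test(residuals_a, residuals_b, alpha=0.05):
    """Return (significant, variance ratio, p-value) testing whether model A's
    residual variance is significantly smaller than model B's."""
    var_a = np.var(residuals_a, ddof=1)
    var_b = np.var(residuals_b, ddof=1)
    ratio = var_b / var_a                             # > 1 favors model A
    dof_b, dof_a = len(residuals_b) - 1, len(residuals_a) - 1
    p_value = f.sf(ratio, dof_b, dof_a)               # right-tail probability
    return p_value < alpha, ratio, p_value

rng = np.random.default_rng(1)
res_a = rng.normal(scale=0.30, size=200)   # placeholder residuals of one model
res_b = rng.normal(scale=0.45, size=200)   # placeholder residuals of a baseline
print(f_test(res_a, res_b))
```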