Lin S L, Chen Y, Zhang X, et al. Dual low-light images combining color correction and structural information enhance[J]. Opto-Electron Eng, 2024, 51(9): 240142. doi: 10.12086/oee.2024.240142

Dual low-light images combining color correction and structural information enhance

    Fund Project: Supported by the National Key R&D Program of China (2023YFB3609400) and the Youth Fund of the National Natural Science Foundation of China (62101132)
  • To enhance image quality in low-light conditions, an unsupervised dual-path low-light image enhancement algorithm is proposed, integrating color correction and structural information. The algorithm utilizes a generative adversarial network (GAN) with a generator that employs a dual-branch architecture to concurrently handle color and structural details, resulting in natural color restoration and clear texture details. A spatial-discriminative block (SDB) is introduced in the discriminator to improve its judgment capability, leading to more realistic image generation. An illumination-guided color correction block (IGCB) uses illumination features to mitigate noise and artifacts in low-light environments. The selective kernel channel fusion (SKCF) and convolution attention block (CAB) modules enhance the semantic and local details of the image. Experimental results show that the algorithm outperforms classical methods on the LOL and LSRW datasets, achieving PSNR and SSIM scores of 19.89 and 0.672, respectively, on the LOLv1 dataset, and 20.08 and 0.693 on the LOLv2 dataset. Practical applications confirm its effectiveness in restoring brightness, contrast, and color in low-light images.
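The PSNR values quoted above follow the standard definition based on mean squared error. A minimal stdlib-only sketch (illustrative only, not the authors' evaluation code; pixel sequences stand in for full images):

```python
import math

def psnr(reference, enhanced, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length
    sequences of pixel values; higher means closer to the reference."""
    mse = sum((r - e) ** 2 for r, e in zip(reference, enhanced)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy example: every pixel off by 10 grey levels -> MSE = 100.
print(round(psnr([0] * 16, [10] * 16), 2))  # 28.13
```

In practice PSNR is computed per channel over whole images; SSIM, the companion metric reported above, additionally compares local luminance, contrast, and structure.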

  • During image acquisition, insufficient or uneven ambient lighting, differences in camera placement, or varying exposure settings of the same device can produce overly dark images, giving rise to the low-light image problem. Such images not only degrade the viewing experience but also pose significant challenges to feature extraction, object detection, and image understanding in image processing and machine vision, seriously limiting practical effectiveness. Although deep learning methods have achieved notable success in low-light image enhancement, several issues remain: 1) poor generalization caused by training on paired datasets; 2) noise amplification and color deviation introduced during enhancement; 3) loss of structural detail as features propagate through deep networks.

    To address these issues, an unsupervised dual-branch low-light image enhancement method combining color correction and structural information is proposed. Firstly, building on a generative adversarial network, the generator adopts a dual-branch structure that processes image color and structural detail in parallel, so that restored images have more natural colors and clearer texture details. A spatial-discriminative block (SDB) added to the discriminator strengthens its judgment capability, encouraging the generator to produce more realistic images. Secondly, an illumination-guided color correction block (IGCB) is proposed that uses the image's own illumination features as guidance to reduce the noise and artifacts that environmental factors introduce into low-light images. Finally, the proposed convolution attention block (CAB) and selective kernel channel fusion (SKCF) module enhance the semantic and local information at each level of the image. In the color branch, the correction block injects illumination features at each processing stage, strengthening the interaction between regions with different exposure levels and yielding an enhanced image rich in color and illumination information. In the structural branch, convolution attention blocks in the encoding stage perform fine-grained spatial feature optimization and enhance high-frequency information, while in the decoding stage the selective kernel channel fusion module aggregates feature information across scales, improving the network's texture recovery. Experimental results show that, compared with classical algorithms, this method restores images with more natural colors and clearer texture details across multiple datasets.
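    As a rough illustration of the selective-kernel fusion idea described above (a toy simplification, not the paper's implementation; function and variable names are hypothetical): per-channel softmax weights decide how much each scale branch contributes to the fused feature.

```python
import math

def skcf_fuse(branch_a, branch_b):
    """Toy selective kernel channel fusion: for each channel, a softmax
    over the two branches' channel descriptors yields fusion weights,
    and the output is their weighted sum."""
    fused = []
    for a, b in zip(branch_a, branch_b):
        ea, eb = math.exp(a), math.exp(b)
        wa = ea / (ea + eb)          # softmax weight for branch A
        fused.append(wa * a + (1.0 - wa) * b)
    return fused
```

In the actual module the descriptors would come from global pooling of multi-scale convolution outputs; here scalars stand in for per-channel descriptors to show only the soft channel-wise selection between branches.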


Figures(11)

Tables(3)
