LECARM: Low-Light Image Enhancement Using the Camera Response Model

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 29, No. 4, pp. 968-981
Main Authors: Ren, Yurui; Ying, Zhenqiang; Li, Thomas H.; Li, Ge
Format: Journal Article
Language: English
Published: New York: IEEE, 01.04.2019 (The Institute of Electrical and Electronics Engineers, Inc.)
ISSN: 1051-8215
EISSN: 1558-2205
DOI: 10.1109/TCSVT.2018.2828141

Abstract: Low-light image enhancement algorithms can improve the visual quality of low-light images and support the extraction of valuable information for some computer vision techniques. However, existing techniques inevitably introduce color and lightness distortions when enhancing the images. To reduce these distortions, we propose a novel enhancement framework using the response characteristics of cameras. First, we discuss how to determine a reasonable camera response model and its parameters. Then, we use illumination estimation techniques to estimate the exposure ratio for each pixel. Finally, the selected camera response model is used to adjust each pixel to the desired exposure according to the estimated exposure ratio map. Experiments show that our method obtains enhancement results with fewer color and lightness distortions than several state-of-the-art methods.
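The abstract's three-step pipeline (select a camera response model, estimate a per-pixel exposure ratio, apply the model to reach the desired exposure) can be sketched in a few lines. The sketch below is illustrative, not the paper's implementation: it uses the beta-gamma brightness transform function with the fixed parameters (a = -0.3293, b = 1.1258) reported in the authors' related exposure-fusion work, and a crude max-RGB illumination estimate in place of the paper's illumination estimation techniques; the function names and the ratio cap are hypothetical.

```python
import numpy as np

# Beta-gamma brightness transform function (BTF) parameters; these are the
# defaults reported in related work by the same authors (Ying et al., 2017),
# used here purely as illustrative values.
A, B = -0.3293, 1.1258

def btf(p, k):
    """Map pixel values p (in [0,1]) to a virtual exposure k times longer.
    With k = 1 this reduces to the identity mapping."""
    beta = np.exp((1.0 - k ** A) * B)
    gamma = k ** A
    return beta * (p ** gamma)

def enhance(img, ratio_cap=7.0, eps=1e-3):
    """Per-pixel low-light enhancement sketch: estimate illumination with the
    max-RGB channel, derive an exposure ratio map, push pixels through the BTF."""
    illum = img.max(axis=2)               # crude illumination estimate per pixel
    k = 1.0 / np.maximum(illum, eps)      # exposure ratio: darker -> larger k
    k = np.clip(k, 1.0, ratio_cap)        # cap to avoid over-amplifying noise
    return np.clip(btf(img, k[..., None]), 0.0, 1.0)
```

Well-exposed pixels (illumination near 1) get an exposure ratio of 1 and pass through unchanged, so only the dark regions are brightened; the paper's actual exposure-ratio map comes from a dedicated illumination estimator rather than this max-RGB shortcut.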
Authors:
1. Ren, Yurui (ORCID: 0000-0003-0178-4460), Peking University Shenzhen Graduate School, Shenzhen, China
2. Ying, Zhenqiang, Peking University Shenzhen Graduate School, Shenzhen, China
3. Li, Thomas H., Gpower Semiconductor Inc., Suzhou, China
4. Li, Ge (ORCID: 0000-0003-0140-0949; email: gli@pkusz.edu.cn), Peking University Shenzhen Graduate School, Shenzhen, China
CODEN: ITCTEM
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019
Discipline: Engineering
Genre: Original research
Peer reviewed: Yes
Scholarly: Yes
License: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-c295t-cb156cd5c1cf6f4f3bd212def34701faa9ccc5daf22215aab20f0901974b2b483
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
Page count: 14
Publication date: 2019-04-01
Publication place: New York
Publication title: IEEE Transactions on Circuits and Systems for Video Technology (TCSVT)
Publisher: IEEE (The Institute of Electrical and Electronics Engineers, Inc.)
Subjects: Algorithms; brightness transform function; Camera response function; Cameras; Color; Computer vision; contrast enhancement; Distortion; Exposure; Histograms; Image color analysis; Image enhancement; Image quality; Light; Lighting; low-light image enhancement; Nonlinear distortion; Parameter estimation; Pixels
URI: https://ieeexplore.ieee.org/document/8340778
https://www.proquest.com/docview/2203399549