Wireless Image Transmission Using Deep Source Channel Coding With Attention Modules
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, No. 4, pp. 2315-2328
Main Authors | Xu, Jialong; Ai, Bo; Chen, Wei; Yang, Ang; Sun, Peng; Rodrigues, Miguel
Format | Journal Article
Language | English
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.04.2022
Subjects | attention mechanism; Channel coding; Coding; Compression ratio; Decoding; deep learning; deep neural network; Image coding; Image transmission; Joint source channel coding; Resource allocation; Signal to noise ratio; Source coding; Training; Transform coding; Wireless communication; Wireless communications
Online Access | Get full text
Abstract | Recent research on joint source channel coding (JSCC) for wireless communications has achieved great success owing to the employment of deep learning (DL). However, the existing work on DL-based JSCC usually trains the designed network to operate under a specific signal-to-noise ratio (SNR) regime, without taking into account that the SNR level during the deployment stage may differ from that during the training stage. A number of networks are then required to cover a broad range of SNRs, which is computationally inefficient (in the training stage) and requires large storage. To overcome these drawbacks, this paper proposes a novel method called Attention DL-based JSCC (ADJSCC) that can successfully operate at different SNR levels during transmission. The design is inspired by the resource assignment strategy in traditional JSCC, which dynamically adjusts the compression ratio in source coding and the channel coding rate according to the channel SNR. This is achieved by resorting to attention mechanisms, because these are able to allocate computing resources to more critical tasks. Instead of applying the resource allocation strategy of traditional JSCC, ADJSCC uses channel-wise soft attention to scale features according to the SNR conditions. We compare the ADJSCC method with the state-of-the-art DL-based JSCC method through extensive experiments to demonstrate its adaptability, robustness, and versatility. Compared with the existing methods, the proposed method requires less storage and is more robust in the presence of channel mismatch.
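As a rough illustration of the mechanism described in the abstract, the sketch below shows a generic SNR-conditioned, channel-wise soft attention block: encoder feature maps are globally pooled, concatenated with the channel SNR, and passed through a small network that predicts per-channel scaling weights. This is a minimal sketch under assumed settings (PyTorch, the module name SNRAttention, and the layer sizes are illustrative), not the authors' ADJSCC implementation.

```python
# Illustrative sketch only: an SNR-conditioned, channel-wise soft
# attention block in the spirit of the idea described in the abstract.
# Framework choice, names, and dimensions are assumptions.
import torch
import torch.nn as nn


class SNRAttention(nn.Module):
    """Scale feature-map channels with weights predicted from the
    globally pooled features concatenated with the channel SNR (dB)."""

    def __init__(self, num_channels: int, hidden: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(num_channels + 1, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, num_channels),
            nn.Sigmoid(),  # soft attention weights in (0, 1)
        )

    def forward(self, features: torch.Tensor, snr_db: torch.Tensor) -> torch.Tensor:
        # features: (batch, channels, height, width); snr_db: (batch, 1)
        context = features.mean(dim=(2, 3))              # global average pooling
        weights = self.fc(torch.cat([context, snr_db], dim=1))
        return features * weights.unsqueeze(-1).unsqueeze(-1)


if __name__ == "__main__":
    block = SNRAttention(num_channels=64)
    x = torch.randn(4, 64, 8, 8)                         # dummy encoder features
    snr = torch.full((4, 1), 10.0)                       # assume a 10 dB channel SNR
    print(block(x, snr).shape)                           # torch.Size([4, 64, 8, 8])
```

Because the SNR enters as an input to the attention weights rather than being fixed at training time, a single model of this form can, in principle, adapt its feature scaling across a range of channel conditions, which is the storage and robustness benefit the abstract claims for ADJSCC.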
Author | Chen, Wei; Rodrigues, Miguel; Sun, Peng; Yang, Ang; Ai, Bo; Xu, Jialong
Author_xml | – 1: Xu, Jialong; ORCID 0000-0003-3669-9274; jialongxu@bjtu.edu.cn; State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
– 2: Ai, Bo; ORCID 0000-0001-6850-0595; boai@bjtu.edu.cn; State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
– 3: Chen, Wei; ORCID 0000-0001-5090-9915; weich@bjtu.edu.cn; State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
– 4: Yang, Ang; ORCID 0000-0001-8107-4543; ang.yang@vivo.com; Vivo Communication Research Institute, Beijing, China
– 5: Sun, Peng; ORCID 0000-0002-6095-5461; sunpeng@vivo.com; Vivo Communication Research Institute, Beijing, China
– 6: Rodrigues, Miguel; ORCID 0000-0002-8908-847X; m.rodrigues@ucl.ac.uk; Department of Electronic and Electrical Engineering, University College London, London, U.K.
CODEN | ITCTEM |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
DOI | 10.1109/TCSVT.2021.3082521 |
DatabaseName | IEEE All-Society Periodicals Package (ASPP) 2005–Present; IEEE All-Society Periodicals Package (ASPP) 1998–Present; IEEE Electronic Library (IEL) - NZ; CrossRef; Computer and Information Systems Abstracts; Electronics & Communications Abstracts; Technology Research Database; ProQuest Computer Science Collection; Advanced Technologies Database with Aerospace; Computer and Information Systems Abstracts Academic; Computer and Information Systems Abstracts Professional
DatabaseTitle | CrossRef; Technology Research Database; Computer and Information Systems Abstracts – Academic; Electronics & Communications Abstracts; ProQuest Computer Science Collection; Computer and Information Systems Abstracts; Advanced Technologies Database with Aerospace; Computer and Information Systems Abstracts Professional
DatabaseTitleList | Technology Research Database |
Database_xml | – sequence: 1; dbid: RIE; name: IEEE/IET Electronic Library; url: https://ieeexplore.ieee.org/; sourceTypes: Publisher
DeliveryMethod | fulltext_linktorsrc |
Discipline | Engineering |
EISSN | 1558-2205 |
EndPage | 2328 |
ExternalDocumentID | 10_1109_TCSVT_2021_3082521 9438648 |
Genre | orig-research |
GrantInformation_xml | – fundername: National Natural Science Foundation of China grantid: 61911530216; 6196113039; U1834210 funderid: 10.13039/501100001809 – fundername: NSFC Outstanding Youth Foundation grantid: 61725101 funderid: 10.13039/501100001809 – fundername: National Key Research and Development Program of China grantid: 2018YFE0207600; 2020YFB1807201 funderid: 10.13039/501100012166 – fundername: State Key Laboratory of Rail Traffic Control and Safety through Beijing Jiaotong University grantid: RCS2021ZZ004; RCS2020ZT010 funderid: 10.13039/501100005023 – fundername: Vivo Research Grant – fundername: Beijing Natural Science Foundation grantid: L202019 funderid: 10.13039/501100004826 – fundername: Royal Society Newton Advanced Fellowship grantid: NA191006 funderid: 10.13039/501100000288 – fundername: Key-Area Research and Development Program of Guangdong Province grantid: 2019B010157002 |
ISSN | 1051-8215 |
IngestDate | Mon Jun 30 02:28:07 EDT 2025 Thu Apr 24 23:08:23 EDT 2025 Tue Jul 01 00:41:15 EDT 2025 Wed Aug 27 02:40:50 EDT 2025 |
IsDoiOpenAccess | false |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 4 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
LinkModel | DirectLink |
ORCID | 0000-0002-8908-847X 0000-0001-5090-9915 0000-0003-3669-9274 0000-0001-8107-4543 0000-0002-6095-5461 0000-0001-6850-0595 |
PQID | 2647425637 |
PQPubID | 85433 |
PageCount | 14 |
ParticipantIDs | ieee_primary_9438648 proquest_journals_2647425637 crossref_primary_10_1109_TCSVT_2021_3082521 crossref_citationtrail_10_1109_TCSVT_2021_3082521 |
PublicationCentury | 2000 |
PublicationDate | 2022-04-01 |
PublicationDateYYYYMMDD | 2022-04-01 |
PublicationDate_xml | – month: 04 year: 2022 text: 2022-04-01 day: 01 |
PublicationDecade | 2020 |
PublicationPlace | New York |
PublicationPlace_xml | – name: New York |
PublicationTitle | IEEE transactions on circuits and systems for video technology |
PublicationTitleAbbrev | TCSVT |
PublicationYear | 2022 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
Publisher_xml | – name: IEEE – name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
SSID | ssj0014847 |
SourceID | proquest crossref ieee |
SourceType | Aggregation Database Enrichment Source Index Database Publisher |
StartPage | 2315 |
SubjectTerms | attention mechanism; Channel coding; Coding; Compression ratio; Decoding; deep learning; deep neural network; Image coding; Image transmission; Joint source channel coding; Resource allocation; Signal to noise ratio; Source coding; Training; Transform coding; Wireless communication; Wireless communications
Title | Wireless Image Transmission Using Deep Source Channel Coding With Attention Modules |
URI | https://ieeexplore.ieee.org/document/9438648 https://www.proquest.com/docview/2647425637 |
Volume | 32 |
hasFullText | 1 |
inHoldings | 1 |