MSTA-Net: Forgery Detection by Generating Manipulation Trace Based on Multi-Scale Self-Texture Attention
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, No. 7, pp. 4854 - 4866 |
Main Authors | Yang, Jiachen; Xiao, Shuai; Li, Aiyun; Lu, Wen; Gao, Xinbo; Li, Yang |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.07.2022 |
Abstract | Many Deepfake videos circulate on the Internet. They not only damage the personal rights of the individuals whose faces are forged but also pollute the web environment; at worst, they may sway public opinion and endanger national security. Fighting deep forgery is therefore urgent. Most current forgery detection algorithms use convolutional neural networks to learn, from big data, the feature differences between forged and real frames. In this paper, from the perspective of image generation, we simulate the forgery process and explore the traces it may leave. We propose a multi-scale self-texture attention generative network (MSTA-Net) to track the potential texture traces left by the image generation process and to eliminate the interference of deep-forgery post-processing. First, a generator with an encoder-decoder structure disassembles images and generates trace images; each generated trace image is then merged with the original image and fed into a classifier with ResNet as the backbone. Second, we propose a self-texture attention mechanism (STA) as the skip connection between the encoder and the decoder; it significantly enhances texture characteristics during image disassembly and assists the generation of texture traces. Finally, we propose a loss function, Prob-tuple loss, restricted by classification probability to amend the generation of forgery traces directly. To evaluate MSTA-Net, we design experiments that verify the feasibility and advancement of the method. Experimental results show that the proposed method performs well on deep-forgery databases such as FaceForensics++, Celeb-DF, DeeperForensics and DFDC, with some results reaching the state of the art. |
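The detection pipeline the abstract describes (generate a trace image, merge it with the original, classify the merged input) can be sketched roughly as follows. This is a minimal illustrative sketch only: the channel-wise merge, the toy `texture_attention` weighting, and the stand-in generator and classifier are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def texture_attention(features):
    """Toy self-attention over feature vectors, a hypothetical stand-in
    for the paper's self-texture attention (STA): attention weights are
    computed from the features themselves and used to re-weight them."""
    d = features.shape[1]
    scores = features @ features.T / np.sqrt(d)          # (N, N) similarities
    scores -= scores.max(axis=1, keepdims=True)          # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)        # row-wise softmax
    return weights @ features                            # attended features, same shape

def detect(image, generate_trace, classify):
    """Pipeline sketch: produce a trace image, merge it channel-wise with
    the original, and hand the merged tensor to the classifier."""
    trace = generate_trace(image)                        # encoder-decoder stand-in
    merged = np.concatenate([image, trace], axis=-1)     # assumed channel-wise merge
    return classify(merged)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((8, 8, 3))
    # Hypothetical stand-ins for the learned generator and ResNet classifier.
    toy_generator = lambda x: np.abs(x - x.mean())
    toy_classifier = lambda m: float(m.mean() > 0.25)
    print(detect(img, toy_generator, toy_classifier))
```

The stand-in functions only fix shapes and data flow; in the paper both the trace generator and the classifier are learned networks, and STA operates on multi-scale texture features inside the encoder-decoder skip connections.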
Author_xml |
– sequence: 1; Yang, Jiachen; email: yangjiachen@tju.edu.cn; ORCID: 0000-0003-2558-552X; School of Electronic Information Engineering and the School of Electrical and Information Engineering, Tianjin University, Tianjin, China
– sequence: 2; Xiao, Shuai; email: xs611@tju.edu.cn; ORCID: 0000-0003-4058-8120; School of Electrical and Information Engineering, Tianjin University, Tianjin, China
– sequence: 3; Li, Aiyun; email: liaiyun@tju.edu.cn; School of Electrical and Information Engineering, Tianjin University, Tianjin, China
– sequence: 4; Lu, Wen; email: luwen@xidian.edu.cn; School of Electronic Engineering, Xidian University, Xi'an, China
– sequence: 5; Gao, Xinbo; email: gaoxb@cqupt.edu.cn; ORCID: 0000-0002-7985-0037; Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
– sequence: 6; Li, Yang; email: liyang328@shzu.edu.cn; ORCID: 0000-0002-4268-4004; College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, Xinjiang, China |
BookMark | eNp9kE1PwzAMhiMEEmzwB-ASiXNHnI825TYGG0gbHFa4VmnqjqLSjjSV2L-n-xAHDpxsy378Ss-AHNdNjYRcAhsBsPgmmSzfkhFnHEYChNAqPiJnoJQOOGfquO-ZgkBzUKdk0LYfjIHUMjoj74tlMg6e0d_SaeNW6Db0Hj1aXzY1zTZ0hjU648t6RRemLtddZXarxBmL9M60mNN-XHSVL4OlNRXSJVZFkOC37xzSsfdYb4lzclKYqsWLQx2S1-lDMnkM5i-zp8l4HlgeKx_kmTAq4hAyKyMUgmMoozy3RoMurJASpCoya1gIUADkBc8gyyKegxRZWGgxJNf7v2vXfHXY-vSj6VzdR6Y81DyGUDLeX_H9lXVN2zos0rUrP43bpMDSrdF0ZzTdGk0PRntI_4Fs6Xc6vDNl9T96tUdLRPzNikMpJAfxA9KVheU |
CODEN | ITCTEM |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
DOI | 10.1109/TCSVT.2021.3133859 |
Discipline | Engineering |
EISSN | 1558-2205 |
EndPage | 4866 |
Genre | orig-research |
GrantInformation | National Natural Science Foundation of China, Grant 61871283 (funder ID: 10.13039/501100001809) |
ISSN | 1051-8215 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 7 |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
PageCount | 13 |
PublicationTitleAbbrev | TCSVT |
StartPage | 4854 |
SubjectTerms | Algorithms; Artificial neural networks; Coders; Dismantling; Encoders-Decoders; Faces; faceswap detection; Feature extraction; Forgery; Generators; Image processing; Information integrity; prob-tuple loss; self-texture attention; Texture; Trace generation; Videos
URI | https://ieeexplore.ieee.org/document/9643421 https://www.proquest.com/docview/2682916402 |
Volume | 32 |