Data Augmentation Using Random Image Cropping and Patching for Deep CNNs
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 30, No. 9, pp. 2917-2931 |
Main Authors | Takahashi, Ryo; Matsubara, Takashi; Uehara, Kuniaki |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.09.2020 |
Subjects | |
Abstract | Deep convolutional neural networks (CNNs) have achieved remarkable results in image processing tasks. However, their high expression ability risks overfitting. Consequently, data augmentation techniques have been proposed to prevent overfitting while enriching datasets. Recent CNN architectures with more parameters are rendering traditional data augmentation techniques insufficient. In this study, we propose a new data augmentation technique called random image cropping and patching (RICAP), which randomly crops four images and patches them to create a new training image. Moreover, RICAP mixes the class labels of the four images, resulting in an advantage of the soft labels. We evaluated RICAP with current state-of-the-art CNNs (e.g., the shake-shake regularization model) by comparison with competitive data augmentation techniques such as cutout and mixup. RICAP achieves a new state-of-the-art test error of 2.19% on CIFAR-10. We also confirmed that deep CNNs with RICAP achieve better results on classification tasks using CIFAR-100 and ImageNet, an image-caption retrieval task using Microsoft COCO, and other computer vision tasks. |
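The cropping-and-patching operation summarized in the abstract can be sketched in NumPy as follows. This is a minimal illustration of the idea rather than the authors' implementation: the function name, the `beta` parameter controlling the boundary draw, and the H x W x C array convention are assumptions for this sketch.

```python
import numpy as np

def ricap(images, labels, num_classes, beta=0.3, rng=None):
    """Patch four images into one training image with an area-weighted soft label.

    images: list of four H x W x C arrays of identical shape
    labels: list of four integer class labels
    """
    rng = rng or np.random.default_rng()
    H, W = images[0].shape[:2]

    # Draw the boundary position; a Beta(beta, beta) draw biases it
    # toward the image borders for small beta.
    w = int(round(W * rng.beta(beta, beta)))
    h = int(round(H * rng.beta(beta, beta)))
    sizes = [(w, h), (W - w, h), (w, H - h), (W - w, H - h)]

    patches, areas = [], []
    for img, (pw, ph) in zip(images, sizes):
        # Random crop of the required patch size from each source image.
        x = rng.integers(0, W - pw + 1)
        y = rng.integers(0, H - ph + 1)
        patches.append(img[y:y + ph, x:x + pw])
        areas.append(pw * ph)

    # Assemble the four patches into a single H x W image.
    top = np.concatenate([patches[0], patches[1]], axis=1)
    bottom = np.concatenate([patches[2], patches[3]], axis=1)
    new_image = np.concatenate([top, bottom], axis=0)

    # Soft label: mix the four class labels in proportion to patch area.
    soft = np.zeros(num_classes)
    for lab, a in zip(labels, areas):
        soft[lab] += a / (W * H)
    return new_image, soft
```

The four patch areas always sum to W*H, so the soft label is a valid probability distribution; in training, the usual cross-entropy loss is applied against this mixed target.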
Author | Uehara, Kuniaki; Matsubara, Takashi; Takahashi, Ryo |
Author details | 1. Takahashi, Ryo (ORCID: 0000-0003-0723-0119), takahashi@ai.cs.kobe-u.ac.jp, Graduate School of System Informatics, Kobe University, Kobe, Japan; 2. Matsubara, Takashi (ORCID: 0000-0003-0642-4800), matsubara@phoenix.kobe-u.ac.jp, Graduate School of System Informatics, Kobe University, Kobe, Japan; 3. Uehara, Kuniaki (ORCID: 0000-0002-7160-3752), uehara@kobe-u.ac.jp, Graduate School of System Informatics, Kobe University, Kobe, Japan |
CODEN | ITCTEM |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020 |
DOI | 10.1109/TCSVT.2019.2935128 |
Discipline | Engineering |
EISSN | 1558-2205 |
EndPage | 2931 |
Genre | orig-research |
GrantInformation | Strategic Information and Communications R&D Promotion Programme of Ministry of Internal Affairs and Communications, Japan (MIC/SCOPE), Grant 172107101 |
ISSN | 1051-8215 |
IsDoiOpenAccess | false |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 9 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0003-0723-0119 0000-0003-0642-4800 0000-0002-7160-3752 |
PageCount | 15 |
PublicationDate | 2020-09-01 |
PublicationPlace | New York |
PublicationTitle | IEEE Transactions on Circuits and Systems for Video Technology |
PublicationTitleAbbrev | TCSVT |
PublicationYear | 2020 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 2917 |
SubjectTerms | Artificial neural networks; Computer vision; Convolutional neural networks; Data augmentation; Image classification; Image color analysis; Image processing; Image-caption retrieval; Jitter; Labels; Patching; Principal component analysis; Regularization; Task analysis; Training |
Title | Data Augmentation Using Random Image Cropping and Patching for Deep CNNs |
URI | https://ieeexplore.ieee.org/document/8795523 https://www.proquest.com/docview/2441008400 |
Volume | 30 |