Reconstruction by inpainting for visual anomaly detection

Bibliographic Details
Published in: Pattern Recognition, Vol. 112, p. 107706
Main Authors: Zavrtanik, Vitjan; Kristan, Matej; Skočaj, Danijel
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.04.2021
ISSN: 0031-3203
EISSN: 1873-5142
DOI: 10.1016/j.patcog.2020.107706

Abstract
• A reconstruction-by-inpainting-based anomaly detection method (RIAD) is proposed.
• RIAD achieves state-of-the-art performance on anomaly detection and localization.
• We compare RIAD anomaly detection results with recent anomaly detection methods.
• The generality of RIAD is demonstrated by applying it to video anomaly detection.

Visual anomaly detection addresses the problem of classifying or localizing regions in an image that deviate from their normal appearance. A popular approach trains an auto-encoder on anomaly-free images and detects anomalies by computing the difference between the input and the reconstructed image. This approach assumes that the auto-encoder will be unable to accurately reconstruct anomalous regions. In practice, however, neural networks generalize well even to anomalies and reconstruct them sufficiently well, which reduces detection capability. Accurate reconstruction is far less likely if the anomalous pixels were not visible to the auto-encoder. We therefore cast anomaly detection as a self-supervised reconstruction-by-inpainting problem. Our approach (RIAD) randomly removes partial image regions and reconstructs the image from the partial inpaintings, addressing the drawbacks of auto-encoding methods. RIAD is extensively evaluated on several benchmarks and sets a new state of the art on a recent, highly challenging anomaly detection benchmark.
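
To make the reconstruction-by-inpainting idea concrete, the sketch below shows one way such an anomaly map could be computed at inference time. It is a minimal illustration rather than the paper's implementation: the inpainting network `net`, the grid size `patch`, and the number of disjoint mask sets `n_disjoint` are assumed placeholders, and a simple smoothed squared-error map stands in for whatever reconstruction-quality measure the published method uses.

```python
import torch
import torch.nn.functional as F

def riad_style_anomaly_map(net, image, patch=16, n_disjoint=4):
    """Illustrative reconstruction-by-inpainting anomaly map (sketch).

    `net` is assumed to be an inpainting network trained only on anomaly-free
    images; `image` is a (1, C, H, W) float tensor. `patch` and `n_disjoint`
    are hypothetical hyper-parameters controlling the masking grid.
    """
    _, _, H, W = image.shape
    assert H % patch == 0 and W % patch == 0, "sketch assumes H, W divisible by patch"
    gh, gw = H // patch, W // patch

    # Randomly assign every grid cell to one of n_disjoint mask sets, so each
    # pixel is hidden from the network in exactly one inpainting pass.
    assignment = torch.randint(0, n_disjoint, (gh, gw))
    reconstruction = torch.zeros_like(image)

    for k in range(n_disjoint):
        # Upsample the cell assignment to a per-pixel mask for set k.
        cell_mask = (assignment == k).float()
        mask = cell_mask.repeat_interleave(patch, 0).repeat_interleave(patch, 1)
        mask = mask.view(1, 1, H, W)

        # Hide the masked regions and let the network inpaint them.
        with torch.no_grad():
            inpainted = net(image * (1.0 - mask))

        # Keep only the pixels that were actually hidden in this pass.
        reconstruction = reconstruction + inpainted * mask

    # Per-pixel anomaly score: regions the network could not inpaint from
    # their normal-looking surroundings end up with a large error.
    err = ((image - reconstruction) ** 2).mean(dim=1, keepdim=True)
    return F.avg_pool2d(err, kernel_size=21, stride=1, padding=10)
```

The same masking scheme can drive self-supervised training on anomaly-free images, so that at test time the network can only reproduce normal appearance; anomalous regions, never seen during training, reconstruct poorly and receive high scores.
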
Article Number: 107706
Author Emails: Vitjan Zavrtanik (vitjan.zavrtanik@fri.uni-lj.si); Matej Kristan (matej.kristan@fri.uni-lj.si); Danijel Skočaj (danijel.skocaj@fri.uni-lj.si)
Copyright: 2020
Discipline: Computer Science
Keywords: Anomaly detection; CNN; Inpainting; Video anomaly detection