Limited Data, Unlimited Potential: A Study on ViTs Augmented by Masked Autoencoders
Main Authors | Das, Srijan; Jain, Tanmay; Reilly, Dominick; Balaji, Pranav; Karmakar, Soumyajit; Marjit, Shyam; Li, Xiang; Das, Abhijit; Ryoo, Michael S |
---|---|
Format | Journal Article (arXiv preprint) |
Language | English |
Published | 31.10.2023 |
Subjects | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition |
Online Access | https://arxiv.org/abs/2310.20704 |
DOI | 10.48550/arxiv.2310.20704 |
License | http://creativecommons.org/licenses/by/4.0 |
Source | arXiv.org (Open Access Repository) |
Abstract

Vision Transformers (ViTs) have become ubiquitous in computer vision. Despite their success, ViTs lack inductive biases, which can make it difficult to train them with limited data. To address this challenge, prior studies suggest training ViTs with self-supervised learning (SSL) and then fine-tuning them sequentially. However, we observe that jointly optimizing ViTs for the primary task and a Self-Supervised Auxiliary Task (SSAT) is surprisingly beneficial when the amount of training data is limited. We explore the SSL tasks that are appropriate to optimize alongside the primary task, the training schemes for these tasks, and the data scale at which they are most effective. Our findings reveal that SSAT is a powerful technique that enables ViTs to leverage the unique characteristics of both the self-supervised and primary tasks, achieving better performance than the typical approach of pre-training ViTs with SSL and fine-tuning them sequentially. Our experiments, conducted on 10 datasets, demonstrate that SSAT significantly improves ViT performance while reducing the carbon footprint. We also confirm the effectiveness of SSAT in the video domain for deepfake detection, showcasing its generalizability. Our code is available at https://github.com/dominickrei/Limited-data-vits.
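The abstract contrasts two-stage SSL pre-training followed by fine-tuning with jointly optimizing the primary task and a self-supervised auxiliary task in every training step. The sketch below is a minimal, illustrative rendering of that joint-optimization idea in PyTorch, not the authors' implementation (which is available at the GitHub link above): a toy ViT-style encoder carries both a classification head (primary task) and an MAE-style masked-patch reconstruction head (auxiliary task), and the two losses are summed before a single backward pass. The tiny architecture, the random masking scheme, and the loss weight `lambda_ssat` are assumptions chosen for brevity.

```python
# Minimal sketch of joint primary + self-supervised auxiliary training (SSAT-style).
# Illustrative only: the real model, masking strategy, and loss weighting may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyViTWithSSAT(nn.Module):
    """Toy transformer encoder with a class head and a patch-reconstruction head."""

    def __init__(self, img_size=32, patch=4, dim=64, depth=2, heads=4, num_classes=10):
        super().__init__()
        self.patch = patch
        self.num_patches = (img_size // patch) ** 2
        self.embed = nn.Linear(patch * patch * 3, dim)
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.cls_head = nn.Linear(dim, num_classes)          # primary (classification) task
        self.rec_head = nn.Linear(dim, patch * patch * 3)    # auxiliary (MAE-style) task
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))

    def patchify(self, x):
        # (b, c, h, w) -> (b, num_patches, patch*patch*c)
        b, c, h, w = x.shape
        p = self.patch
        x = x.unfold(2, p, p).unfold(3, p, p)
        return x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)

    def forward(self, x, mask_ratio=0.5):
        patches = self.patchify(x)
        tokens = self.embed(patches) + self.pos
        # Randomly mask a subset of patch tokens for the reconstruction objective.
        mask = torch.rand(tokens.shape[:2], device=x.device) < mask_ratio
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token, tokens)
        feats = self.encoder(tokens)
        logits = self.cls_head(feats.mean(dim=1))             # pooled features -> classes
        recon = self.rec_head(feats)                          # per-token pixel prediction
        return logits, recon, patches, mask


def joint_step(model, optimizer, images, labels, lambda_ssat=1.0):
    """One training step: primary loss and auxiliary loss optimized together."""
    logits, recon, target, mask = model(images)
    loss_primary = F.cross_entropy(logits, labels)
    # Reconstruction loss is computed only on the masked patches.
    loss_ssat = F.mse_loss(recon[mask], target[mask])
    loss = loss_primary + lambda_ssat * loss_ssat
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a training loop, `joint_step` would be called on each labeled batch; at inference time only the classification head is needed, so the auxiliary reconstruction head adds no test-time cost.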