Learning to Transfer: Transferring Latent Task Structures and Its Application to Person-Specific Facial Action Unit Detection

Bibliographic Details
Published in Proceedings / IEEE International Conference on Computer Vision, pp. 3774 - 3782
Main Authors Almaev, Timur, Martinez, Brais, Valstar, Michel
Format Conference Proceeding / Journal Article
Language English
Published IEEE 01.12.2015
ISSN 2380-7504
DOI 10.1109/ICCV.2015.430

Abstract In this article we explore the problem of constructing person-specific models for the detection of facial Action Units (AUs), addressing the problem from the point of view of Transfer Learning and Multi-Task Learning. Our starting point is the fact that some expressions, such as smiles, are very easily elicited, annotated, and automatically detected, while others are much harder to elicit and to annotate. We thus consider a novel problem: all AU models for the target subject are to be learnt using person-specific annotated data for a reference AU (AU12 in our case), and no data or little data regarding the target AU. In order to design such a model, we propose a novel Multi-Task Learning and the associated Transfer Learning framework, in which we consider both relations across subjects and AUs. That is to say, we consider a tensor structure among the tasks. Our approach hinges on learning the latent relations among tasks using one single reference AU, and then transferring these latent relations to other AUs. We show that we are able to effectively make use of the annotated data for AU12 when learning other person-specific AU models, even in the absence of data for the target task. Finally, we show the excellent performance of our method when small amounts of annotated data for the target tasks are made available.
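The abstract describes the approach only at a high level: person-specific models for a well-annotated reference AU (AU12) are used to learn latent relations across subjects and tasks, which are then transferred to target AUs with little or no data. The sketch below is a loose, hypothetical illustration of that idea in plain NumPy with synthetic data, not the authors' algorithm; the ridge regression, SVD factorisation, dimensions, and all variable names are assumptions made for illustration only.

```python
# Hypothetical sketch (NOT the paper's method): learn person-specific linear
# detectors for the reference AU, factorise them into a shared latent basis plus
# per-subject codes, then reuse the latent basis to fit target-AU detectors from
# only a few labelled frames per subject. Data and dimensions are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_feats, n_frames, k = 10, 64, 200, 4  # k = assumed latent dimensionality

def ridge_fit(X, y, lam=1.0):
    """Linear detector via ridge regression (illustrative stand-in for the task models)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# 1) Person-specific models for the reference AU (AU12), where annotation is plentiful.
X_ref = [rng.normal(size=(n_frames, n_feats)) for _ in range(n_subjects)]
y_ref = [rng.integers(0, 2, n_frames).astype(float) for _ in range(n_subjects)]
W_ref = np.stack([ridge_fit(X, y) for X, y in zip(X_ref, y_ref)], axis=1)  # (d, P)

# 2) Factorise the stacked models: a shared latent basis B and per-subject codes S
#    that capture the relations among the subject-specific tasks.
U, s, Vt = np.linalg.svd(W_ref, full_matrices=False)
B = U[:, :k]                      # latent task structure shared across subjects
S = np.diag(s[:k]) @ Vt[:k, :]    # per-subject codes (the learnt "latent relations")

# 3) Target AU with only a handful of labelled frames per subject: constrain each
#    person-specific detector to the latent subspace learnt from the reference AU,
#    i.e. w_p = B @ c_p, so only a small code vector c_p is fitted per subject.
n_small = 15  # assumed: few annotated target-AU frames per subject
W_tgt = np.zeros((n_feats, n_subjects))
for p in range(n_subjects):
    Xs = rng.normal(size=(n_small, n_feats))
    ys = rng.integers(0, 2, n_small).astype(float)
    Z = Xs @ B                       # project features onto the transferred basis
    c_p = ridge_fit(Z, ys, lam=0.1)  # k coefficients instead of d full weights
    W_tgt[:, p] = B @ c_p

print(W_tgt.shape)  # one person-specific detector per subject for the target AU
```

The point of the sketch is only the transfer pattern: the expensive structure (the latent basis and subject relations) is estimated once from the easy-to-annotate reference AU, and each new AU then needs only a low-dimensional, per-subject fit.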
Author Martinez, Brais
Valstar, Michel
Almaev, Timur
Author_xml – sequence: 1
  givenname: Timur
  surname: Almaev
  fullname: Almaev, Timur
  email: psxta4@nottingham.ac.uk
  organization: Sch. of Comput. Sci., Univ. of Nottingham, Nottingham, UK
– sequence: 2
  givenname: Brais
  surname: Martinez
  fullname: Martinez, Brais
  email: Brais.Martinez@nottingham.ac.uk
  organization: Sch. of Comput. Sci., Univ. of Nottingham, Nottingham, UK
– sequence: 3
  givenname: Michel
  surname: Valstar
  fullname: Valstar, Michel
  email: Michel.Valstar@nottingham.ac.uk
  organization: Sch. of Comput. Sci., Univ. of Nottingham, Nottingham, UK
CODEN IEEPAD
ContentType Conference Proceeding
Journal Article
DOI 10.1109/ICCV.2015.430
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Proceedings Order Plan (POP) 1998-present by volume
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP) 1998-present
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
Discipline Applied Sciences
EISBN 1467383910
9781467383912
EISSN 2380-7504
EndPage 3782
ExternalDocumentID 7410787
Genre orig-research
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed false
IsScholarly true
Language English
LinkModel DirectLink
OpenAccessLink https://nottingham-repository.worktribe.com/output/769162
PQID 1816065758
PQPubID 23500
PageCount 9
PublicationCentury 2000
PublicationDate 20151201
PublicationDateYYYYMMDD 2015-12-01
PublicationDate_xml – month: 12
  year: 2015
  text: 20151201
  day: 01
PublicationDecade 2010
PublicationTitle Proceedings / IEEE International Conference on Computer Vision
PublicationTitleAbbrev ICCV
PublicationYear 2015
Publisher IEEE
Publisher_xml – name: IEEE
SourceID proquest
ieee
SourceType Aggregation Database
Publisher
StartPage 3774
SubjectTerms Computer vision
Conferences
Data models
Encoding
Face recognition
Facial
Facial muscles
Gold
Hinges
Learning
Mathematical analysis
Tasks
Tensors
Training
Title Learning to Transfer: Transferring Latent Task Structures and Its Application to Person-Specific Facial Action Unit Detection
URI https://ieeexplore.ieee.org/document/7410787
https://www.proquest.com/docview/1816065758