Context-Aware Human Trajectories Prediction via Latent Variational Model

Bibliographic Details
Published in IEEE transactions on circuits and systems for video technology Vol. 31; no. 5; pp. 1876 - 1889
Main Authors Diaz Berenguer, Abel, Alioscha-Perez, Mitchel, Oveneke, Meshia Cedric, Sahli, Hichem
Format Journal Article
Language English
Published New York IEEE 01.05.2021
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects
ISSN 1051-8215
1558-2205
DOI 10.1109/TCSVT.2020.3014869

Abstract Understanding human-contextual interaction to predict human trajectories is a challenging problem. Most previous trajectory prediction approaches focused on modeling human-human interaction within a near neighborhood and neglected the influence of individuals farther away in the scene, as well as the scene layout. To alleviate these limitations, in this article we address pedestrian trajectory prediction with a latent variable model that is aware of human-contextual interaction. Our proposal relies on contextual information that influences the trajectories of pedestrians to encode human-contextual interaction. We model the uncertainty about future trajectories via a latent variational model and capture the relative interpersonal influences among all subjects within the scene, together with their interaction with the scene layout, to decode their trajectories. In extensive experiments on publicly available datasets, we show that, using contextual information and the latent variational model, our trajectory prediction model achieves competitive results compared to state-of-the-art models.
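The abstract describes a conditional latent variational encoder-decoder pattern: contextual information about the other pedestrians and the scene layout conditions a latent variable that captures uncertainty over future trajectories, and a decoder maps latent samples to predicted positions. The following minimal PyTorch sketch illustrates that general pattern only; the module names, dimensions, and the single pooled context vector are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a context-conditioned variational trajectory predictor.
# All names, dimensions, and the pooled context vector are assumptions made
# for illustration; they do not reproduce the paper's architecture.
import torch
import torch.nn as nn

class TrajectoryCVAE(nn.Module):
    def __init__(self, obs_dim=2, ctx_dim=16, hid_dim=64, z_dim=32):
        super().__init__()
        # Encode observed positions together with a context feature that
        # (hypothetically) summarizes other pedestrians and the scene layout.
        self.encoder = nn.LSTM(obs_dim + ctx_dim, hid_dim, batch_first=True)
        self.to_mu = nn.Linear(hid_dim, z_dim)
        self.to_logvar = nn.Linear(hid_dim, z_dim)
        # Decode future positions conditioned on the latent sample and context.
        self.decoder = nn.LSTM(z_dim + ctx_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, obs_dim)

    def forward(self, obs_traj, context, pred_len=12):
        # obs_traj: (B, T_obs, 2) past positions; context: (B, ctx_dim).
        ctx_seq = context.unsqueeze(1).expand(-1, obs_traj.size(1), -1)
        _, (h, _) = self.encoder(torch.cat([obs_traj, ctx_seq], dim=-1))
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        # Reparameterization trick: z ~ N(mu, sigma^2).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        dec_in = torch.cat([z, context], dim=-1)
        dec_in = dec_in.unsqueeze(1).repeat(1, pred_len, 1)
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out), mu, logvar  # (B, pred_len, 2) plus latent stats
```

Training such a model would typically minimize an ELBO-style objective, i.e. a reconstruction loss on the predicted positions plus a KL term between N(mu, sigma^2) and a prior such as N(0, I); at test time, multiple plausible futures can be obtained by drawing different samples of z.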
Author Alioscha-Perez, Mitchel
Oveneke, Meshia Cedric
Sahli, Hichem
Diaz Berenguer, Abel
Author_xml – sequence: 1
  givenname: Abel
  orcidid: 0000-0003-4970-6517
  surname: Diaz Berenguer
  fullname: Diaz Berenguer, Abel
  email: aberengu@etrovub.be
  organization: Electronics and Informatics Department (ETRO), VUB-NPU Joint Audio-Visual Signal Processing (AVSP) Research Laboratory, Vrije Universiteit Brussel (VUB), Brussels, Belgium
– sequence: 2
  givenname: Mitchel
  orcidid: 0000-0002-8488-5824
  surname: Alioscha-Perez
  fullname: Alioscha-Perez, Mitchel
  email: maperezg@etrovub.be
  organization: Electronics and Informatics Department (ETRO), VUB-NPU Joint Audio-Visual Signal Processing (AVSP) Research Laboratory, Vrije Universiteit Brussel (VUB), Brussels, Belgium
– sequence: 3
  givenname: Meshia Cedric
  orcidid: 0000-0003-4076-4614
  surname: Oveneke
  fullname: Oveneke, Meshia Cedric
  email: mcovenek@etrovub.be
  organization: Electronics and Informatics Department (ETRO), VUB-NPU Joint Audio-Visual Signal Processing (AVSP) Research Laboratory, Vrije Universiteit Brussel (VUB), Brussels, Belgium
– sequence: 4
  givenname: Hichem
  orcidid: 0000-0002-1774-2970
  surname: Sahli
  fullname: Sahli, Hichem
  email: hsahli@etrovub.be
  organization: Electronics and Informatics Department (ETRO), VUB-NPU Joint Audio-Visual Signal Processing (AVSP) Research Laboratory, Vrije Universiteit Brussel (VUB), Brussels, Belgium
CODEN ITCTEM
CitedBy_id crossref_primary_10_1109_TCSVT_2021_3076078
crossref_primary_10_1007_s10489_022_03524_1
crossref_primary_10_1109_TITS_2023_3274777
crossref_primary_10_1016_j_measurement_2023_112675
crossref_primary_10_1016_j_patcog_2023_109633
crossref_primary_10_1109_TCSVT_2023_3298755
crossref_primary_10_1007_s00521_022_07562_1
crossref_primary_10_1109_TITS_2023_3345296
crossref_primary_10_1016_j_patcog_2022_108552
crossref_primary_10_3390_electronics12030611
crossref_primary_10_3390_act11080237
crossref_primary_10_1109_TCSVT_2023_3314895
crossref_primary_10_1109_TCSVT_2024_3400391
crossref_primary_10_1109_TCSVT_2022_3229694
crossref_primary_10_1109_TCSVT_2024_3493966
crossref_primary_10_1109_TITS_2024_3509954
crossref_primary_10_1109_TCSVT_2023_3307442
crossref_primary_10_1109_TITS_2023_3342040
crossref_primary_10_1109_TCSVT_2023_3324868
crossref_primary_10_1109_TCSVT_2024_3439128
crossref_primary_10_1007_s11277_023_10753_1
crossref_primary_10_3390_act11090247
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
DOI 10.1109/TCSVT.2020.3014869
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
DatabaseTitle CrossRef
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
DatabaseTitleList
Technology Research Database
Discipline Engineering
EISSN 1558-2205
EndPage 1889
ExternalDocumentID 10_1109_TCSVT_2020_3014869
9160982
Genre orig-research
GrantInformation_xml – fundername: VUB-IRMO Joint Ph.D. Grant
– fundername: INNOVIRIS Project ADVISE–Anomaly Detection in Video Security Footage
  funderid: 10.13039/501100004744
– fundername: Flemish Government (AI Research Program)
ISSN 1051-8215
IsPeerReviewed true
IsScholarly true
Issue 5
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0002-8488-5824
0000-0003-4076-4614
0000-0003-4970-6517
0000-0002-1774-2970
PQID 2522215269
PQPubID 85433
PageCount 14
PublicationCentury 2000
PublicationDate 2021-05-01
PublicationDecade 2020
PublicationPlace New York
PublicationTitle IEEE transactions on circuits and systems for video technology
PublicationTitleAbbrev TCSVT
PublicationYear 2021
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 1876
SubjectTerms Attention mechanisms
Computational modeling
Context modeling
convolutional neural networks
Decoding
human trajectory prediction
Layouts
Pedestrians
Prediction models
Predictive models
Proposals
recurrent neural networks
Stochastic processes
Trajectory
variational model
Title Context-Aware Human Trajectories Prediction via Latent Variational Model
URI https://ieeexplore.ieee.org/document/9160982
https://www.proquest.com/docview/2522215269
Volume 31