Development and testing of an image transformer for explainable autonomous driving systems

Bibliographic Details
Published in Journal of Intelligent and Connected Vehicles, Vol. 5, no. 3, pp. 235-249
Main Authors Dong, Jiqian, Chen, Sikai, Miralinaghi, Mohammad, Chen, Tiantian, Labi, Samuel
Format Journal Article
Language English
Published Bingley Emerald Publishing Limited 11.10.2022
Emerald Group Publishing Limited
Tsinghua University Press
Subjects
Online Access Get full text

Abstract Purpose Perception has been identified as the main cause underlying most autonomous vehicle-related accidents. As the key technology in perception, deep learning (DL)-based computer vision models are generally considered to be black boxes due to poor interpretability. These issues have exacerbated user distrust and further forestalled the widespread deployment of such models in practice. This paper aims to develop explainable DL models for autonomous driving by jointly predicting potential driving actions with corresponding explanations. The explainable DL models can not only boost user trust in autonomy but also serve as a diagnostic approach to identify any model deficiencies or limitations during the system development phase. Design/methodology/approach This paper proposes an explainable end-to-end autonomous driving system based on “Transformer,” a state-of-the-art self-attention (SA)-based model. The model maps visual features from images collected by onboard cameras to potential driving actions with corresponding explanations, and aims to achieve soft attention over the image’s global features. Findings The results demonstrate the efficacy of the proposed model: it outperforms the benchmark model by a significant margin in correctly predicting actions and explanations, at much lower computational cost, on a public data set (BDD-OIA). In the ablation studies, the proposed SA module also outperforms other attention mechanisms in feature fusion and can generate meaningful representations for downstream prediction. Originality/value In the contexts of situational awareness and driver assistance, the proposed model can perform as a driving alarm system for both human-driven vehicles and autonomous vehicles because it is capable of quickly understanding/characterizing the environment and identifying any infeasible driving actions.
In addition, the explanation head of the proposed model provides an extra channel for sanity checks, ensuring that the model learns the ideal causal relationships. This provision is critical in the development of autonomous systems.
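The soft self-attention over global image features that the abstract describes can be sketched in a few lines. The following is a minimal, illustrative NumPy sketch of single-head scaled dot-product self-attention over flattened image patch features; the dimensions, random weights, and mean pooling are assumptions for illustration only, not the paper's actual architecture or trained parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feats, d_k=16, seed=0):
    """Single-head scaled dot-product self-attention over N patch features.
    Weights are random stand-ins for learned projections."""
    rng = np.random.default_rng(seed)
    d = feats.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) / np.sqrt(d) for _ in range(3))
    q, k, v = feats @ Wq, feats @ Wk, feats @ Wv
    attn = softmax(q @ k.T / np.sqrt(d_k))  # (N, N): each row sums to 1
    return attn @ v, attn                   # fused features, attention weights

# 49 patch features of dimension 32, standing in for a flattened 7x7 CNN grid
feats = np.random.default_rng(1).standard_normal((49, 32))
fused, attn = self_attention(feats)

# A joint action/explanation predictor would pool the fused features and feed
# two separate linear heads; here we only pool to a global representation.
pooled = fused.mean(axis=0)
```

Because every row of `attn` is a probability distribution over patches, the weights can be inspected directly, which is the sense in which attention supports explanation.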
Author_xml – sequence: 1
  givenname: Jiqian
  surname: Dong
  fullname: Dong, Jiqian
  email: dong282@purdue.edu
– sequence: 2
  givenname: Sikai
  surname: Chen
  fullname: Chen, Sikai
  email: chen1670@purdue.edu
– sequence: 3
  givenname: Mohammad
  surname: Miralinaghi
  fullname: Miralinaghi, Mohammad
  email: smiralin@purdue.edu
– sequence: 4
  givenname: Tiantian
  surname: Chen
  fullname: Chen, Tiantian
  email: tt-nicole.chen@connect.polyu.hk
– sequence: 5
  givenname: Samuel
  surname: Labi
  fullname: Labi, Samuel
  email: labi@purdue.edu
ContentType Journal Article
Copyright Jiqian Dong, Sikai Chen, Mohammad Miralinaghi, Tiantian Chen and Samuel Labi. Published in Journal of Intelligent and Connected Vehicles. This work is published under http://creativecommons.org/licences/by/4.0/legalcode (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DOI 10.1108/JICV-06-2022-0021
Discipline Engineering
EISSN 2399-9802
EndPage 249
ISSN 2399-9802
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 3
Keywords Computer vision
Transformer
Autonomous driving
Explainable deep learning
Language English
License Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at https://www.emerald.com/insight/site-policies
LinkModel DirectLink
OpenAccessLink https://doaj.org/article/cbd40296506c4ae2b3b12c20bc3603a2
PageCount 15
PublicationCentury 2000
PublicationDate 2022-10-11
PublicationDecade 2020
PublicationPlace Bingley
PublicationTitle Journal of Intelligent and Connected Vehicles
PublicationYear 2022
Publisher Emerald Publishing Limited
Emerald Group Publishing Limited
Tsinghua University Press
Snippet Purpose Perception has been identified as the main cause underlying most autonomous vehicle related accidents. As the key technology in perception, deep...
StartPage 235
SubjectTerms Ablation
Alarm systems
Artificial intelligence
Attention
Automation
autonomous driving
Autonomy
Computer vision
Connectivity
Control theory
Deep learning
Design
Driving ability
explainable deep learning
Neural networks
Perception
Situational awareness
transformer
Vehicles
Title Development and testing of an image transformer for explainable autonomous driving systems
URI https://www.emerald.com/insight/content/doi/10.1108/JICV-06-2022-0021/full/html
https://www.proquest.com/docview/2722647748
https://doaj.org/article/cbd40296506c4ae2b3b12c20bc3603a2
Volume 5