GA-Nav: Efficient Terrain Segmentation for Robot Navigation in Unstructured Outdoor Environments
We propose GA-Nav, a novel group-wise attention mechanism to identify safe and navigable regions in unstructured environments from RGB images. Our group-wise attention method extracts multi-scale features from each type of terrain independently and classifies terrains based on their navigability levels using coarse-grained semantic segmentation. Our novel loss can be embedded within any backbone network to explicitly focus on the different groups' features, at a low spatial resolution. Our design leads to efficient inference while maintaining a high level of accuracy compared to existing SOTA methods. Our extensive evaluations show that GA-Nav achieves state-of-the-art performance on the RUGD and RELLIS-3D datasets. We interface GA-Nav with a deep reinforcement learning-based navigation algorithm and highlight its benefits in terms of navigation in real-world unstructured terrains. We integrate our GA-Nav-based navigation algorithm with ClearPath Jackal and Husky robots, and observe an improvement in navigation success rate and better trajectory selection. Code, videos, and a full technical report are available at https://gamma.umd.edu/offroad/.
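To make the abstract's core idea concrete, the sketch below illustrates one way a group-wise attention head could work: a separate spatial attention map is learned for each navigability group over shared backbone features, and coarse-grained segmentation logits are predicted at low spatial resolution. This is a minimal, hypothetical PyTorch sketch; the module names, shapes, and fusion step are our own illustrative assumptions, not the authors' published implementation (which is available at https://gamma.umd.edu/offroad/).

```python
# Hypothetical sketch of a group-wise attention segmentation head.
# All names and design details here are illustrative assumptions.
import torch
import torch.nn as nn

class GroupWiseAttention(nn.Module):
    """For each navigability group, learn a spatial attention map over
    backbone features, then classify groups at low spatial resolution."""

    def __init__(self, in_channels: int, num_groups: int):
        super().__init__()
        # One 1x1 conv channel per group produces that group's attention logits.
        self.attn = nn.Conv2d(in_channels, num_groups, kernel_size=1)
        # Shared projection applied to the attention-weighted features.
        self.proj = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.classifier = nn.Conv2d(in_channels, num_groups, kernel_size=1)

    def forward(self, feats):
        # feats: (B, C, H, W) features already fused from multiple scales.
        attn = self.attn(feats).sigmoid()                     # (B, G, H, W) per-group masks
        # Weight the features independently for each group, then average
        # the group-conditioned features back into one map.
        group_feats = feats.unsqueeze(1) * attn.unsqueeze(2)  # (B, G, C, H, W)
        fused = self.proj(group_feats.mean(dim=1))            # (B, C, H, W)
        logits = self.classifier(fused)                       # coarse segmentation logits
        return logits, attn

# Usage: coarse prediction over 6 hypothetical navigability groups.
x = torch.randn(2, 256, 32, 32)
head = GroupWiseAttention(in_channels=256, num_groups=6)
logits, masks = head(x)
print(logits.shape, masks.shape)  # (2, 6, 32, 32) each
```

The per-group masks are what a group-wise supervision signal (such as the paper's loss) could target directly, so that each group's features are attended to explicitly rather than entangled in a single shared attention map.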
Saved in:
Published in | IEEE Robotics and Automation Letters, Vol. 7, No. 3, pp. 1-8 |
---|---|
Main Authors | Guan, Tianrui; Kothandaraman, Divya; Chandra, Rohan; Sathyamoorthy, Adarsh Jagan; Weerakoon, Kasun; Manocha, Dinesh |
Format | Journal Article |
Language | English |
Published | Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.07.2022 |
Subjects | AI-Based Methods; AI-Enabled Robotics; Algorithms; Color imagery; Computer architecture; Computer networks; Datasets; Deep Learning for Visual Perception; Deep Learning Methods; Feature extraction; Image segmentation; Machine learning; Navigation; Robots; Semantics; Spatial resolution; Terrain; Transformers; Vision-Based Navigation |
ISSN | 2377-3766 |
DOI | 10.1109/LRA.2022.3187278 |
Authors | Guan, Tianrui (ORCID 0000-0002-6892-9778, Department of Computer Science); Kothandaraman, Divya (Department of Computer Science); Chandra, Rohan (ORCID 0000-0003-4843-6375, Department of Computer Science); Sathyamoorthy, Adarsh Jagan (ORCID 0000-0002-9678-4748, Department of Electrical and Computer Engineering); Weerakoon, Kasun (Department of Electrical and Computer Engineering); Manocha, Dinesh (ORCID 0000-0001-7047-9801, Department of Computer Science) |
CODEN | IRALC6 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
DatabaseName | IEEE Xplore (IEEE); IEEE All-Society Periodicals Package (ASPP) 1998-Present; CrossRef; Computer and Information Systems Abstracts; Electronics & Communications Abstracts; Technology Research Database; ProQuest Computer Science Collection; Advanced Technologies Database with Aerospace; Computer and Information Systems Abstracts Academic; Computer and Information Systems Abstracts Professional |
Discipline | Engineering |
EISSN | 2377-3766 |
EndPage | 8 |
ExternalDocumentID | 10_1109_LRA_2022_3187278 9810192 |
Genre | orig-research |
GrantInformation | ARO Grant W911NF2110026; Army Cooperative Grant W911NF2120076 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 3 |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0002-6892-9778 0000-0002-9678-4748 0000-0003-4843-6375 0000-0001-7047-9801 0000-0002-6276-4968 |
PQID | 2686306680 |
PQPubID | 4437225 |
PageCount | 8 |
PublicationDate | 2022-07-01 |
PublicationPlace | Piscataway |
PublicationTitle | IEEE Robotics and Automation Letters |
PublicationTitleAbbrev | LRA |
PublicationYear | 2022 |
Publisher | The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 1 |
URI | https://ieeexplore.ieee.org/document/9810192 https://www.proquest.com/docview/2686306680 |
Volume | 7 |