Camera pose estimation framework for array‐structured images
Published in | ETRI Journal, Vol. 44, no. 1, pp. 10-23
Main Authors | Shin, Min-Jung; Park, Woojune; Kim, Jung Hee; Kim, Joonsoo; Yun, Kuk-Jin; Kang, Suk-Ju
Format | Journal Article |
Language | English |
Published | Electronics and Telecommunications Research Institute (ETRI; 한국전자통신연구원), 01.02.2022
Subjects | camera pose estimation; mosaic-based image; omnidirectional image; structure from motion
ISSN | 1225-6463 2233-7326 |
DOI | 10.4218/etrij.2021-0303 |
Abstract | Despite the significant progress in camera pose estimation and structure-from-motion reconstruction from unstructured images, methods that exploit a priori information on camera arrangements have been overlooked. Conventional state-of-the-art methods do not exploit the geometric structure to recover accurate camera poses from a set of patch images in an array for mosaic-based imaging, which creates a wide field-of-view image by stitching together a collection of regular images. We propose a camera pose estimation framework that exploits the array-structured image setting in each incremental reconstruction step. It consists of two-way registration, 3D point outlier elimination, and bundle adjustment with a constraint term that keeps rotation vectors consistent, reducing reprojection errors during optimization. We demonstrate that by using the connected structure of the individual images at different camera pose estimation steps, we can estimate camera poses more accurately on all structured mosaic-based image sets, including omnidirectional scenes.
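The bundle-adjustment step named in the abstract augments the usual reprojection objective with a constraint term that ties the rotation vectors of cameras in the array together. The paper's exact formulation is not reproduced in this record, so the following is only a minimal sketch of the idea: the shared intrinsic matrix `K`, the adjacent-camera pairing, and the weight `lam` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation  # axis-angle <-> rotation matrix


def project(points, rvec, tvec, K):
    """Project world points with an axis-angle rotation, translation, and intrinsics K."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = points @ R.T + tvec          # world -> camera frame
    uv = cam @ K.T                     # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective divide


def residuals(rvecs, tvecs, points, obs, K, lam=1.0):
    """Reprojection residuals plus a rotation-consistency penalty.

    rvecs/tvecs: per-camera axis-angle rotations and translations;
    obs[i]: observed 2D points in camera i; lam: hypothetical penalty weight.
    """
    errs = []
    for i, (rv, tv) in enumerate(zip(rvecs, tvecs)):
        errs.append((project(points, rv, tv, K) - obs[i]).ravel())
    # consistency term: rotation vectors of adjacent array cameras should agree
    for i in range(len(rvecs) - 1):
        errs.append(lam * (rvecs[i + 1] - rvecs[i]))
    return np.concatenate(errs)
```

A nonlinear least-squares solver (e.g. `scipy.optimize.least_squares`) would minimize this stacked residual vector over all camera parameters; the extra term biases the solution toward the rotational consistency implied by the array layout.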
Author | Park, Woojune; Kim, Joonsoo; Kim, Jung Hee; Yun, Kuk-Jin; Shin, Min-Jung; Kang, Suk-Ju
Author details | 1. Shin, Min-Jung (Sogang University); 2. Park, Woojune (Sogang University); 3. Kim, Jung Hee (Sogang University); 4. Kim, Joonsoo (Electronics and Telecommunications Research Institute; ORCID 0000-0002-6470-0773); 5. Yun, Kuk-Jin (Electronics and Telecommunications Research Institute); 6. Kang, Suk-Ju (Sogang University; sjkang@sogang.ac.kr; ORCID 0000-0002-4809-956X)
ContentType | Journal Article |
Copyright | 1225‐6463/$ © 2022 ETRI |
Discipline | Engineering |
EISSN | 2233-7326 |
EndPage | 23 |
Genre | article |
GrantInformation | Institute of Information & Communications Technology Planning & Evaluation (2018-0-00207); Information Technology Research Center (ITRC) (IITP-2021-2018-0-01421); National Research Foundation of Korea (NRF) (2021R1A2C1004208)
ISSN | 1225-6463 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 1 |
Language | English |
License | http://doi.wiley.com/10.1002/tdm_license_1.1 http://onlinelibrary.wiley.com/termsAndConditions#vor |
Notes | Min-Jung Shin, Woojune Park, and Jung Hee Kim should be considered joint first authors. Funding: Institute of Information & Communications Technology Planning & Evaluation, Grant/Award Number 2018-0-00207; Information Technology Research Center (ITRC), Grant/Award Number IITP-2021-2018-0-01421; National Research Foundation of Korea (NRF), Grant/Award Number 2021R1A2C1004208. https://doi.org/10.4218/etrij.2021-0303
ORCID | 0000-0002-4809-956X 0000-0002-6470-0773 |
OpenAccessLink | https://doaj.org/article/012df5da1e894567a9325a72613fcbd9 |
PageCount | 14 |
PublicationDate | February 2022
PublicationTitle | ETRI journal |
PublicationYear | 2022 |
Publisher | Electronics and Telecommunications Research Institute (ETRI) 한국전자통신연구원 |
StartPage | 10 |
SubjectTerms | camera pose estimation; mosaic-based image; omnidirectional image; structure from motion; electronics / information and communications engineering
Title | Camera pose estimation framework for array‐structured images |
URI | https://onlinelibrary.wiley.com/doi/abs/10.4218%2Fetrij.2021-0303 https://doaj.org/article/012df5da1e894567a9325a72613fcbd9 https://www.kci.go.kr/kciportal/ci/sereArticleSearch/ciSereArtiView.kci?sereArticleSearchBean.artiId=ART002828622 |
Volume | 44 |