AIの合理性と人間–AI系の合理性を目指す信頼較正 (Trust calibration aiming at the rationality of AI and of human–AI systems)

Bibliographic Details
Published in: 認知科学, Vol. 29, No. 3, pp. 364–370
Main Author: 山田, 誠二 (Yamada, Seiji)
Format: Journal Article
Language: Japanese
Published: 日本認知科学会, 01.09.2022
Subjects: 人工知能; 人間–AI 協調意思決定; 信頼較正; 合理性
Online Access: Get full text
ISSN: 1341-7924
EISSN: 1881-5995
DOI: 10.11225/cs.2022.034


Author 山田, 誠二
Author_xml – sequence: 1
  fullname: 山田, 誠二
  organization: 国立情報学研究所
ContentType Journal Article
Copyright 2022 日本認知科学会
Copyright_xml – notice: 2022 日本認知科学会
DOI 10.11225/cs.2022.034
DeliveryMethod fulltext_linktorsrc
Discipline Psychology
EISSN 1881-5995
EndPage 370
ExternalDocumentID article_jcss_29_3_29_2022_034_article_char_ja
GroupedDBID AAFWJ
ABIVO
ABJNI
ACGFS
ALMA_UNASSIGNED_HOLDINGS
JSF
KQ8
OK1
RJT
ISSN 1341-7924
IngestDate Wed Sep 03 06:31:02 EDT 2025
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed false
IsScholarly true
Issue 3
Language Japanese
LinkModel OpenURL
OpenAccessLink https://www.jstage.jst.go.jp/article/jcss/29/3/29_2022.034/_article/-char/ja
PageCount 7
ParticipantIDs jstage_primary_article_jcss_29_3_29_2022_034_article_char_ja
PublicationCentury 2000
PublicationDate 20220900
PublicationDateYYYYMMDD 2022-09-01
PublicationDate_xml – month: 09
  year: 2022
  text: 20220900
PublicationDecade 2020
PublicationTitle 認知科学
PublicationTitleAlternate 認知科学
PublicationYear 2022
Publisher 日本認知科学会
Publisher_xml – name: 日本認知科学会
References Okamura, K., & Yamada, S. (2018). Adaptive trust calibration for supervised autonomous vehicles. Proceedings of the Tenth International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '18), 92–97. https://doi.org/10.1145/3239092.3265948
本多 淳也・中村 篤祥 (2016). バンディット問題の理論とアルゴリズム 講談社
Kim, D. J., Ferrin, D. L., & Rao, H. R. (2008). A trust-based consumer decision-making model in electronic commerce: The role of trust, perceived risk, and their antecedents. Decision Support Systems, 44 (2), 544–564. https://doi.org/10.1016/j.dss.2007.07.001
Okamura, K., & Yamada, S. (2020d). Empirical evaluations of framework for adaptive trust calibration in human-AI cooperation. IEEE Access, 1–18. https://doi.org/10.1109/access.2020.3042556
Okamura, K., & Yamada, S. (2020c). Calibrating trust in human-drone cooperative navigation. Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020), 1274–1279. https://doi.org/10.1109/ro-man47096.2020.9223509
寺田 和憲・山田 誠二 (2019). 適応アルゴリズム理解における認知バイアスの実験的検討 人工知能学会論文誌, 34 (4), A-I72_1–9. https://doi.org/10.1527/tjsai.A-I72
Ezer, N., Bruni, S., Cai, Y., Hepenstal, S. J., Miller, C. A., & Schmorrow, D. D. (2019). Trust engineering for human-AI teams. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 63 (1), 322– 326. https://doi.org/10.1177/1071181319631264
Gebru, B., Zeleke, L., Blankson, D., Nabil, M., Nateghi, S., Homaifar, A., & Tunstel, E. (2022). A review on human–machine trust evaluation: Human-centric and machine-centric perspectives. IEEE Transactions on Human-Machine Systems, 1–11. https://doi.org/10.1109/thms.2022.3144956
Nakahashi, R., & Yamada, S. (2021). Balancing performance and human autonomy with implicit guidance agent. Frontiers in Artificial Intelligence, 4, Article 736321. https://doi.org/10.3389/frai.2021.736321
鈴木 宏昭 (2020). 認知バイアス:心に潜むふしぎな働き 森北出版
Koh, P. W., & Liang, P. (2017). Understanding black-box predictions via influence functions. Proceedings of the 34th International Conference on Machine Learning.
Okamura, K., & Yamada, S. (2020b). Calibrating trust in autonomous systems in a dynamic environment. Proceedings of the 42nd Annual Meeting of the Cognitive Science Society (CogSci 2020), 2492–2498.
牧野 貴樹・澁谷 長史・白川 真一 (編著) (2016). これからの強化学習 森北出版
Okamura, K., & Yamada, S. (2020a). Adaptive trust calibration for human-AI collaboration. PLOS ONE, 15 (2), e0229132. https://doi.org/10.1371/journal.pone.0229132
菅原 通代・片平 健太郎 (2019). 強化学習における認知バイアスと固執性:選択行動を決めているのは過去の “選択の結果” か “選択そのもの” か? 基礎心理学研究, 38 (1), 48–55. https://doi.org/10.14947/psychono.38.5
Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. MIT Press. (サットン, R. S.・バルト, A. G. 三上 貞芳・皆川 雅章 (訳) (2000). 強化学習 森北出版)
Auer, P., Cesa-Bianchi, N., & Fischer, P. (2002). Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47 (2–3), 235–256. https://doi.org/10.1023/A:1013689704352
Kim, W., Kim, N., Lyons, J. B., & Nam, C. S. (2020). Factors affecting trust in high-vulnerability human-robot interaction contexts: A structural equation modelling approach. Applied Ergonomics, 85, 103056. https://doi.org/10.1016/j.apergo.2020.103056
Hang, C., Ono, T., & Yamada, S. (2021). Designing nudge agents that promote human altruism. Proceedings of the Thirteenth International Conference on Social Robotics (ICSR), 375–385. https://doi.org/10.1007/978-3-030-90525-5_32
小野 哲雄 (2019). ナッジエージェント:人をウェルビーイングへと導く環境知能システム 第33回人工知能学会全国大会論文集. https://doi.org/10.11517/pjsai.JSAI2020.0_3J1OS9a01
山岸 俊男 (1998). 信頼の構造:こころと社会の進化ゲーム 東京大学出版会
SSID ssj0055613
ssib001106414
ssib002484563
SourceID jstage
SourceType Publisher
StartPage 364
SubjectTerms 人工知能 (artificial intelligence)
人間–AI 協調意思決定 (human–AI collaborative decision-making)
信頼較正 (trust calibration)
合理性 (rationality)
Title AIの合理性と人間–AI系の合理性を目指す信頼較正 (Trust calibration aiming at the rationality of AI and of human–AI systems)
URI https://www.jstage.jst.go.jp/article/jcss/29/3/29_2022.034/_article/-char/ja
Volume 29
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
ispartofPNX 認知科学, 2022/09/01, Vol. 29 (3), pp. 364–370
linkProvider Colorado Alliance of Research Libraries
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=AI%E3%81%AE%E5%90%88%E7%90%86%E6%80%A7%E3%81%A8%E4%BA%BA%E9%96%93%E2%80%93AI%E7%B3%BB%E3%81%AE%E5%90%88%E7%90%86%E6%80%A7%E3%82%92%E7%9B%AE%E6%8C%87%E3%81%99%E4%BF%A1%E9%A0%BC%E8%BC%83%E6%AD%A3&rft.jtitle=%E8%AA%8D%E7%9F%A5%E7%A7%91%E5%AD%A6&rft.au=%E5%B1%B1%E7%94%B0%2C+%E8%AA%A0%E4%BA%8C&rft.date=2022-09-01&rft.pub=%E6%97%A5%E6%9C%AC%E8%AA%8D%E7%9F%A5%E7%A7%91%E5%AD%A6%E4%BC%9A&rft.issn=1341-7924&rft.eissn=1881-5995&rft.volume=29&rft.issue=3&rft.spage=364&rft.epage=370&rft_id=info:doi/10.11225%2Fcs.2022.034&rft.externalDocID=article_jcss_29_3_29_2022_034_article_char_ja
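The openUrl field above is an OpenURL (ANSI/NISO Z39.88-2004) context object: the article's citation metadata percent-encoded into a query string. As a minimal sketch, the record's key-value pairs can be recovered with Python's standard library; the query below is a truncated, illustrative subset of the field above, not the full string.

```python
from urllib.parse import parse_qs

# A representative subset of the record's OpenURL query string
# (journal title, date, ISSN, volume/issue/pages).
openurl_query = (
    "ctx_ver=Z39.88-2004"
    "&rft.jtitle=%E8%AA%8D%E7%9F%A5%E7%A7%91%E5%AD%A6"
    "&rft.date=2022-09-01"
    "&rft.issn=1341-7924"
    "&rft.volume=29&rft.issue=3&rft.spage=364&rft.epage=370"
)

# parse_qs percent-decodes each value (UTF-8 by default) into a dict of
# lists; take the first value for each key.
fields = {k: v[0] for k, v in parse_qs(openurl_query).items()}
print(fields["rft.jtitle"])  # 認知科学
print(fields["rft.spage"], "-", fields["rft.epage"])
```

Keys prefixed `rft.` describe the referent (the cited article itself), which is why they mirror the Title, ISSN, Volume, and page fields elsewhere in this record.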