視覚野の計算モデル:教師なし学習手法による視覚情報の表現分離 (Computational Models of the Visual Cortex: Disentangling Visual Information Representations with Unsupervised Learning Methods)
Published in | VISION Vol. 33; no. 2; pp. 63–76 |
---|---|
Main Author | 林, 隆介 (Hayashi, Ryusuke) |
Format | Journal Article |
Language | Japanese |
Published | 日本視覚学会 (The Vision Society of Japan), 20.04.2021 |
ISSN | 0917-1142 (print); 2433-5630 (online) |
DOI | 10.24636/vision.33.2_63 |
Author | 林, 隆介 (Hayashi, Ryusuke) |
---|---|
Organization | 国立研究開発法人 産業技術総合研究所 (National Institute of Advanced Industrial Science and Technology) |
ContentType | Journal Article |
Copyright | 2021 日本視覚学会 (The Vision Society of Japan) |
DOI | 10.24636/vision.33.2_63 |
EISSN | 2433-5630 |
EndPage | 76 |
ISSN | 0917-1142 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | false |
IsScholarly | false |
Issue | 2 |
Language | Japanese |
OpenAccessLink | https://www.jstage.jst.go.jp/article/vision/33/2/33_63/_article/-char/ja |
PageCount | 14 |
PublicationDate | 2021/04/20 |
PublicationTitle | VISION |
PublicationYear | 2021 |
Publisher | 日本視覚学会 (The Vision Society of Japan) |
SourceID | jstage |
SourceType | Publisher |
StartPage | 63 |
Title | 視覚野の計算モデル:教師なし学習手法による視覚情報の表現分離 (Computational Models of the Visual Cortex: Disentangling Visual Information Representations with Unsupervised Learning Methods) |
URI | https://www.jstage.jst.go.jp/article/vision/33/2/33_63/_article/-char/ja |
Volume | 33 |