
Perceptual learning in humans: An active, top-down-guided process

Published online by Cambridge University Press: 06 December 2023

Heleen A. Slagter*
Affiliation: Department of Cognitive Psychology, Institute for Brain and Behavior Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
h.a.slagter@vu.nl
https://research.vu.nl/en/persons/heleen-slagter

Abstract

Deep neural network (DNN) models of human-like vision are typically built by feeding a blank-slate DNN visual images as training data. However, the literature on human perception and perceptual learning suggests that developing DNNs that truly model human vision requires a shift in approach: perception should be treated not as a largely bottom-up process, but as an active, top-down-guided process.
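The contrast the abstract draws can be made concrete with a toy sketch. The model, numbers, and the "gate" mechanism below are purely illustrative assumptions, not the author's proposal: a tiny linear learner is trained either passively on raw inputs (blank-slate, stimulus-driven updates) or with a hypothetical top-down gate that focuses plasticity on task-relevant features, standing in for attention or expectation guiding learning.

```python
# Toy contrast between passive bottom-up training and top-down-guided learning.
# Everything here (feature layout, gate values, learning rate) is an illustrative
# assumption for exposition, not a model from the commentary itself.
import random

random.seed(0)

def make_sample():
    # Four "pixels": only the first two carry the task-relevant signal.
    label = random.choice([0, 1])
    x = [label + random.gauss(0, 0.3),
         label + random.gauss(0, 0.3),
         random.gauss(0, 1.0),   # task-irrelevant noise dimension
         random.gauss(0, 1.0)]   # task-irrelevant noise dimension
    return x, label

def train(top_down_gate=None, steps=2000, lr=0.05):
    # top_down_gate: per-feature weighting of the error-driven update,
    # a stand-in for top-down signals modulating where plasticity occurs.
    w = [0.0] * 4
    gate = top_down_gate or [1.0] * 4   # no gate = passive, purely bottom-up updates
    for _ in range(steps):
        x, y = make_sample()
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = y - pred
        for i in range(4):
            w[i] += lr * err * x[i] * gate[i]
    return w

passive = train()                                 # blank-slate, stimulus-driven
guided = train(top_down_gate=[1, 1, 0.1, 0.1])    # plasticity focused top-down
```

Under this sketch, both learners pick up the relevant features, but the gated learner's updates on the noise dimensions are damped by the top-down signal, mirroring the idea that what is learned depends on what the system actively attends to, not only on the input statistics.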

Type: Open Peer Commentary
Copyright: © The Author(s), 2023. Published by Cambridge University Press
