Perceiving patterns from randomness: Emergence of structures and cognitive biases in time

Yanlong Sun and Hongbin Wang


Recent developments in psychology and neuroscience suggest that the human mind builds structured probabilistic representations of the world and performs near-optimal Bayesian inference. However, it remains unclear how the structured probability space originates in the first place, and how cognitive biases can arise from normative probabilistic models. Here we show that, without a pre-defined hypothesis space, structured representations of the learning environment, as well as "biases", can emerge naturally through efficient neural encoding of sensory inputs over time. We demonstrate this idea with a biologically realistic neural network model that simulates randomness perception. Our findings reveal that rich semantics can develop in associative learning through temporal integration, providing the building blocks for the structured hypothesis space required by Bayesian inference. We further suggest that the statistical structures in the learning environment, together with the competition between implicit representations, are the keys to bridging the gap from object representation to probability induction and from low-level sensory processing to high-level cognition.
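One well-known statistical structure of the kind the abstract refers to is the asymmetry in pattern waiting times: in a sequence of fair coin flips, the pattern HT first appears after 4 flips on average, whereas HH takes 6 flips on average, even though both patterns are equally probable in any two flips. The Monte Carlo sketch below (a hypothetical illustration for the reader, not the authors' model; the function name `mean_waiting_time` and the trial count are our own choices) estimates these waiting times by simulation.

```python
import random

def mean_waiting_time(pattern, n_trials=20000, seed=0):
    """Monte Carlo estimate of the expected number of fair-coin flips
    until `pattern` (e.g. 'HT' or 'HH') first appears."""
    rng = random.Random(seed)
    total_flips = 0
    for _ in range(n_trials):
        seq = ''
        # Flip until the target pattern appears at the end of the sequence.
        while not seq.endswith(pattern):
            seq += rng.choice('HT')
        total_flips += len(seq)
    return total_flips / n_trials

# Theoretical means are 4 for 'HT' and 6 for 'HH'.
print(mean_waiting_time('HT'))
print(mean_waiting_time('HH'))
```

A learner that integrates inputs over time, rather than counting isolated outcomes, is exposed to exactly this kind of asymmetry, which is one way environmental statistics could seed structured representations.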