Machine Learning Approach for Sequential Sampling – k-Nearest Neighbor Classification and Metric Learning

Yung-Kyun Noh, Frank Chongwoo Park, Kee-Eung Kim, and Daniel D. Lee


Whenever a faster decision is required to save time and resources, the decision maker must weigh whether to commit to a decision given the available information or to postpone it and collect more information for higher confidence. Many computational models in the psychology literature, both classic and recent, have attempted to characterize this speed-accuracy trade-off and to explain the decision-making process in humans. However, beyond the insights offered by individual models, we lack a systematic way of understanding them within a single, mathematically unified framework. Moreover, the multiple-choice setting has received little attention in any of these models.

In this work, we show how the k-nearest neighbor classification algorithm in machine learning can be used as a mathematical framework for deriving a variety of novel sequential sampling models. We interpret these nearest neighbor models in the context of the diffusion decision model (DDM), and we compare them to exemplar-based models and to accumulator models such as the race model and the leaky competing accumulator (LCA). Computational experiments show that the new models achieve significantly higher accuracy under equivalent time constraints.
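To make the connection concrete, the following is a minimal sketch (not the models derived in this work) of how k-NN classification can be read as sequential sampling: neighbors of a query are examined one at a time in order of distance, each label nudges an evidence counter up or down as in a random-walk/DDM-style accumulation, and a decision is made once the counter crosses a threshold. The threshold, the neighbor budget `k_max`, and the simple counting rule are illustrative assumptions.

```python
import numpy as np

def sequential_knn_decision(x, X_train, y_train, threshold=3, k_max=50):
    """Sequential-sampling view of binary k-NN classification.

    Neighbors of the query x are consulted one at a time, nearest
    first. Each class-1 neighbor adds +1 evidence, each class-0
    neighbor adds -1. A decision is returned as soon as |evidence|
    reaches `threshold` (early stopping), or after `k_max` neighbors.
    Returns (predicted_label, neighbors_consulted).
    """
    order = np.argsort(np.linalg.norm(X_train - x, axis=1))
    evidence = 0
    for t, idx in enumerate(order[:k_max], start=1):
        evidence += 1 if y_train[idx] == 1 else -1
        if abs(evidence) >= threshold:
            return (1 if evidence > 0 else 0), t
    return (1 if evidence > 0 else 0), k_max
```

For well-separated classes the threshold is typically crossed after only a few neighbors, which is exactly the speed-accuracy trade-off the sequential sampling literature studies: raising the threshold buys accuracy at the cost of more samples.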

Finally, we investigate how metric learning for k-nearest neighbor classification can be used to interpret sequential sampling in human decision making, where different people reach different decisions from the same incoming information. We derive an optimal metric for different ways of receiving information, and we show how the resulting mathematical model explains part of the observed variation in human decisions.
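One way to see why the metric matters, sketched below under illustrative assumptions (the matrices and data here are hypothetical, not the optimal metric derived in this work): a Mahalanobis metric d_M(x, x') = sqrt((x - x')^T M (x - x')) with different positive-definite weight matrices M can select different nearest neighbors, and hence different decisions, for the same query and the same data.

```python
import numpy as np

def mahalanobis_nn(x, X_train, M):
    """Index of the nearest training point to x under the
    Mahalanobis metric induced by positive-definite matrix M."""
    diff = X_train - x                       # (n, d) differences
    d2 = np.einsum('ij,jk,ik->i', diff, M, diff)  # squared distances
    return int(np.argmin(d2))

# Two observers with the same data but different learned metrics:
M_a = np.diag([1.0, 0.01])   # attends mostly to feature dimension 0
M_b = np.diag([0.01, 1.0])   # attends mostly to feature dimension 1
X = np.array([[1.0, 2.0],
              [2.0, 1.0]])
x = np.array([0.0, 0.0])
# Under M_a the nearest neighbor is X[0]; under M_b it is X[1],
# so the same query yields different nearest-neighbor decisions.
```

In this reading, individual variation in decisions corresponds to individual variation in the metric, even when the incoming information is identical.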