Locally Linear Embedding for Face Recognition with Simultaneous Diagonalization

Eun-Sol Kim, Yung-Kyun Noh, Masashi Sugiyama, and Byoung-Tak Zhang

Abstract

Nonlinear embedding is a useful tool for revealing the manifold structure of high-dimensional data, typically representing the conceptual structure of the data in a low-dimensional space. Locally Linear Embedding (LLE) is one of the best-known such algorithms; it finds a low-dimensional embedding that preserves the linear reconstruction of each point from its neighbors. Because LLE's objective function is a reconstruction error computed from nearest neighbors, LLE always embeds each point onto the convex hull of its neighbors. In particular, when a point lies within the subspace spanned by a subset of its neighbors, the remaining neighbors do not contribute to the reconstruction, so in the embedded space the point and that subset become completely separated from the remaining neighbors.
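For reference, the standard LLE formulation consists of two steps: reconstruction weights are computed from nearest neighbors, and the embedding is then chosen to preserve those weights (the notation below is ours, following the usual formulation of Roweis and Saul):

\[
\varepsilon(W) = \sum_i \Big\| x_i - \sum_{j \in \mathcal{N}(i)} W_{ij}\, x_j \Big\|^2
\quad \text{s.t.} \quad \sum_{j} W_{ij} = 1,
\qquad
\Phi(Y) = \sum_i \Big\| y_i - \sum_{j \in \mathcal{N}(i)} W_{ij}\, y_j \Big\|^2,
\]

where \(\mathcal{N}(i)\) denotes the nearest neighbors of \(x_i\) and \(y_i\) are the low-dimensional coordinates. It is this neighbor-based reconstruction constraint that ties each embedded point to the hull of its neighbors.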

We exploit this special property of LLE to obtain nonlinear embeddings that are useful for classification. For data in which different classes span independent subspaces, we show that the classes can be completely separated by LLE, yielding high classification accuracy with simple classifiers. In real experiments, we apply simultaneous diagonalization using labeled data in order to minimize the interaction between class subspaces. When this method is applied to face recognition under varying illumination, the resulting embeddings separate identities well and improve recognition accuracy over conventional discriminant analysis methods.
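As a rough illustration of a simultaneous diagonalization step (a minimal sketch, not the authors' exact procedure), the following diagonalizes two class-wise scatter matrices with a single transform via the generalized symmetric eigenproblem in scipy.linalg.eigh; the matrices S1 and S2 and the small regularizer are hypothetical stand-ins for scatter matrices built from labeled face images.

    import numpy as np
    from scipy.linalg import eigh

    def simultaneous_diagonalization(S1, S2):
        # Solve the generalized eigenproblem S1 v = w S2 v.  The eigenvector
        # matrix T satisfies T.T @ S2 @ T = I and T.T @ S1 @ T = diag(w),
        # i.e. both matrices are diagonalized by the same transform,
        # assuming S2 is symmetric positive definite.
        w, T = eigh(S1, S2)
        return T, w

    # Hypothetical usage: scatter matrices of two classes of vectorized face images.
    rng = np.random.default_rng(0)
    X1 = rng.normal(size=(100, 10))                     # class-1 samples, one per row
    X2 = rng.normal(size=(120, 10))                     # class-2 samples, one per row
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False) + 1e-6 * np.eye(10)   # regularize for positive definiteness
    T, w = simultaneous_diagonalization(S1, S2)
    # T.T @ S2 @ T is (approximately) the identity and T.T @ S1 @ T is diagonal.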