The authors propose a subspace learning algorithm for face recognition by directly optimizing recognition performance scores.
This approach is motivated by the following observations: 1) Different face recognition tasks (i.e., face identification and face verification) are evaluated with different performance metrics, which implies that distinct subspaces optimize these scores, respectively; most prior work focused on various discriminative or locality criteria and neglected this distinction. 2) Because the gallery (target) and the probe (query) data are collected in different settings in many real-world applications, there can exist consistent appearance incoherences between the gallery and the probe data for the same subject; knowledge of these incoherences can guide algorithm design and yield performance gains, a fact that prior efforts have not exploited. In this paper, the authors rigorously formulate performance scores for both the face identification and the face verification tasks, provide a theoretical analysis of how the optimal subspaces for the two tasks are related, and derive gradient descent algorithms for optimizing these subspaces. The authors' extensive experiments on several public databases and a real-world face database demonstrate that their algorithm can improve the performance of a given subspace-based face recognition algorithm targeted at a specific face recognition task.
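The abstract does not give the authors' exact score formulations or gradient derivations. As a purely illustrative sketch of the general idea, the snippet below runs gradient ascent on a hypothetical softmax-smoothed surrogate of the top-1 identification rate over a linear subspace W; the surrogate, the temperature `tau`, the function names, and the QR re-orthonormalization step are all assumptions, not the paper's method.

```python
# Hypothetical sketch (not the authors' formulation): learn a linear subspace W
# by gradient ascent on a smoothed face-identification score.
import jax
import jax.numpy as jnp

def smoothed_identification_score(W, gallery, g_labels, probe, p_labels, tau=0.1):
    """Soft surrogate of top-1 identification accuracy in the subspace W.

    For each probe, a softmax over negative distances to all gallery points
    acts as a soft nearest neighbor; the score is the probability mass that
    lands on gallery entries sharing the probe's identity.
    """
    G = gallery @ W                                  # (n_gallery, k) projected gallery
    P = probe @ W                                    # (n_probe, k) projected probes
    # Squared Euclidean distance between every probe and every gallery point.
    d2 = ((P[:, None, :] - G[None, :, :]) ** 2).sum(axis=-1)
    soft_nn = jax.nn.softmax(-d2 / tau, axis=1)      # soft nearest-neighbor weights
    same_id = p_labels[:, None] == g_labels[None, :] # identity-match matrix
    return (soft_nn * same_id).sum(axis=1).mean()    # mean mass on correct identity

def learn_subspace(gallery, g_labels, probe, p_labels, k=32, steps=200, lr=0.5, seed=0):
    """Gradient ascent on the surrogate score, keeping W's columns orthonormal."""
    d = gallery.shape[1]
    W = jax.random.normal(jax.random.PRNGKey(seed), (d, k)) / jnp.sqrt(d)
    grad_fn = jax.grad(smoothed_identification_score)
    for _ in range(steps):
        W = W + lr * grad_fn(W, gallery, g_labels, probe, p_labels)
        W, _ = jnp.linalg.qr(W)  # retract onto orthonormal columns
    return W
```

In this sketch the temperature `tau` controls how sharply the softmax concentrates on the nearest gallery point: as `tau` approaches zero, the surrogate approaches the hard top-1 identification rate, at the cost of increasingly ill-conditioned gradients. A verification-oriented variant would instead optimize a score defined over same/different-identity pairs, reflecting the task distinction the abstract emphasizes.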