This paper proposes a discriminative framework for efficiently aligning facial images.
Although conventional Active Appearance Model (AAM)-based approaches have achieved some success, they suffer from a generalization problem: how to align any image with a generic model. The authors treat iterative image alignment as the process of maximizing the score of a trained two-class classifier that distinguishes correct alignment (positive class) from incorrect alignment (negative class). In the modeling stage, given a set of images with ground-truth landmarks, the authors train a conventional Point Distribution Model (PDM) and a boosting-based classifier, which acts as the appearance model. Given a test image with initial landmark locations, the proposed algorithm iteratively updates the shape parameters of the PDM via gradient ascent so that the classification score of the warped image is maximized. The authors use the term Boosted Appearance Models (BAMs) to refer to the learned shape and appearance models together with their specific alignment method. The framework is applied to the face alignment problem. Extensive experiments show that, compared to the AAM-based approach, the framework substantially improves the robustness, accuracy, and efficiency of face alignment, especially on unseen data. (Publisher abstract provided)
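To make the described update rule concrete, the sketch below illustrates a gradient-ascent alignment loop of the kind the abstract outlines: a PDM maps shape parameters to landmark coordinates, the image is warped to the mean shape, and the parameters are nudged in the direction that raises the classifier score. The helper names (`warp_to_mean_shape`, `classifier_score`) and the finite-difference gradient are illustrative assumptions, not the authors' implementation; the paper itself works with an analytic gradient of the boosted classifier.

```python
import numpy as np

def pdm_shape(p, pdm_mean, pdm_basis):
    """Point Distribution Model: shape = mean + basis @ p (flattened x,y landmark coords)."""
    return pdm_mean + pdm_basis @ p

def align(image, p0, pdm_mean, pdm_basis,
          warp_to_mean_shape, classifier_score,
          step=1e-2, eps=1e-4, max_iters=50):
    """Iteratively update shape parameters p so the classifier score of the
    warped image increases (numerical gradient ascent, for illustration only)."""
    p = p0.copy()
    for _ in range(max_iters):
        # Score of the image warped with the current shape estimate.
        shape = pdm_shape(p, pdm_mean, pdm_basis)
        score = classifier_score(warp_to_mean_shape(image, shape))

        # Finite-difference approximation of d(score)/d(p).
        grad = np.zeros_like(p)
        for i in range(p.size):
            dp = np.zeros_like(p)
            dp[i] = eps
            shape_i = pdm_shape(p + dp, pdm_mean, pdm_basis)
            score_i = classifier_score(warp_to_mean_shape(image, shape_i))
            grad[i] = (score_i - score) / eps

        # Ascend the score surface; stop when the update becomes negligible.
        p_new = p + step * grad
        if np.linalg.norm(p_new - p) < 1e-6:
            break
        p = p_new
    return p
```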