Abstract:
A novel method and system for 3D-aided 2D face recognition under large pose and illumination variations are disclosed. The method and system include enrolling a face of a subject into a gallery database using raw 3D data. The method also includes verifying and/or identifying a target face from data produced by a 2D imaging or scanning device. A statistically derived annotated face model is fitted to the raw 3D data using a subdivision-based deformable model framework. The annotated face model is capable of being smoothly deformed into any face, so it acts as a universal facial template. During authentication or identification, only a single 2D image is required. The subject-specific fitted annotated face model from the gallery is used to lift a texture of the face from a 2D probe image, and a bidirectional relighting algorithm is employed to change the illumination of the gallery texture to match that of the probe. The relit texture is then compared to the probe texture using a view-dependent complex wavelet structural similarity index metric.
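The final matching step scores the agreement between the relit gallery texture and the probe texture with a complex wavelet structural similarity (CW-SSIM) index. The sketch below is a minimal, illustrative Python version of that comparison, not the patented algorithm: it assumes NumPy/SciPy are available, substitutes a single complex Gabor filter for a full complex steerable pyramid, and omits the view-dependent weighting described in the abstract; the function names (complex_gabor_responses, cw_ssim_index) are hypothetical.

```python
import numpy as np
from scipy.signal import fftconvolve


def complex_gabor_responses(img, frequency=0.25, sigma=2.0):
    """Complex coefficients from a single Gabor filter (one scale, one orientation).

    A full CW-SSIM would pool over several scales and orientations of a
    complex steerable pyramid; this single filter is a simplified stand-in.
    """
    half = int(3 * sigma)
    ax = np.arange(-half, half + 1)
    xx, yy = np.meshgrid(ax, ax)
    envelope = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    carrier = np.exp(1j * 2.0 * np.pi * frequency * xx)
    return fftconvolve(img.astype(float), envelope * carrier, mode="same")


def cw_ssim_index(texture_a, texture_b, window=8, k=0.01):
    """Average per-window CW-SSIM between two equally sized grayscale textures.

    Each local window contributes
        (2 * |sum(c_a * conj(c_b))| + k) / (sum(|c_a|^2) + sum(|c_b|^2) + k),
    which approaches 1 when coefficient magnitudes and relative phases agree.
    """
    ca = complex_gabor_responses(texture_a)
    cb = complex_gabor_responses(texture_b)
    scores = []
    rows, cols = ca.shape
    for i in range(0, rows - window + 1, window):
        for j in range(0, cols - window + 1, window):
            wa = ca[i:i + window, j:j + window].ravel()
            wb = cb[i:i + window, j:j + window].ravel()
            num = 2.0 * np.abs(np.vdot(wb, wa)) + k
            den = np.sum(np.abs(wa) ** 2) + np.sum(np.abs(wb) ** 2) + k
            scores.append(num / den)
    return float(np.mean(scores))


if __name__ == "__main__":
    # Synthetic stand-ins for the relit gallery texture and the probe texture.
    rng = np.random.default_rng(0)
    relit_gallery = rng.random((64, 64))
    probe = relit_gallery + 0.05 * rng.random((64, 64))
    print("CW-SSIM:", cw_ssim_index(relit_gallery, probe))
```

Because the comparison is driven by local coefficient magnitudes and relative phase rather than raw pixel values, a score of this form is relatively tolerant of small misalignments and residual illumination differences, which is why a CW-SSIM-style metric suits comparing a relit gallery texture against a probe texture.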