‘Creative’ Facial Verification with Generative Adversarial Networks

A new paper from Stanford University has proposed a nascent method for fooling facial authentication systems on platforms such as dating apps, by using a Generative Adversarial Network (GAN) to create alternative face images that contain the same essential ID information as a real face.

The method successfully bypassed facial verification processes on dating applications Tinder and Bumble, in one case even passing off a gender-swapped (male) face as authentic to the source (female) identity.

Various generated identities which feature the specific encoding of the paper's author (featured in first image above). Source: https://arxiv.org/pdf/2203.15068.pdf

According to the author, the work represents the first attempt to bypass facial verification with the use of generated images that have been imbued with specific identity traits, but which attempt to represent an alternate or substantially altered identity.

The technique was tested on a custom local face verification system, and then performed well in black box tests against two dating applications that perform facial verification on user-uploaded images.

The new paper is titled Face Verification Bypass, and comes from Sanjana Sarda, a researcher at the Department of Electrical Engineering at Stanford University.

Controlling the Face Space

Though ‘injecting' ID-specific features (i.e. from faces, road signs, etc.) into crafted images is a staple of adversarial attacks, the new study suggests something different: that the research sector's growing ability to control the latent space of GANs will eventually enable architectures that can create consistent alternative identities to that of a user. In effect, identity features could be extracted from web-available images of an unsuspecting user and co-opted into a ‘shadow' crafted identity.

Consistency and navigability have been the primary challenges of the latent space ever since the inception of Generative Adversarial Networks. A GAN that has successfully assimilated a collection of training images into its latent space provides no easy map for ‘pushing' features from one class to another.

While techniques and tools such as Gradient-weighted Class Activation Mapping (Grad-CAM) can help to establish latent directions between the established classes, and enable transformations (see image below), the further challenge of entanglement usually makes for an ‘approximative' journey, with limited fine control of the transition.
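
Such a latent ‘journey' is often approximated as a linear edit: a direction vector, typically the normal of a hyperplane fitted (for example, by an SVM) to separate two classes of latent codes, is added to a code in increments. The sketch below illustrates that general pattern with random placeholder vectors; generator and gender_direction are hypothetical names, not drawn from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(512)                 # a latent code sampled from the GAN's prior
gender_direction = rng.standard_normal(512)  # placeholder for a learned hyperplane normal
gender_direction /= np.linalg.norm(gender_direction)

# Step the latent code across the hyperplane; because of entanglement,
# attributes other than the target one will usually drift along the way.
for alpha in np.linspace(0.0, 3.0, 7):
    z_edited = z + alpha * gender_direction
    # image = generator(z_edited)  # decode each step to inspect the transition
```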

A rough journey between encoded vectors in a GAN's latent space, pushing a data-derived male identity into the ‘female' encodings on the other side of one of many linear hyperplanes in the complex and arcane latent space. Image derived from material at https://www.youtube.com/watch?v=dCKbRCUyop8

The ability to ‘freeze' and protect ID-specific features while moving them into transformative encodings elsewhere in the latent space potentially makes it possible to create a consistent (and even animatable) individual whose identity is read by machine systems as someone else.

Method

The author used two datasets as the basis for experiments: a Human User Dataset consisting of 310 images of her face, spanning a period of four years and varying in lighting, age, and viewing angle, with faces cropped and extracted via Caffe; and the racially balanced 108,501 images of the FairFace dataset, similarly extracted and cropped.

The local facial verification model was derived from a base implementation of FaceNet and DeepFace built on a pre-trained Inception ConvNet, with each image represented by a 128-dimensional embedding vector.

The approach uses face images from a trained subset of FairFace. To pass facial verification, the Frobenius norm of the difference between a candidate image's embedding and that of the target user in the database is calculated; any image under a threshold of 0.7 is considered the same identity, while anything over it fails verification.
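
Since each face is represented by a 128-dimensional vector, the Frobenius norm of the difference reduces to ordinary Euclidean distance. Below is a minimal sketch of this decision rule; the function and variable names are illustrative, not taken from the paper's code:

```python
import numpy as np

def verify(candidate_emb: np.ndarray, target_emb: np.ndarray,
           threshold: float = 0.7) -> bool:
    """Accept the candidate as the target identity if the embedding
    distance falls under the paper's reported 0.7 threshold."""
    # For 1-D embedding vectors, the Frobenius norm of the difference
    # is simply the Euclidean (L2) distance between them.
    distance = np.linalg.norm(candidate_emb - target_emb)
    return distance < threshold

# Illustrative usage with random stand-ins for real 128-D embeddings:
rng = np.random.default_rng(0)
target = rng.standard_normal(128)
print(verify(target + 0.001, target))  # True: near-identical embeddings
```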

A StyleGAN model was fine-tuned on the author's personal dataset, producing a model that would generate recognizable variations of her identity, though none of the generated images were identical to the training data. This was achieved by freezing the first four layers of the discriminator, to avoid overfitting and produce varied output.
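
Freezing early discriminator layers is a known trick (sometimes called ‘freeze-D') for fine-tuning GANs on small datasets. A hedged PyTorch sketch of the idea follows; the discriminator object and its block ordering are assumptions, not the paper's actual code:

```python
import torch

def freeze_first_blocks(discriminator: torch.nn.Module, n_frozen: int = 4) -> None:
    """Freeze the first n_frozen child blocks of a pretrained discriminator
    so that fine-tuning on a small personal dataset cannot overfit them."""
    blocks = list(discriminator.children())
    for block in blocks[:n_frozen]:
        for param in block.parameters():
            param.requires_grad = False  # excluded from gradient updates

# Only the still-trainable parameters would then be passed to the optimiser:
# optimizer = torch.optim.Adam(
#     (p for p in discriminator.parameters() if p.requires_grad), lr=2e-3)
```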

Though diverse images were obtained with the base StyleGAN model, the low resolution and fidelity prompted a second attempt with StarGAN V2, which allows the training of seed images towards a target face.

The StarGAN V2 model was pre-trained over approximately 10 hours on the FairFace validation set, with a batch size of 4 and a validation batch size of 8. In the most successful approach, the author's personal dataset was used as the source, with training data as the reference.
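
StarGAN V2's reference-guided synthesis combines a source image, which supplies the underlying structure, with a style vector encoded from a reference image. The sketch below shows that inference pattern in outline only; generator and style_encoder are stand-ins for the pretrained StarGAN V2 networks, not the official repository's API:

```python
import torch

@torch.no_grad()
def reference_guided(generator, style_encoder, src_img, ref_img, ref_domain):
    """Render the source face in the style of the reference face."""
    style = style_encoder(ref_img, ref_domain)  # domain-specific style vector
    return generator(src_img, style)            # source identity, reference style
```

In the paper's most successful configuration, the source image would be one of the author's photos and the reference a training image.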

Verification Experiments

A facial verification model was constructed based on a subset of 1000 images, with the intention of verifying an arbitrary image from the set. Images that passed verification successfully were subsequently tested against the author's own ID.

On the left, the paper's author, a real photo; middle, an arbitrary image that failed verification; right, an unrelated image from the dataset that passed verification as the author.

The objective of the experiments was to create as wide a gap as possible between the generated face's perceived visual identity and the target's appearance, while retaining the defining traits of the target identity. This was evaluated with the Mahalanobis distance, a metric commonly used in image processing for pattern and template matching.
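
Assuming a set of embeddings for the target identity is available, the metric can be computed with SciPy; the arrays below are random placeholders for real 128-dimensional face embeddings:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(0)
target_embeddings = rng.standard_normal((310, 128))  # placeholder for real data
generated_embedding = rng.standard_normal(128)       # embedding of a generated face

# Mahalanobis distance accounts for the covariance of the target identity's
# embeddings, unlike plain Euclidean distance.
mean = target_embeddings.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(target_embeddings, rowvar=False))
d = mahalanobis(generated_embedding, mean, cov_inv)
print(f"Mahalanobis distance to target identity: {d:.2f}")
```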

For the baseline generative model, the low-resolution results obtained displayed limited diversity, despite passing local facial verification. StarGAN V2 proved more capable of creating diverse images that were able to authenticate.

All images depicted passed local facial verification. Above are the low-resolution StyleGAN baseline generations; below, the higher-resolution and higher-quality StarGAN V2 generations.

The final three images illustrated above used the author's own face dataset as both source and reference, while the preceding images used training data as reference and the author's dataset as source.

The resulting generated images were tested against the facial verification systems of dating apps Bumble and Tinder, with the author's identity as the baseline, and passed verification. A ‘male' generation of the author's face also passed Bumble's verification process, though the lighting had to be adjusted in the generated image before it was accepted. Tinder did not accept the male version.

‘Maled' versions of the author's (female) identity.

Conclusion

These are seminal experiments in identity projection, in the context of GAN latent space manipulation, which remains an extraordinary challenge in image synthesis and deepfake research. Nonetheless, the work opens up the concept of embedding highly specific identity features consistently across diverse faces, and of creating ‘alternate' identities that ‘read' as someone else to machine systems.

First published 30th March 2022.