"Adversarial Mask: Real-World Universal Adversarial Attack on Face Recognition Models," developed by a research team from Ben-Gurion University of the Negev and Tel Aviv University in Israel, presents a special mask designed to prevent camera-based face recognition systems from identifying the wearer.
This is achieved by printing an adversarial pattern, computed with deep learning, on paper and cloth face masks. In experiments, wearing the mask caused the system to fail to identify wearers with a success rate of roughly 96% or higher.
During the COVID-19 pandemic, wearing face masks became habitual, which initially hindered many face recognition systems deployed in cities and facilities around the world. Over time, however, the technology evolved to accurately identify people wearing medical and other masks.
Physical methods of preventing a face on camera from being recognized have been tried before: wearing adversarial glasses, projecting light onto the face, wearing a cap with an adversarial sticker, applying adversarial makeup, and wearing an adversarial full-face mask. However, these attacks are so conspicuous that they cannot blend naturally into real-world scenarios and are impractical.
As a way of evading face recognition while blending into everyday life, the researchers propose attacking with a mask printed with an adversarial pattern. The pattern is generated through a gradient-based optimization process against a deep learning model and then printed on the outer surface of the mask.
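The idea behind gradient-based optimization of such a pattern can be sketched with a toy model. This is not the authors' implementation: a random linear map stands in for the face-embedding network, a numerical gradient stands in for backprop, and the goal is simply to push the masked face's embedding away from the wearer's gallery embedding until the system would report "unidentified."

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face-embedding network: a random linear map + L2 normalization.
D_IN, D_EMB = 64, 16
W = rng.normal(size=(D_EMB, D_IN))

def embed(x):
    e = W @ x
    return e / np.linalg.norm(e)

face = rng.normal(size=D_IN)        # features of the "clean" face
ref = embed(face)                   # gallery embedding of the wearer's identity
mask_region = np.arange(32, 64)     # lower-face features covered by the mask

def cosine_sim(pattern):
    x = face.copy()
    x[mask_region] = pattern        # "print" the pattern onto the mask region
    return float(embed(x) @ ref)

# Gradient descent on the pattern: minimize cosine similarity between the
# masked-face embedding and the identity embedding.
pattern = rng.normal(size=mask_region.size)
init_sim = cosine_sim(pattern)
eps, lr = 1e-4, 0.3
for _ in range(300):
    grad = np.zeros_like(pattern)
    for i in range(pattern.size):   # numerical gradient, standing in for backprop
        d = np.zeros_like(pattern)
        d[i] = eps
        grad[i] = (cosine_sim(pattern + d) - cosine_sim(pattern - d)) / (2 * eps)
    pattern -= lr * grad            # lower similarity = "unidentified"

print(round(init_sim, 3), round(cosine_sim(pattern), 3))
```

The real attack optimizes over many faces, viewpoints, and printing conditions at once, which is what makes the resulting pattern universal rather than specific to one image.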
The mask works as intended whether printed on paper or cloth, and is effective for any person across multiple viewpoints, angles, and scales. Both men and women are misclassified as unidentified, deceiving the system.
One advantage of this mask is that it looks like an ordinary patterned mask to bystanders, so it does not appear to be a device for deceiving face recognition systems. Unlike previous evasion methods, it can be integrated into everyday life.
In addition, adding randomization to the pattern-generation process yields slightly different patterns each time, which makes countermeasures harder to mount; a custom pattern can be printed for each user.
Furthermore, all masked face images can be preprocessed during the training phase so that the person appears to be wearing a standard mask (e.g., a blue surgical mask).
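Such preprocessing can be sketched as repainting the lower-face region with a uniform surgical-mask color. This is a simplified illustration, not the paper's pipeline: the color value and the bounding-box coordinates below are assumptions, and a real system would warp a textured mask template onto facial landmarks rather than fill a flat rectangle.

```python
import numpy as np

SURGICAL_BLUE = np.array([120, 160, 200], dtype=np.uint8)  # approximate color, illustrative

def standardize_mask(img, top, left, bottom, right):
    """Overwrite the detected lower-face box with a flat surgical-mask color.

    A real pipeline would warp a textured mask template onto facial
    landmarks; a flat rectangle keeps this sketch self-contained.
    """
    out = img.copy()
    out[top:bottom, left:right] = SURGICAL_BLUE
    return out

crop = np.zeros((112, 112, 3), dtype=np.uint8)          # dummy aligned face crop
standardized = standardize_mask(crop, 60, 16, 112, 96)  # assumed lower-face box
```

After this step, every training image shows the same generic mask, so the model never learns to rely on whatever was printed on the original one.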
In the experiment, participants actually wore the masks while the researchers tested whether the system could recognize them. As a result, 29 out of 30 participants (about 96.6%) evaded identification while wearing masks printed with the adversarial pattern.
The research team also proposes three countermeasures against the attack: (1) train the face recognition model on face images that include adversarial patterns; (2) train the model to identify people from the upper part of the face alone; (3) train the model to generate the lower part of the face from the upper part.
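Why countermeasure (2) works can be shown with the same kind of toy embedding model used above (an illustrative sketch, not the paper's models): if the embedding reads only the upper-face rows, then no pattern printed on a mask can change the identity decision at all.

```python
import numpy as np

rng = np.random.default_rng(1)

H, W_PIX = 8, 8
face = rng.normal(size=(H, W_PIX))   # toy aligned face crop; bottom half is maskable

# Countermeasure (2): the embedding only sees the upper-face rows.
W_emb = rng.normal(size=(16, (H // 2) * W_PIX))

def embed_upper(img):
    upper = img[: H // 2].ravel()    # discard the maskable lower face entirely
    e = W_emb @ upper
    return e / np.linalg.norm(e)

masked = face.copy()
masked[H // 2:] = rng.normal(size=(H // 2, W_PIX))  # arbitrary adversarial pattern

print(np.allclose(embed_upper(face), embed_upper(masked)))
```

The trade-off is that discarding the lower face throws away identity information, which is why the other two countermeasures try to keep or reconstruct it instead.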