Monireh Moshavash; Mahdi Eftekhari; Kaveh Bahraman
Abstract
With the rapid progress of deep learning and its use in a variety of applications, deep networks have been shown to be vulnerable to adversarial examples. Recent research shows that using self-supervised learning (SSL) in various ways increases network robustness. This paper examines the effect of a particular type of contrastive self-supervised learning (CSSL), called Momentum Contrast (MoCo), on increasing network robustness to adversarial examples. For this purpose, MoCo is employed as a pretext task and a deep network is pre-trained on this task; fine-tuning the pre-trained network then increases its robustness against adversarial attacks. A new attack method is introduced based on MoCo and either Projected Gradient Descent (PGD) or the Fast Gradient Sign Method (FGSM), and it does not require any labeled data. Using this corrupted data and adversarial training, a deep network is pre-trained, and the representation it provides is used to fine-tune downstream tasks, which increases network robustness. For instance, the setup comprising a ResNet-50 backbone, the PGD attack, and MoCo-v1 shows improvements of 2.79%, 2%, and 1.35% over Jigsaw, Rotation, and Selfie, respectively. Further experimental details and the improvements obtained with MoCo are given in the results section and show the superiority of MoCo-based models on the CIFAR-10 and CIFAR-10-C datasets. In addition, the results obtained when validating the robustness of the proposed models against various noises with different corruption strengths confirm the resistance of the proposed methods.
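To make the label-free attack concrete, the sketch below illustrates one plausible way a PGD/FGSM-style perturbation could be driven by a MoCo-style contrastive (InfoNCE) loss instead of class labels. This is a minimal illustration under assumed interfaces: the names `encoder_q`, `encoder_k`, `queue`, `tau`, `eps`, `alpha`, and `steps` are hypothetical and do not reflect the authors' actual implementation.

```python
# Hypothetical sketch: label-free PGD attack on a MoCo-style contrastive loss.
# encoder_q / encoder_k are the query and momentum encoders; queue is a (C, K)
# tensor of negative keys. All names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def contrastive_loss(q, k, queue, tau=0.2):
    # InfoNCE loss: the key from the same image is the positive,
    # entries in the memory queue serve as negatives.
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)   # (N, 1)
    l_neg = torch.einsum("nc,ck->nk", q, queue)            # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)

def pgd_attack_on_moco(x_q, x_k, encoder_q, encoder_k, queue,
                       eps=8/255, alpha=2/255, steps=10):
    # Perturb the query view to maximize the contrastive loss; no class labels used.
    x_adv = (x_q.clone().detach()
             + torch.empty_like(x_q).uniform_(-eps, eps)).clamp(0, 1)
    with torch.no_grad():
        k = encoder_k(x_k)                                  # momentum-encoder keys
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = contrastive_loss(encoder_q(x_adv), k, queue)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()        # signed-gradient (FGSM) step
        x_adv = torch.min(torch.max(x_adv, x_q - eps), x_q + eps).clamp(0, 1)
    return x_adv.detach()
```

Setting `steps=1` with `alpha=eps` would correspond to a single FGSM step, while larger `steps` gives the iterative PGD variant; the adversarial views produced this way can then be fed back into contrastive pre-training.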