Abstract:
RESUMEN: To design algorithms for generating adversarial attacks against classification models based on deep neural networks, with and without a specific target, using white-box evaluation techniques, in order to facilitate the evaluation of the robustness of classification models against adversarial attacks.
ABSTRACT: Deep neural networks have been found vulnerable to inputs maliciously constructed by adversaries to force misclassification. These inputs, produced by adversarial attacks, are called adversarial examples and are potentially dangerous. In this work, we have studied the behavior of state-of-the-art adversarial attacks such as L-BFGS, the Fast Gradient Sign Method, the Basic Iterative Method, the Jacobian-based Saliency Map Attack, and Carlini & Wagner against state-of-the-art pre-trained models such as VGGNet, Inception, Xception, and ResNet, which are commonly used in computer vision tasks, using ImageNet as our base dataset. We have used these models and adversarial attacks in a gray-box attack scenario. As part of this work, we have developed algorithms that systematize the generation of adversarial examples,
producing both targeted and untargeted examples. We have also studied metrics such as the L0, L2, and L∞ norms, Mean Square Distance, and Mean Absolute Distance in order to measure the change made by adversarial attacks to the original examples. The efforts of this work are focused on exploiting the vulnerabilities of classification models based on deep neural networks, as well as making it easier to generate adversarial examples so that classification models robust to adversarial samples can be built in the future.
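To make the attack-generation step concrete, the sketch below shows an untargeted Fast Gradient Sign Method perturbation against one of the pre-trained models mentioned above. The choice of Xception, the epsilon value, and the preprocessing assumptions are illustrative only and do not reproduce the thesis's actual implementation.

```python
# Minimal untargeted FGSM sketch (illustrative; not the thesis's implementation).
# Assumes a batched input preprocessed with Xception's preprocess_input, i.e. scaled to [-1, 1].
import tensorflow as tf

model = tf.keras.applications.Xception(weights="imagenet")
loss_fn = tf.keras.losses.CategoricalCrossentropy()

def fgsm_untargeted(image, label_onehot, epsilon=0.01):
    """Return an adversarial version of `image` that increases the model's loss."""
    image = tf.convert_to_tensor(image, dtype=tf.float32)  # shape (1, 299, 299, 3)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = loss_fn(label_onehot, prediction)
    gradient = tape.gradient(loss, image)
    # Step in the direction of the sign of the gradient, staying in the valid input range.
    adversarial = image + epsilon * tf.sign(gradient)
    return tf.clip_by_value(adversarial, -1.0, 1.0)
```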
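The distortion metrics listed in the abstract can be computed directly from the difference between the original and the adversarial example. A minimal sketch follows, assuming both examples are NumPy arrays of the same shape and that the three norms are the L0, L2, and L∞ distances commonly used in the adversarial-example literature.

```python
import numpy as np

def perturbation_metrics(original, adversarial):
    """Distance metrics between an original example and its adversarial counterpart."""
    diff = (adversarial - original).ravel().astype(np.float64)
    return {
        "L0": int(np.count_nonzero(diff)),    # number of components that changed
        "L2": float(np.linalg.norm(diff)),    # Euclidean size of the perturbation
        "Linf": float(np.max(np.abs(diff))),  # largest single change
        "MSE": float(np.mean(diff ** 2)),     # Mean Square Distance
        "MAD": float(np.mean(np.abs(diff))),  # Mean Absolute Distance
    }
```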
Description:
Thesis (Ingeniería en Comunicaciones y Electrónica), Instituto Politécnico Nacional, ESIME, Unidad Zacatenco, 2019, 1 PDF file (80 pages). tesis.ipn.mx