DSpace Repository

Ataques adversario a modelos de clasificación con base en redes neuronales


dc.contributor.author Pacheco Rodriguez, Hugo Sebastian
dc.date.accessioned 2020-03-05T15:13:25Z
dc.date.available 2020-03-05T15:13:25Z
dc.date.created 2019-06-03
dc.date.issued 2020-03-05
dc.identifier.citation Pacheco Rodriguez, Hugo Sebastian. (2019). Ataques adversario a modelos de clasificación con base en redes neuronales (Ingeniería en Comunicaciones y Electrónica). Instituto Politécnico Nacional, Escuela Superior de Ingeniería Mecánica y Eléctrica, Unidad Zacatenco, México. es
dc.identifier.uri http://tesis.ipn.mx/handle/123456789/28091
dc.description Thesis (Ingeniería en Comunicaciones y Electrónica), Instituto Politécnico Nacional, ESIME, Unidad Zacatenco, 2019, 1 PDF file (80 pages). tesis.ipn.mx es
dc.description.abstract RESUMEN: Design algorithms for generating adversarial attacks against classification models based on deep neural networks, with and without a specific target, using white-box evaluation techniques, with the aim of facilitating the evaluation of the robustness of classification models against adversarial attacks. ABSTRACT: Deep neural networks have been found vulnerable to inputs maliciously constructed by adversaries to force misclassification. These inputs, generated by adversarial attacks, are called adversarial examples and are potentially dangerous. In this work, we studied the behavior of state-of-the-art adversarial attacks such as L-BFGS, the Fast Gradient Sign Method, the Basic Iterative Method, the Jacobian Saliency Map attack, and Carlini & Wagner against state-of-the-art pre-trained models such as VGGNet, Inception, Xception, and ResNet, which are commonly used in computer vision tasks, using ImageNet as our base dataset. We used these models and adversarial attacks in a gray-box attack scenario. As part of this work, we developed algorithms that systematize the process of generating targeted and untargeted adversarial examples. We also studied metrics such as L0, L2, L∞, Mean Square Distance, and Mean Absolute Distance in order to measure the change that adversarial attacks make to the original examples. The efforts of this work are focused on exploiting the vulnerabilities of classification models based on deep neural networks, as well as on making adversarial examples easier to generate, so that classification models robust to adversarial samples can be built in the future. es
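
Of the attacks named in the abstract, the Fast Gradient Sign Method is the simplest to illustrate: it perturbs each pixel by a fixed step in the direction of the sign of the loss gradient. Below is a minimal, hedged sketch of an untargeted FGSM attack; the model choice (torchvision's resnet50), the epsilon value, and the omission of ImageNet normalization are illustrative assumptions, not the thesis's exact setup.

    # Minimal untargeted FGSM sketch (assumes a recent PyTorch/torchvision install).
    import torch
    import torchvision.models as models

    def fgsm_untargeted(model, x, y, epsilon):
        # Perturb x by epsilon * sign(dLoss/dx) to push the model away from label y.
        x = x.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()  # a targeted variant subtracts instead,
                                             # using the target class as y
        return x_adv.clamp(0, 1).detach()    # keep pixels in the valid [0, 1] range

    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    x = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed ImageNet image
    y = model(x).argmax(dim=1)      # use the model's own prediction as the label
    x_adv = fgsm_untargeted(model, x, y, epsilon=8 / 255)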
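
The distortion metrics listed in the abstract (L0, L2, L∞, Mean Square Distance, Mean Absolute Distance) all reduce to simple functions of the pixel-wise difference between the original and adversarial images. A short NumPy sketch, with the function and key names chosen here purely for illustration:

    import numpy as np

    def distortion_metrics(x, x_adv):
        d = (x_adv - x).ravel()                # pixel-wise perturbation
        return {
            "L0":   int(np.count_nonzero(d)),  # number of pixels changed
            "L2":   float(np.linalg.norm(d)),  # Euclidean size of the perturbation
            "Linf": float(np.abs(d).max()),    # largest single-pixel change
            "MSD":  float(np.mean(d ** 2)),    # Mean Square Distance
            "MAD":  float(np.mean(np.abs(d))), # Mean Absolute Distance
        }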
dc.subject Ataques es
dc.subject modelos de clasificación es
dc.subject redes neuronales es
dc.title Ataques adversario a modelos de clasificación con base en redes neuronales es
dc.contributor.advisor Aguirre Anaya, Eleazar
dc.contributor.advisor Menchaca Méndez, Ricardo

