# ML_project_lipnet

**Repository Path**: xjr01/ml_project_lipnet

## Basic Information

- **Project Name**: ML_project_lipnet
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-05-31
- **Last Updated**: 2021-06-30

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Adversarial Training on Lipnet + Denoising Block

## Introduction

This is the code for (adversarial) training of Lipnet. We improve the adversarial robustness of Lipnet by adding a denoising block. Our best robust accuracy is $96.22\%$ on MNIST and $38.54\%$ on CIFAR-10, against adversarial perturbations with $l_\infty$-norm up to $\epsilon=0.3$ generated by a PGD attack with 20 iterations (a minimal sketch of this attack is given at the end of this README).

## Dependencies

- PyTorch 1.1.0
- TensorBoard (optional)

## Getting Started with the Code

### Installation

After cloning this repo, first run the following command to install the CUDA extension, which speeds up the training procedure considerably.

```
python setup.py install --user
```

### Reproducing Results in Our Report

This repo provides complete training scripts to reproduce the results on the MNIST and CIFAR-10 datasets. These scripts are in the `command` folder.

To reproduce our best CIFAR-10 results, use the command below:

```
bash command/denoisingadvance_conv++_cifar10_adv_training.sh
```

To reproduce our best MNIST results, use the command below:

```
bash command/denoisingadvance_conv++_mnist_adv_training.sh
```

## More Codes in this Repo

### Adversarial Examples Generator

`get_example_cifar10.py` and `get_example.py` generate adversarial data for a given model and save some of it as pictures. `get_example.py` produces the first $5$ adversarial examples of the dataset. `get_example_cifar10.py` generates adversarial examples from the input pictures in the directory `cifar10_examples`. To run these scripts, you need to pass all the command-line arguments of `main.py` to `get_example*.py`.

These two scripts are not designed for end users: they have special dependencies (such as files like `model.pth` that cannot be uploaded to git) and may not run properly. We include them only because we used them to generate examples for the report.

The files `get_examples*.txt` are generated by `get_example.py`; they store image tensors and cannot be opened directly, but you can use `torch.load` to retrieve the tensors from them (see the loading sketch at the end of this README). The files `example*_adv.jpg` in the directory `cifar10_examples` are the generated adversarial pictures and can be opened directly.

### Visualizing and Understanding CNN

You can find this in the directory `VisualizingCNN/`. It is an implementation of "*Visualizing and Understanding Convolutional Networks*" by Matthew D. Zeiler and Rob Fergus (https://www.cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf) in Keras/TensorFlow. It includes:

1. A CNN (AlexNet) model, pre-trained on ImageNet.
2. A corresponding deconvolution network.

#### Input

An image.

#### Output

The CNN model outputs the predicted classification results. The deconvolution network outputs an image, generated by projecting the feature activations of specific CNN layers back to the input pixel space.

#### More Details

See `./VisualizingCNN/README.md`.
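
## Appendix: PGD Attack Sketch

The robust-accuracy numbers in the introduction are measured against a 20-iteration $l_\infty$ PGD attack with $\epsilon=0.3$. The sketch below is a minimal illustration of such an attack, not the exact code used in this repo: the step size `alpha`, the random start, the $[0, 1]$ pixel range, and the placeholder `model` are all assumptions; the actual settings live in the scripts under `command/`.

```python
# Minimal l_inf PGD sketch (assumed step size, random start, and [0, 1] pixel range).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.02, steps=20):
    """Return adversarial examples within an l_inf ball of radius eps around x."""
    # Random start inside the eps-ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-sign step, then project back into the eps-ball and pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```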
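
## Appendix: Loading the Saved Tensor Files

As noted in the adversarial examples section, the `get_examples*.txt` files store image tensors serialized with PyTorch. The sketch below shows one way to retrieve and view them; the concrete file name and the $(N, C, H, W)$ layout with values in $[0, 1]$ are assumptions.

```python
# Minimal loading sketch; the file name is a placeholder for one of the get_examples*.txt files.
import torch
from torchvision.utils import save_image

images = torch.load('get_examples_adv.txt', map_location='cpu')  # hypothetical file name
print(images.shape)  # assumed to be a batch of images, (N, C, H, W) in [0, 1]
save_image(images[0], 'adv_example_0.png')  # write the first example as a viewable image
```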