---
layout: page
mathjax: true
permalink: /assignments2019/assignment2/
---

In this assignment you will practice writing backpropagation code, and training
Neural Networks and Convolutional Neural Networks. The goals of this assignment
are as follows:

- understand **Neural Networks** and how they are arranged in layered
  architectures
- understand and be able to implement (vectorized) **backpropagation**
- implement various **update rules** used to optimize Neural Networks
- implement **Batch Normalization** and **Layer Normalization** for training deep networks
- implement **Dropout** to regularize networks
- understand the architecture of **Convolutional Neural Networks** and
  get practice with training these models on data
- gain experience with a major deep learning framework, such as **TensorFlow** or **PyTorch**.

## Setup
Get the code as a zip file [here].
You can follow the setup instructions [here](/setup-instructions).

If you've performed the Google Cloud setup already for assignment1, you can skip this step and use the virtual machine you created previously.
(However, if you're using your virtual machine from assignment1, you might need to perform additional installation steps for the 5th notebook, depending on whether you're using PyTorch or TensorFlow. See below for details.)

### Some Notes
**NOTE 1:** This year, the `assignment2` code has been tested to be compatible with python version `3.7` (it may work with other versions of `3.x`, but we won't be officially supporting them). You will need to make sure that the correct version of `python` is used during your virtual environment setup. You can confirm your python version by (1) activating your virtualenv and (2) running `which python`.
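
Equivalently, you can check from inside the interpreter itself; a quick sanity check (assuming your virtualenv is already activated) is:

```python
import sys

# Should report 3.7.x if the virtual environment is set up as expected.
print(sys.version.split()[0])
```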

**NOTE 2:** As noted in the setup instructions, we strongly recommend that you do development on Google Cloud, as we have limited support for local machine configurations.

**NOTE 3:** The submission process this year has **2 steps**, requiring you to (1) run a submission script and (2) download/upload an auto-generated pdf (details below). We suggest **_making a test submission early on_** to make sure you are able to successfully submit your assignment on time (a maximum of 10 successful submissions can be made).

### Q1: Fully-connected Neural Network (20 points)
The IPython notebook `FullyConnectedNets.ipynb` will introduce you to our
modular layer design, and then use those layers to implement fully-connected
networks of arbitrary depth. To optimize these models you will implement several
popular update rules.
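
To give a flavor of the modular design, here is a minimal sketch of a forward/backward pair for an affine (fully-connected) layer together with an SGD-plus-momentum update rule. The function names and cache convention are illustrative only and may differ from the starter code's exact API:

```python
import numpy as np

def affine_forward(x, w, b):
    """Compute out = x.reshape(N, -1) @ w + b and cache the inputs for the backward pass."""
    out = x.reshape(x.shape[0], -1).dot(w) + b
    cache = (x, w, b)
    return out, cache

def affine_backward(dout, cache):
    """Backpropagate the upstream gradient dout through the affine layer."""
    x, w, b = cache
    x_flat = x.reshape(x.shape[0], -1)
    dx = dout.dot(w.T).reshape(x.shape)   # gradient w.r.t. the input
    dw = x_flat.T.dot(dout)               # gradient w.r.t. the weights
    db = dout.sum(axis=0)                 # gradient w.r.t. the bias
    return dx, dw, db

def sgd_momentum(w, dw, config):
    """One SGD-with-momentum step; config holds learning_rate, momentum, and the velocity."""
    v = config.get('velocity', np.zeros_like(w))
    v = config['momentum'] * v - config['learning_rate'] * dw
    config['velocity'] = v
    return w + v, config
```

The key convention in this sketch is that each forward pass returns a `cache` of whatever its matching backward pass needs, which is what lets layers be composed into arbitrarily deep networks.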

### Q2: Batch Normalization (30 points)
In the IPython notebook `BatchNormalization.ipynb` you will implement batch
normalization, and use it to train deep fully-connected networks.
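
As a rough illustration of what the training-time forward pass computes: each feature is normalized with the minibatch statistics, then rescaled and shifted by learned parameters. This sketch omits the running statistics needed at test time and is not the notebook's exact interface:

```python
import numpy as np

def batchnorm_forward_train(x, gamma, beta, eps=1e-5):
    """Training-mode batch norm over an (N, D) input: normalize per feature, then scale and shift."""
    mu = x.mean(axis=0)                    # per-feature mean over the minibatch
    var = x.var(axis=0)                    # per-feature variance over the minibatch
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero-mean, unit-variance activations
    return gamma * x_hat + beta            # learned scale (gamma) and shift (beta)
```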

### Q3: Dropout (10 points)
The IPython notebook `Dropout.ipynb` will help you implement Dropout and explore
its effects on model generalization.
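
One common formulation is "inverted" dropout, which rescales activations at training time so the test-time forward pass needs no change. The sketch below is illustrative; the notebook's exact conventions may differ:

```python
import numpy as np

def dropout_forward_train(x, p):
    """Inverted dropout: keep each unit with probability p and rescale survivors by 1/p."""
    mask = (np.random.rand(*x.shape) < p) / p
    return x * mask, mask

def dropout_backward(dout, mask):
    """Backward pass: gradients flow only through the units that were kept."""
    return dout * mask
```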

### Q4: Convolutional Networks (30 points)
In the IPython Notebook `ConvolutionalNetworks.ipynb` you will implement several new layers that are commonly used in convolutional networks.
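
To preview the core operation, here is a deliberately naive, loop-based sketch of a convolution forward pass with stride 1 and no padding. The notebook's version handles stride and padding and pairs each forward pass with a backward pass, so treat this purely as an illustration:

```python
import numpy as np

def conv_forward_naive(x, w, b):
    """Naive convolution: x is (N, C, H, W), w is (F, C, HH, WW), stride 1, no padding."""
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out, W_out = H - HH + 1, W - WW + 1
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                      # loop over images
        for f in range(F):                  # loop over filters
            for i in range(H_out):          # slide the filter spatially
                for j in range(W_out):
                    window = x[n, :, i:i + HH, j:j + WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    return out
```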

### Q5: PyTorch / TensorFlow on CIFAR-10 (10 points)
For this last part, you will be working in either TensorFlow or PyTorch, two popular and powerful deep learning frameworks. **You only need to complete ONE of these two notebooks.** You do NOT need to do both, and we will _not_ be awarding extra credit to those who do.

Open up either `PyTorch.ipynb` or `TensorFlow.ipynb`. There, you will learn how the framework works, culminating in training a convolutional network of your own design on CIFAR-10 to get the best performance you can.
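
To set expectations, a bare-bones PyTorch training loop on CIFAR-10 might look like the sketch below. The notebook builds this up step by step; the architecture, data pipeline, and hyperparameters here are arbitrary placeholders, not a recommended design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as T

class Flatten(nn.Module):
    """Reshape (N, C, H, W) activations into (N, C*H*W) for a linear layer."""
    def forward(self, x):
        return x.view(x.shape[0], -1)

# Placeholder data pipeline; the notebook sets up its own loaders and splits.
train_set = torchvision.datasets.CIFAR10('./data', train=True, download=True,
                                          transform=T.ToTensor())
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    Flatten(),
    nn.Linear(32 * 16 * 16, 10),
).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

model.train()
for x, y in loader:                          # one pass over the training set
    x, y = x.to(device), y.to(device)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)      # softmax + cross-entropy loss
    loss.backward()                          # backprop through the whole model
    optimizer.step()                         # parameter update
```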

**NOTE 1**: The PyTorch notebook requires PyTorch version 1.0, which comes pre-installed on the Google Cloud instances.

**NOTE 2**: The TensorFlow notebook requires TensorFlow version 2.0. If you want to work on the TensorFlow notebook with your VM from assignment1, please follow the instructions on [Piazza](https://piazza.com/class/js3o5prh5w378a?cid=384) to install TensorFlow.
New virtual machines that are set up following the [instructions](/setup-instructions) will come with the correct version of TensorFlow.
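
If you are unsure which version your machine has, a quick check from a Python shell (run whichever half applies to the notebook you chose) is:

```python
# For the PyTorch notebook: the version should report 1.0.x.
import torch
print(torch.__version__)

# For the TensorFlow notebook: the version should report 2.0.x.
import tensorflow as tf
print(tf.__version__)
```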

### Submitting your work
There are **_two_** steps to submitting your assignment:

**1.** Run the provided `collectSubmission.sh` script in the `assignment2` directory.

You will be prompted for your SUNetID (e.g. `jdoe`) and will need to provide your Stanford password. This script will generate a zip file of your code, submit your source code to Stanford AFS, and generate a pdf `a2.pdf` in a `cs231n-2019-assignment2/` folder in your AFS home directory.

If your submission for this step was successful, you should see a confirmation message of the form

`### Code submitted at [TIME], [N] submission attempts remaining. ###`

**2.** Download the generated `a2.pdf` from AFS, then submit the pdf to [Gradescope](https://gradescope.com/courses/17367).