
Commit 803004a

committed: this was too much tedious work and i blame github
1 parent 97214cb commit 803004a

12 files changed, +199 -80 lines

classification.md

Lines changed: 11 additions & 3 deletions
@@ -16,6 +16,7 @@ This is an introductory lecture designed to introduce people from outside of Com
- [Further Reading](#reading)

<a name='intro'></a>
+
## Image Classification

**Motivation**. In this section we will introduce the Image Classification problem, which is the task of assigning an input image one label from a fixed set of categories. This is one of the core problems in Computer Vision that, despite its simplicity, has a large variety of practical applications. Moreover, as we will see later in the course, many other seemingly distinct Computer Vision tasks (such as object detection, segmentation) can be reduced to image classification.
@@ -58,6 +59,7 @@ A good image classification model must be invariant to the cross product of all
- **Evaluation:** In the end, we evaluate the quality of the classifier by asking it to predict labels for a new set of images that it has never seen before. We will then compare the true labels of these images to the ones predicted by the classifier. Intuitively, we're hoping that a lot of the predictions match up with the true answers (which we call the *ground truth*).

<a name='nn'></a>
+
### Nearest Neighbor Classifier
As our first approach, we will develop what we call a **Nearest Neighbor Classifier**. This classifier has nothing to do with Convolutional Neural Networks and it is very rarely used in practice, but it will allow us to get an idea about the basic approach to an image classification problem.
@@ -70,10 +72,10 @@ As our first approach, we will develop what we call a **Nearest Neighbor Classif
Suppose now that we are given the CIFAR-10 training set of 50,000 images (5,000 images for every one of the labels), and we wish to label the remaining 10,000. The nearest neighbor classifier will take a test image, compare it to every single one of the training images, and predict the label of the closest training image. In the image above and on the right you can see an example result of such a procedure for 10 example test images. Notice that in only about 3 out of 10 examples an image of the same class is retrieved, while in the other 7 examples this is not the case. For example, in the 8th row the nearest training image to the horse head is a red car, presumably due to the strong black background. As a result, this image of a horse would in this case be mislabeled as a car.

-You may have noticed that we left unspecified the details of exactly how we compare two images, which in this case are just two blocks of 32 x 32 x 3. One of the simplest possibilities is to compare the images pixel by pixel and add up all the differences. In other words, given two images and representing them as vectors \\( I\_1, I\_2 \\) , a reasonable choice for comparing them might be the **L1 distance**:
+You may have noticed that we left unspecified the details of exactly how we compare two images, which in this case are just two blocks of 32 x 32 x 3. One of the simplest possibilities is to compare the images pixel by pixel and add up all the differences. In other words, given two images and representing them as vectors \\( I_1, I_2 \\) , a reasonable choice for comparing them might be the **L1 distance**:

$$
-d\_1 (I\_1, I\_2) = \sum\_{p} \left| I^p\_1 - I^p\_2 \right|
+d_1 (I_1, I_2) = \sum_{p} \left| I^p_1 - I^p_2 \right|
$$

Where the sum is taken over all pixels. Here is the procedure visualized:
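For reference, the L1 comparison described in the hunk above takes only a couple of lines of numpy. The arrays below are made up for illustration; this is a sketch, not the code from the notes:

```python
import numpy as np

# Two hypothetical 32 x 32 x 3 images with integer pixel values in [0, 255].
I1 = np.random.randint(0, 256, size=(32, 32, 3))
I2 = np.random.randint(0, 256, size=(32, 32, 3))

# L1 distance: sum of absolute pixelwise differences over all pixels and channels.
d1 = np.sum(np.abs(I1 - I2))
print(d1)
```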
@@ -141,7 +143,7 @@ If you ran this code, you would see that this classifier only achieves **38.6%**
There are many other ways of computing distances between vectors. Another common choice could be to instead use the **L2 distance**, which has the geometric interpretation of computing the euclidean distance between two vectors. The distance takes the form:

$$
-d\_2 (I\_1, I\_2) = \sqrt{\sum\_{p} \left( I^p\_1 - I^p\_2 \right)^2}
+d_2 (I_1, I_2) = \sqrt{\sum_{p} \left( I^p_1 - I^p_2 \right)^2}
$$

In other words we would be computing the pixelwise difference as before, but this time we square all of them, add them up and finally take the square root. In numpy, using the code from above we would need to only replace a single line of code. The line that computes the distances:
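The single distance-computing line that the hunk above refers to sits outside this diff context. A rough sketch of what such a vectorized L2 computation could look like, with made-up array names and shapes, is:

```python
import numpy as np

# Made-up stand-ins: a tiny "training set" of flattened images and one test image.
Xtr = np.random.randn(50, 3072)   # 50 flattened 32 x 32 x 3 training images
x = np.random.randn(3072)         # one flattened test image

# L2 distance from the test image to every training image in one vectorized line.
distances = np.sqrt(np.sum(np.square(Xtr - x), axis=1))

# The square root is monotonic, so it can be dropped when only the ranking matters.
nearest = np.argmin(distances)
print(nearest, distances[nearest])
```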
@@ -155,6 +157,7 @@ Note that I included the `np.sqrt` call above, but in a practical nearest neighb
**L1 vs. L2.** It is interesting to consider differences between the two metrics. In particular, the L2 distance is much more unforgiving than the L1 distance when it comes to differences between two vectors. That is, the L2 distance prefers many medium disagreements to one big one. L1 and L2 distances (or equivalently the L1/L2 norms of the differences between a pair of images) are the most commonly used special cases of a [p-norm](http://planetmath.org/vectorpnorm).

<a name='knn'></a>
+
### k - Nearest Neighbor Classifier

You may have noticed that it is strange to only use the label of the nearest image when we wish to make a prediction. Indeed, it is almost always the case that one can do better by using what's called a **k-Nearest Neighbor Classifier**. The idea is very simple: instead of finding the single closest image in the training set, we will find the top **k** closest images, and have them vote on the label of the test image. In particular, when *k = 1*, we recover the Nearest Neighbor classifier. Intuitively, higher values of **k** have a smoothing effect that makes the classifier more resistant to outliers:
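To make the voting idea concrete, here is a minimal sketch of a single k-NN prediction on made-up data (illustrative only, not the implementation from the notes):

```python
import numpy as np

# Made-up data: 200 flattened training images with labels from 10 classes.
num_train, dim, k = 200, 3072, 5
Xtr = np.random.randn(num_train, dim)
ytr = np.random.randint(0, 10, size=num_train)
x = np.random.randn(dim)   # one flattened test image

# L1 distances to every training image, then the indices of the k closest ones.
distances = np.sum(np.abs(Xtr - x), axis=1)
nearest_k = np.argsort(distances)[:k]

# Majority vote among the k nearest labels; with k = 1 this is the plain NN classifier.
prediction = np.bincount(ytr[nearest_k]).argmax()
print(prediction)
```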
@@ -167,6 +170,7 @@ You may have noticed that it is strange to only use the label of the nearest ima
In practice, you will almost always want to use k-Nearest Neighbor. But what value of *k* should you use? We turn to this problem next.

<a name='val'></a>
+
### Validation sets for Hyperparameter tuning

The k-nearest neighbor classifier requires a setting for *k*. But what number works best? Additionally, we saw that there are many different distance functions we could have used: L1 norm, L2 norm, there are many other choices we didn't even consider (e.g. dot products). These choices are called **hyperparameters** and they come up very often in the design of many Machine Learning algorithms that learn from data. It's often not obvious what values/settings one should choose.
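A minimal sketch of the validation-set sweep this section goes on to describe, on small made-up data (the splits, sizes, and helper function here are illustrative assumptions, not the notes' code):

```python
import numpy as np

def predict_knn(Xtr, ytr, Xte, k):
    """Label each row of Xte by a majority vote over its k nearest rows of Xtr (L1 distance)."""
    preds = np.zeros(len(Xte), dtype=ytr.dtype)
    for i, x in enumerate(Xte):
        nearest_k = np.argsort(np.sum(np.abs(Xtr - x), axis=1))[:k]
        preds[i] = np.bincount(ytr[nearest_k]).argmax()
    return preds

# Small random stand-in data (real CIFAR-10 would be 50,000 x 3072). Hold out the
# last 200 examples as a validation set; the test set is never touched while tuning.
X = np.random.randn(1000, 128)
y = np.random.randint(0, 10, size=1000)
Xtr, ytr = X[:800], y[:800]
Xval, yval = X[800:], y[800:]

# Sweep the hyperparameter k on the validation set and keep the best-performing value.
for k in [1, 3, 5, 10, 20, 50, 100]:
    acc = np.mean(predict_knn(Xtr, ytr, Xval, k) == yval)
    print('k = %d, validation accuracy = %f' % (k, acc))
```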
@@ -225,6 +229,7 @@ In cases where the size of your training data (and therefore also the validation
</div>

<a name='procon'></a>
+
**Pros and Cons of Nearest Neighbor classifier.**

It is worth considering some advantages and drawbacks of the Nearest Neighbor classifier. Clearly, one advantage is that it is very simple to implement and understand. Additionally, the classifier takes no time to train, since all that is required is to store and possibly index the training data. However, we pay that computational cost at test time, since classifying a test example requires a comparison to every single training example. This is backwards, since in practice we often care about the test time efficiency much more than the efficiency at training time. In fact, the deep neural networks we will develop later in this class shift this tradeoff to the other extreme: They are very expensive to train, but once the training is finished it is very cheap to classify a new test example. This mode of operation is much more desirable in practice.
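The train-cheap / test-expensive asymmetry described above is easy to see in code. A compressed sketch of such a classifier (not verbatim from the notes) might be:

```python
import numpy as np

class NearestNeighbor:
    def train(self, X, y):
        # "Training" just memorizes the data, so it is essentially free.
        self.Xtr, self.ytr = X, y

    def predict(self, X):
        # Every prediction compares the test example against all stored training rows,
        # so the cost grows linearly with the size of the training set.
        preds = np.zeros(len(X), dtype=self.ytr.dtype)
        for i, x in enumerate(X):
            preds[i] = self.ytr[np.argmin(np.sum(np.abs(self.Xtr - x), axis=1))]
        return preds

# Usage on made-up data:
nn = NearestNeighbor()
nn.train(np.random.randn(100, 3072), np.random.randint(0, 10, size=100))
print(nn.predict(np.random.randn(3, 3072)))
```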
@@ -248,6 +253,7 @@ Here is one more visualization to convince you that using pixel differences to c
In particular, note that images that are nearby each other are much more a function of the general color distribution of the images, or the type of background rather than their semantic identity. For example, a dog can be seen very near a frog since both happen to be on white background. Ideally we would like images of all of the 10 classes to form their own clusters, so that images of the same class are nearby to each other regardless of irrelevant characteristics and variations (such as the background). However, to get this property we will have to go beyond raw pixels.

<a name='summary'></a>
+
### Summary

In summary:
@@ -263,6 +269,7 @@ In summary:
In next lectures we will embark on addressing these challenges and eventually arrive at solutions that give 90% accuracies, allow us to completely discard the training set once learning is complete, and they will allow us to evaluate a test image in less than a millisecond.

<a name='summaryapply'></a>
+
### Summary: Applying kNN in practice

If you wish to apply kNN in practice (hopefully not on images, or perhaps as only a baseline) proceed as follows:
@@ -275,6 +282,7 @@ If you wish to apply kNN in practice (hopefully not on images, or perhaps as onl
6. Take note of the hyperparameters that gave the best results. There is a question of whether you should use the full training set with the best hyperparameters, since the optimal hyperparameters might change if you were to fold the validation data into your training set (since the size of the data would be larger). In practice it is cleaner to not use the validation data in the final classifier and consider it to be *burned* on estimating the hyperparameters. Evaluate the best model on the test set. Report the test set accuracy and declare the result to be the performance of the kNN classifier on your data.

<a name='reading'></a>
+
#### Further Reading

Here are some (optional) links you may find interesting for further reading:
