Commit b9cb6da

committed
Fixed little typos in convolutional-networks.md
Fixed little typos in convolutional-networks.md:
- They are made up of => they are made up of
- The whole network still express => The whole network still expresses
- From the raw image pixels => from the raw image pixels
- vastly reduces the amount of parameters => vastly reduce the amount of parameters
1 parent 96f1d89 commit b9cb6da

File tree

1 file changed (+2, −2 lines)


convolutional-networks.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -21,9 +21,9 @@ Table of Contents:
 
 ## Convolutional Neural Networks (CNNs / ConvNets)
 
-Convolutional Neural Networks are very similar to ordinary Neural Networks from the previous chapter: They are made up of neurons that have learnable weights and biases. Each neuron receives some inputs, performs a dot product and optionally follows it with a non-linearity. The whole network still express a single differentiable score function: From the raw image pixels on one end to class scores at the other. And they still have a loss function (e.g. SVM/Softmax) on the last (fully-connected) layer and all the tips/tricks we developed for learning regular Neural Networks still apply.
+Convolutional Neural Networks are very similar to ordinary Neural Networks from the previous chapter: they are made up of neurons that have learnable weights and biases. Each neuron receives some inputs, performs a dot product and optionally follows it with a non-linearity. The whole network still expresses a single differentiable score function: from the raw image pixels on one end to class scores at the other. And they still have a loss function (e.g. SVM/Softmax) on the last (fully-connected) layer and all the tips/tricks we developed for learning regular Neural Networks still apply.
 
-So what does change? ConvNet architectures make the explicit assumption that the inputs are images, which allows us to encode certain properties into the architecture. These then make the forward function more efficient to implement and vastly reduces the amount of parameters in the network.
+So what does change? ConvNet architectures make the explicit assumption that the inputs are images, which allows us to encode certain properties into the architecture. These then make the forward function more efficient to implement and vastly reduce the amount of parameters in the network.
 
 <a name='overview'></a>
 ### Architecture Overview
```
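The paragraph touched by this diff describes a neuron as a dot product of inputs and learnable weights, plus a bias, optionally followed by a non-linearity. As a minimal illustrative sketch (the `neuron` function, the ReLU choice, and all numeric values here are made up for illustration, not taken from the edited file):

```python
import numpy as np

def neuron(x, w, b):
    """Dot product of inputs and weights, plus bias, then a ReLU non-linearity."""
    return max(0.0, float(np.dot(w, x) + b))

# Hypothetical toy values: inputs (e.g. raw pixel values), learnable weights, bias.
x = np.array([1.0, -2.0, 0.5])
w = np.array([0.3, 0.1, -0.4])
b = 0.2

print(neuron(x, w, b))  # ≈ 0.1
```

A full network chains many such neurons into layers, which is what makes the whole score function a single differentiable map from pixels to class scores.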
