
Commit 54f1102

neural-networks-case-study.md - loss mistake (#244)
The cross entropy loss should be -log(1/3)=1.1 instead of log(1/3).
1 parent 305dac1 commit 54f1102

File tree

1 file changed: +1 -1 lines changed


neural-networks-case-study.md

Lines changed: 1 addition & 1 deletion
@@ -124,7 +124,7 @@ reg_loss = 0.5*reg*np.sum(W*W)
 loss = data_loss + reg_loss
 ```
 
-In this code, the regularization strength \\(\lambda\\) is stored inside the `reg`. The convenience factor of `0.5` multiplying the regularization will become clear in a second. Evaluating this in the beginning (with random parameters) might give us `loss = 1.1`, which is `np.log(1.0/3)`, since with small initial random weights all probabilities assigned to all classes are about one third. We now want to make the loss as low as possible, with `loss = 0` as the absolute lower bound. But the lower the loss is, the higher are the probabilities assigned to the correct classes for all examples.
+In this code, the regularization strength \\(\lambda\\) is stored inside the `reg`. The convenience factor of `0.5` multiplying the regularization will become clear in a second. Evaluating this in the beginning (with random parameters) might give us `loss = 1.1`, which is `-np.log(1.0/3)`, since with small initial random weights all probabilities assigned to all classes are about one third. We now want to make the loss as low as possible, with `loss = 0` as the absolute lower bound. But the lower the loss is, the higher are the probabilities assigned to the correct classes for all examples.
 
 <a name='grad'></a>
 
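For a quick check of the corrected value, here is a minimal sketch (not part of the commit; it assumes the 3-class Softmax classifier used in the surrounding notes):

```python
import numpy as np

# With small random initial weights the class scores are all close to zero,
# so the Softmax output assigns roughly 1/3 probability to each of the 3 classes.
probs = np.full(3, 1.0 / 3)

# Cross-entropy loss for the correct class under this near-uniform prediction:
# -log(1/3) ~= 1.0986, i.e. the `loss = 1.1` quoted in the corrected line.
# The old text's `np.log(1.0/3)` would give -1.1, which cannot be a cross-entropy loss.
initial_data_loss = -np.log(probs[0])
print(initial_data_loss)  # -> 1.0986...
```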