Commit c43698b

minor edits
1 parent 9d9d727

File tree: 2 files changed, +255 -197 lines changed

README.md

Lines changed: 2 additions & 2 deletions
@@ -353,6 +353,8 @@ The LSTM model learns by iteratively making predictions given the training data
 
 We use the [Adam optimizer](https://pytorch.org/docs/master/generated/torch.optim.Adam.html), which updates the model's parameters based on the learning rate through its `step()` method. This is how the model learns and fine-tunes its predictions. The learning rate controls how quickly the model converges: a rate that is too large can cause the model to converge too quickly to a suboptimal solution, whereas one that is too small requires more training iterations and may take much longer to reach the optimal solution. We also use the [StepLR scheduler](https://pytorch.org/docs/master/generated/torch.optim.lr_scheduler.StepLR.html) to reduce the learning rate during the training process. You may also try the [ReduceLROnPlateau](https://pytorch.org/docs/master/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html) scheduler, which reduces the learning rate when a cost function has stopped improving for a "`patience`" number of epochs. Choosing the proper learning rate for your project is both art and science, and it remains a heavily researched topic in the machine learning community.
 
+Using *mean squared error* as the loss function to optimize our model, we compute the loss on the training and validation sets to measure how well the model performs on each. After every epoch, a smaller *loss* value indicates that the model is learning, and 0.0 would mean it makes no mistakes. In the console logs, `loss train` gives an idea of how well the model is learning, while `loss test` shows how well the model generalizes to the validation dataset. A well-trained model shows training and validation losses that decrease to the point of stability, with a relatively small gap between the two final values (at this stage, we say the model has "converged"). Generally, the model's loss will be lower on the training dataset than on the validation dataset.
+
 Append the following code block to your **project.py** file and re-run the file to start the model training process.
 
 <details>
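
To make the optimizer/scheduler pairing described in the paragraph above concrete, here is a minimal sketch of the setup; the stand-in model and the hyperparameter values (`lr`, `step_size`, `gamma`, `patience`) are illustrative placeholders, not the tutorial's actual configuration:

```python
import torch
import torch.nn as nn

# Stand-in model; the tutorial builds its own LSTM wrapper.
model = nn.LSTM(input_size=1, hidden_size=32)

# Adam updates the parameters from their gradients each time step() is called.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# StepLR multiplies the learning rate by `gamma` every `step_size` epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.1)

# Alternative: decay only when a monitored metric stalls for `patience` epochs.
# scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=10)

for epoch in range(100):
    optimizer.zero_grad()
    # ... forward pass and loss.backward() would go here ...
    optimizer.step()   # apply the parameter update
    scheduler.step()   # advance the learning-rate schedule
```

Note the design difference: `StepLR` decays on a fixed schedule regardless of progress, while `ReduceLROnPlateau` waits for the monitored loss to stop improving, which is why its `step()` is called with that metric, e.g. `scheduler.step(loss_val)`.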
@@ -408,8 +410,6 @@ for epoch in range(config["training"]["num_epoch"]):
           .format(epoch+1, config["training"]["num_epoch"], loss_train, loss_val, lr_train))
 ```
 
-Using *mean squared error* as the loss function to optimize our model, we calculate the loss on training and validation based on how well the model is doing in these two sets. After every epoch, a smaller *loss* value indicates that the model is learning, and 0.0 means that no mistakes were made. `Loss train` gives an idea of how well the model is learning, while `loss test` shows how well the model generalizes the validation dataset. A well-trained model is identified by a training and validation loss that decreases to the point of stability with relatively small differences between the two final loss values. Generally, the loss of the model will be lower on the training than on the validation dataset.
-
 Console showing the loss and learning rate during training:
 
 ```
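
For reference, the per-epoch loss bookkeeping that the relocated paragraph describes could look like the following sketch; the placeholder model, data tensors, and log format here are assumptions for illustration, not the tutorial's actual training loop:

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()  # mean squared error: 0.0 would mean perfect predictions

# Placeholder model and data standing in for the tutorial's LSTM and dataloaders.
model = nn.Linear(20, 1)
x_train, y_train = torch.randn(80, 20), torch.randn(80, 1)
x_val, y_val = torch.randn(20, 20), torch.randn(20, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(5):
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(x_train), y_train)
    loss.backward()
    optimizer.step()
    loss_train = loss.item()

    model.eval()
    with torch.no_grad():  # no gradients needed to score the validation set
        loss_val = criterion(model(x_val), y_val).item()

    # Both losses plateauing, with loss_train typically a bit below loss_val,
    # is the sign of a converged model.
    print(f"Epoch {epoch+1}: loss train {loss_train:.6f} | loss test {loss_val:.6f}")
```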

demo-predicting-stock-prices.ipynb

Lines changed: 253 additions & 195 deletions
Large diffs are not rendered by default.
