
Commit 7bdf684

star1327p and rossbar authored
DOC: Correct minor grammar issues (#275)
* DOC: a LSTM -> an LSTM
* DOC: lets start -> let's start
* DOC: lets say -> let's say

---------

Co-authored-by: Ross Barnowski <rossbar@caltech.edu>
1 parent 6d1342b commit 7bdf684

2 files changed, +4 -4 lines changed


content/tutorial-nlp-from-scratch.md

Lines changed: 3 additions & 3 deletions

@@ -49,7 +49,7 @@ This tutorial can be run locally in an isolated environment, such as [Virtualenv

 2. Preprocess the datasets

-3. Build and train a LSTM network from scratch
+3. Build and train an LSTM network from scratch

 4. Perform sentiment analysis on collected speeches

@@ -462,7 +462,7 @@ The problem with an RNN however, is that it cannot retain long-term memory becau
 In the above gif, the rectangles labeled $A$ are called `Cells` and they are the **Memory Blocks** of our LSTM network. They are responsible for choosing what to remember in a sequence and pass on that information to the next cell via two states called the `hidden state` $H_{t}$ and the `cell state` $C_{t}$ where $t$ indicates the time-step. Each `Cell` has dedicated gates which are responsible for storing, writing or reading the information passed to an LSTM. You will now look closely at the architecture of the network by implementing each mechanism happening inside of it.


-Lets start with writing a function to randomly initialize the parameters which will be learned while our model trains:
+Let's start with writing a function to randomly initialize the parameters which will be learned while our model trains:

 ```python
 def initialise_params(hidden_dim, input_dim):

@@ -641,7 +641,7 @@ def forward_prop(X_vec, parameters, input_dim):
 After each forward pass through the network, you will implement the `backpropagation through time` algorithm to accumulate gradients of each parameter over the time steps. Backpropagation through a LSTM is not as straightforward as through other common Deep Learning architectures, due to the special way its underlying layers interact. Nonetheless, the approach is largely the same; identifying dependencies and applying the chain rule.


-Lets start with defining a function to initialize gradients of each parameter as arrays made up of zeros with same dimensions as the corresponding parameter.
+Let's start with defining a function to initialize gradients of each parameter as arrays made up of zeros with same dimensions as the corresponding parameter.

 ```python
 # Initialise the gradients
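
For readers skimming the diff, the two `Lets start` fixes sit next to the tutorial's parameter and gradient initialization steps, of which only the first line of each code block is visible above. A minimal sketch of what those two helpers might look like, assuming one weight matrix and bias per LSTM gate plus a dense output layer (the parameter names and the scaling choice here are illustrative, not taken from the tutorial):

```python
import numpy as np

rng = np.random.default_rng(seed=0)


def initialise_params(hidden_dim, input_dim):
    """Randomly initialize the parameters learned during training."""
    # Each gate sees the concatenated [hidden state, input] vector.
    concat_dim = hidden_dim + input_dim
    std = 1.0 / np.sqrt(concat_dim)  # keep initial activations small

    parameters = {
        # forget, input, candidate-memory and output gates
        "Wf": rng.normal(0.0, std, (hidden_dim, concat_dim)),
        "bf": np.zeros((hidden_dim, 1)),
        "Wi": rng.normal(0.0, std, (hidden_dim, concat_dim)),
        "bi": np.zeros((hidden_dim, 1)),
        "Wcm": rng.normal(0.0, std, (hidden_dim, concat_dim)),
        "bcm": np.zeros((hidden_dim, 1)),
        "Wo": rng.normal(0.0, std, (hidden_dim, concat_dim)),
        "bo": np.zeros((hidden_dim, 1)),
        # dense layer mapping the final hidden state to a sentiment score
        "W2": rng.normal(0.0, std, (1, hidden_dim)),
        "b2": np.zeros((1, 1)),
    }
    return parameters


def initialise_grads(parameters):
    """Zero-filled gradient arrays, one per parameter, with matching shapes."""
    return {name: np.zeros_like(value) for name, value in parameters.items()}
```

Starting from zero arrays of the same shapes is what lets backpropagation through time accumulate the per-time-step gradients for each parameter before the update.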

content/tutorial-static_equilibrium.md

Lines changed: 1 addition & 1 deletion

@@ -190,7 +190,7 @@ The pole does not move, so it must apply a reaction force.
 The pole also does not rotate, so it must also be creating a reaction moment.
 Solve for both the reaction force and moments.

-Lets say a 5N force is applied perpendicularly 2m above the base of the pole.
+Let's say a 5N force is applied perpendicularly 2m above the base of the pole.

 ```{code-cell}
 f = 5 # Force in newtons
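
The static-equilibrium hunk likewise cuts off after the first line of its code cell. A short sketch of the balance it sets up, assuming the 5 N force acts along the x-axis and the pole runs along z (the variable names beyond `f` are illustrative, not the tutorial's):

```python
import numpy as np

f = 5  # Force in newtons, applied along the x-axis
d = 2  # Point of application, in meters above the base of the pole

applied_force = np.array([f, 0, 0])                  # N
lever_arm = np.array([0, 0, d])                      # m, from the base to the load
applied_moment = np.cross(lever_arm, applied_force)  # N·m about the base

# Static equilibrium: the reactions at the base cancel the applied load.
reaction_force = -applied_force    # [-5, 0, 0] N
reaction_moment = -applied_moment  # [0, -10, 0] N·m

print("Reaction force (N):", reaction_force)
print("Reaction moment (N·m):", reaction_moment)
```

The magnitudes follow directly from the prose: a 5 N force with a 2 m lever arm produces a 10 N·m moment, so the base must supply an equal and opposite reaction force and moment.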
