Commit 080693c

DOC: lets start -> let's start
1 parent ed0031d commit 080693c

1 file changed (+2 −2 lines)


content/tutorial-nlp-from-scratch.md

Lines changed: 2 additions & 2 deletions
@@ -462,7 +462,7 @@ The problem with an RNN however, is that it cannot retain long-term memory becau
 In the above gif, the rectangles labeled $A$ are called `Cells` and they are the **Memory Blocks** of our LSTM network. They are responsible for choosing what to remember in a sequence and pass on that information to the next cell via two states called the `hidden state` $H_{t}$ and the `cell state` $C_{t}$ where $t$ indicates the time-step. Each `Cell` has dedicated gates which are responsible for storing, writing or reading the information passed to an LSTM. You will now look closely at the architecture of the network by implementing each mechanism happening inside of it.
 
 
-Lets start with writing a function to randomly initialize the parameters which will be learned while our model trains:
+Let's start with writing a function to randomly initialize the parameters which will be learned while our model trains:
 
 ```python
 def initialise_params(hidden_dim, input_dim):
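The hunk above ends at the `initialise_params` signature, as diff context does. For orientation only, here is a minimal NumPy sketch of what such an initializer could look like; the gate names (`W_forget`, `b_forget`, etc.), the seeded `default_rng`, and the 0.01 scale are illustrative assumptions, not the tutorial's actual body:

```python
import numpy as np

def initialise_params(hidden_dim, input_dim):
    # Sketch only: each gate's weight matrix multiplies the concatenated
    # [hidden state; input] vector, hence hidden_dim + input_dim columns.
    rng = np.random.default_rng(seed=2021)
    concat_dim = hidden_dim + input_dim
    parameters = {}
    for gate in ("forget", "input", "candidate", "output"):
        # Small random weights and zero biases per LSTM gate (names are illustrative).
        parameters[f"W_{gate}"] = rng.standard_normal((hidden_dim, concat_dim)) * 0.01
        parameters[f"b_{gate}"] = np.zeros((hidden_dim, 1))
    return parameters
```

Concatenating $H_{t-1}$ with the input before each gate's matrix multiply is one common way to organize LSTM parameters; the tutorial may factor its weights differently.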
@@ -641,7 +641,7 @@ def forward_prop(X_vec, parameters, input_dim):
 After each forward pass through the network, you will implement the `backpropagation through time` algorithm to accumulate gradients of each parameter over the time steps. Backpropagation through a LSTM is not as straightforward as through other common Deep Learning architectures, due to the special way its underlying layers interact. Nonetheless, the approach is largely the same; identifying dependencies and applying the chain rule.
 
 
-Lets start with defining a function to initialize gradients of each parameter as arrays made up of zeros with same dimensions as the corresponding parameter.
+Let's start with defining a function to initialize gradients of each parameter as arrays made up of zeros with same dimensions as the corresponding parameter.
 
 ```python
 # Initialise the gradients
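Here too the context stops at the comment that introduces the gradient initialization. A minimal sketch of the idea the changed line describes, zero arrays with the same dimensions as each parameter, might look like the following; the function name `initialise_grads` and the `d`-prefixed keys are assumptions:

```python
import numpy as np

def initialise_grads(parameters):
    # Sketch only: one zero array per parameter, matching its shape, so each
    # time step's gradient can be accumulated into it during backpropagation
    # through time.
    return {"d" + name: np.zeros_like(value) for name, value in parameters.items()}
```

Zero-initialized accumulators let the backward pass sum every time step's contribution in place, which is exactly what `backpropagation through time` requires.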
