From ed0031dda6cc87d2cf739960136c511d757bcee9 Mon Sep 17 00:00:00 2001
From: "Christine P. Chai"
Date: Mon, 10 Nov 2025 18:58:50 -0800
Subject: [PATCH 1/3] DOC: a LSTM -> an LSTM

---
 content/tutorial-nlp-from-scratch.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/tutorial-nlp-from-scratch.md b/content/tutorial-nlp-from-scratch.md
index 49a84ed5..74f26a2e 100644
--- a/content/tutorial-nlp-from-scratch.md
+++ b/content/tutorial-nlp-from-scratch.md
@@ -49,7 +49,7 @@ This tutorial can be run locally in an isolated environment, such as [Virtualenv

 2. Preprocess the datasets

-3. Build and train a LSTM network from scratch
+3. Build and train an LSTM network from scratch

 4. Perform sentiment analysis on collected speeches


From 080693c1c7ed9770973651702a81d4c5fc05da28 Mon Sep 17 00:00:00 2001
From: "Christine P. Chai"
Date: Mon, 10 Nov 2025 18:59:33 -0800
Subject: [PATCH 2/3] DOC: lets start -> let's start

---
 content/tutorial-nlp-from-scratch.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/content/tutorial-nlp-from-scratch.md b/content/tutorial-nlp-from-scratch.md
index 74f26a2e..c5809cdb 100644
--- a/content/tutorial-nlp-from-scratch.md
+++ b/content/tutorial-nlp-from-scratch.md
@@ -462,7 +462,7 @@ The problem with an RNN however, is that it cannot retain long-term memory becau

 In the above gif, the rectangles labeled $A$ are called `Cells` and they are the **Memory Blocks** of our LSTM network. They are responsible for choosing what to remember in a sequence and pass on that information to the next cell via two states called the `hidden state` $H_{t}$ and the `cell state` $C_{t}$ where $t$ indicates the time-step. Each `Cell` has dedicated gates which are responsible for storing, writing or reading the information passed to an LSTM. You will now look closely at the architecture of the network by implementing each mechanism happening inside of it.

-Lets start with writing a function to randomly initialize the parameters which will be learned while our model trains:
+Let's start with writing a function to randomly initialize the parameters which will be learned while our model trains:

 ```python
 def initialise_params(hidden_dim, input_dim):
@@ -641,7 +641,7 @@ def forward_prop(X_vec, parameters, input_dim):

 After each forward pass through the network, you will implement the `backpropagation through time` algorithm to accumulate gradients of each parameter over the time steps. Backpropagation through a LSTM is not as straightforward as through other common Deep Learning architectures, due to the special way its underlying layers interact. Nonetheless, the approach is largely the same; identifying dependencies and applying the chain rule.

-Lets start with defining a function to initialize gradients of each parameter as arrays made up of zeros with same dimensions as the corresponding parameter.
+Let's start with defining a function to initialize gradients of each parameter as arrays made up of zeros with same dimensions as the corresponding parameter.

 ```python
 # Initialise the gradients

From d39295c705fb306a5ee930c1adfa85a6c71f8826 Mon Sep 17 00:00:00 2001
From: "Christine P. Chai"
Date: Mon, 10 Nov 2025 19:00:02 -0800
Subject: [PATCH 3/3] DOC: lets say -> let's say

---
 content/tutorial-static_equilibrium.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/tutorial-static_equilibrium.md b/content/tutorial-static_equilibrium.md
index f649e56e..868d777f 100644
--- a/content/tutorial-static_equilibrium.md
+++ b/content/tutorial-static_equilibrium.md
@@ -190,7 +190,7 @@ The pole does not move, so it must apply a reaction force.
 The pole also does not rotate, so it must also be creating a reaction moment.
 Solve for both the reaction force and moments.

-Lets say a 5N force is applied perpendicularly 2m above the base of the pole.
+Let's say a 5N force is applied perpendicularly 2m above the base of the pole.

 ```{code-cell}
 f = 5 # Force in newtons