
Commit 6c63f2a

master: dot-product attention.
1 parent: fcc0f06

File tree

  • Chapter-wise code/Code - PyTorch/7. Attention Models/2. Neural Text Summarization/1. Transformer Models

1 file changed (+2, −2 lines)

Chapter-wise code/Code - PyTorch/7. Attention Models/2. Neural Text Summarization/1. Transformer Models/Dot Product Attention.md

Lines changed: 2 additions & 2 deletions
@@ -5,8 +5,8 @@ Below steps describe in detail as to how a *dot-product attention* works:
 *Imp: Queries: German words.*
 
 1. Let's consider the phrase in English, *"I am happy"*.
-First, the word *I* is embedded, to obtain a vector representation that holds continuous values which is unique for every single word.
-<img src="../images/1.step - 1.png" width="50%"></img><br>
+First, the word *I* is embedded, to obtain a vector representation that holds continuous values which is unique for every single word.<br><br>
+<img src="../images/7.step - 1.png" width="50%"></img><br>
 
 2. By feeding three distinct linear layers, you get three different vectors for queries, keys and values.<br><br>
 <img src="../images/8. step - 2.png" width="50%"></img><br>
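For readers following the two steps described in the patched file, here is a minimal PyTorch sketch of the same idea: an embedding layer turns each word into a continuous vector (step 1), and three distinct linear layers project it into queries, keys, and values for dot-product attention (step 2). All sizes, token ids, and variable names here are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration only.
vocab_size, embed_dim = 10_000, 64

embedding = nn.Embedding(vocab_size, embed_dim)  # step 1: word -> continuous vector
to_q = nn.Linear(embed_dim, embed_dim)           # step 2: three distinct linear layers
to_k = nn.Linear(embed_dim, embed_dim)
to_v = nn.Linear(embed_dim, embed_dim)

tokens = torch.tensor([[1, 2, 3]])               # e.g. token ids for "I am happy"
x = embedding(tokens)                            # (batch, seq_len, embed_dim)
q, k, v = to_q(x), to_k(x), to_v(x)              # queries, keys, values

# Dot-product attention: score queries against keys, softmax, weight the values.
scores = q @ k.transpose(-2, -1) / embed_dim ** 0.5
out = torch.softmax(scores, dim=-1) @ v          # (batch, seq_len, embed_dim)
```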

Comments (0)