
Commit 153c643 (1 parent: f8c0c9b)

master: attention formula.

2 files changed: 4 additions, 0 deletions


Chapter-wise code/Code - PyTorch/7. Attention Models/2. Neural Text Summarization/1. Transformer Models/Attention Maths.md

Lines changed: 4 additions & 0 deletions
@@ -40,3 +40,7 @@ often called *attention weights*. The shape of this matrix is `[Lq, Lk]`.<br>
 
 8. Multiplying the alignment weights with the input sequence (values) weights the sequence. A single context vector can then be calculated as the sum of the weighted vectors.<br>
 <img src="../images/19. step - 5.png" width="50%"></img> <br><br>
+
+## Attention Formula
+
+<img src="../images/20. attention formula.png" width="50%"></img> <br><br>
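The new heading points to a formula image that is not rendered in this view. Given the `[Lq, Lk]` attention-weight matrix described in the hunk context, the image presumably shows the standard scaled dot-product attention formula (Vaswani et al., 2017); a LaTeX reconstruction under that assumption:

```latex
% Presumed content of "20. attention formula.png" -- an assumption, since
% the image itself is not visible here. Q: [Lq, d_k] queries,
% K: [Lk, d_k] keys, V: [Lk, d_v] values; softmax(Q K^T / sqrt(d_k))
% is the [Lq, Lk] attention-weight matrix named in the hunk context.
\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
\]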
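Since the repository is PyTorch-based, here is a minimal sketch of the computation in step 8 and the formula above: score queries against keys, normalize with softmax, then weight and sum the values into context vectors. All names and shapes are illustrative, not taken from the commit.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """Sketch of the steps in the document: score, normalize, weight, sum.

    q: [Lq, d_k] queries, k: [Lk, d_k] keys, v: [Lk, d_v] values.
    """
    d_k = q.size(-1)
    # Alignment scores, shape [Lq, Lk], scaled by sqrt(d_k).
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    # Softmax over the key axis yields the attention weights ([Lq, Lk]).
    weights = F.softmax(scores, dim=-1)
    # Step 8: weight the values and sum -> one context vector per query.
    context = weights @ v  # shape [Lq, d_v]
    return context, weights

# Example: 5 queries attending over 7 key/value pairs.
q = torch.randn(5, 64)
k = torch.randn(7, 64)
v = torch.randn(7, 64)
context, weights = scaled_dot_product_attention(q, k, v)
print(context.shape, weights.shape)  # torch.Size([5, 64]) torch.Size([5, 7])
```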
images/20. attention formula.png (binary image, 256 KB)
