Commit 749b49a

master: Text Summarization.
1 parent 23110c8 commit 749b49a

File tree

4 files changed (+10 −0 lines changed)

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
# Transformer Models

## What was wrong with Seq2Seq Models?

1. No parallel computation. For a longer sequence of text, a seq2seq model needs more timesteps to complete the translation, and, as we know, with long sequences the information tends to get lost in the network (vanishing gradient). LSTMs and GRUs can help overcome the vanishing gradient problem, but even they fail to process very long sequences (see the sketch after this list).<br><br>
<img src="./images/1. drawbacks of seq2seq.png" width="50%"></img><br>

2.
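
To make the parallelism gap concrete, here is a minimal PyTorch sketch (not part of this commit; the dimensions, module choices, and variable names are assumptions for illustration). It contrasts a GRU encoder, which must step through tokens one at a time, with a self-attention layer, which processes every position in one batched operation:

```python
import torch
import torch.nn as nn

# Toy dimensions, chosen only for illustration.
batch, seq_len, d_model = 2, 8, 16
x = torch.randn(batch, seq_len, d_model)  # a batch of embedded sequences

# Recurrent encoder: h_t depends on h_{t-1}, so the loop runs one
# timestep at a time and cannot be parallelized across the sequence.
gru_cell = nn.GRUCell(d_model, d_model)
h = torch.zeros(batch, d_model)
for t in range(seq_len):
    h = gru_cell(x[:, t, :], h)

# Self-attention: every position attends to every other position in one
# batched matrix multiply, so the whole sequence is processed in parallel.
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)
out, _ = attn(x, x, x)  # out: (batch, seq_len, d_model)
```

The recurrent loop's runtime grows with `seq_len` and each iteration must wait for the previous hidden state, whereas the attention call is a single GPU-friendly operation regardless of sequence length.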

Chapter-wise code/Code - PyTorch/7. Attention Models/2. Neural Text Summarization/Readme.md

Whitespace-only changes.
241 KB
