
Commit dd74698

master: Text Summarization.
1 parent 749b49a commit dd74698

File tree

1 file changed, +1 -1 lines changed
  • Chapter-wise code/Code - PyTorch/7. Attention Models/2. Neural Text Summarization/1. Transformer Models


Chapter-wise code/Code - PyTorch/7. Attention Models/2. Neural Text Summarization/1. Transformer Models/Readme.md

Lines changed: 1 addition & 1 deletion
@@ -5,6 +5,6 @@
1. No parallel computations. For a longer sequence of text, a seq2seq model takes more timesteps to complete
the translation and, as we know, with long sequences the information tends to get lost in the network (vanishing gradients).
LSTMs and GRUs can help to overcome the vanishing gradient problem, but even those will fail to process long sequences.<br><br>
- <img src="./images/1. drawbacks of seq2seq.png" width="50%"></img><br>
+ <img src="../images/1. drawbacks of seq2seq.png" width="50%"></img><br>

2.
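
The context lines above describe why recurrent seq2seq models struggle with long text: each timestep depends on the previous hidden state, so computation cannot be parallelized, and the early-timestep signal has to survive every intermediate update. A minimal PyTorch sketch of the contrast with attention (the tensor sizes and the 4-head attention setup are illustrative assumptions, not taken from the repo):

```python
import torch
import torch.nn as nn

# Illustrative sizes only (assumed, not from the repo).
batch, seq_len, d_model = 2, 100, 64
x = torch.randn(batch, seq_len, d_model)

# Recurrent encoder: h_t depends on h_{t-1}, so this loop is
# inherently sequential. A longer input means more steps, and the
# first timestep's information must pass through every later update.
gru_cell = nn.GRUCell(input_size=d_model, hidden_size=d_model)
h = torch.zeros(batch, d_model)
for t in range(seq_len):
    h = gru_cell(x[:, t, :], h)  # one timestep at a time

# Self-attention: every position attends to every other position via
# batched matrix multiplies, so the whole sequence is processed at once.
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)
out, _ = attn(x, x, x)  # (batch, seq_len, d_model) in a single parallel pass
```

The loop makes the recurrent encoder's latency grow with sequence length, while the attention call's work is one big parallel operation; this is the drawback the README's figure illustrates.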
