
Commit 29d10ab

master: NMT model evaluation.
1 parent 97d48ce commit 29d10ab

File tree

1 file changed: +2 −2 lines changed


Chapter-wise code/Code - PyTorch/7. Attention Models/1. NMT/Evaluating NMT.md

Lines changed: 2 additions & 2 deletions
@@ -9,7 +9,7 @@
 2. The closer the BLEU score is to one, the better your model is; the closer to zero, the worse it is.

 3. To get a BLEU score, the candidate is usually compared against the references using an average of uni-, bi-, tri-, or even four-gram precision.<br>
-<img src="./images/28. BLEU Score Calculation.png" width="40%"></img><br><br>
+<img src="./images/28. BLEU Score Calculation.png" width="55%"></img><br><br>
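The averaged n-gram precision described in point 3 above can be sketched in plain Python. This is a toy single-sentence BLEU (geometric mean of clipped 1..4-gram precisions times a brevity penalty); the real metric is corpus-level and supports multiple references, and all names here are illustrative:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Toy sentence-level BLEU: geometric mean of clipped n-gram
    precisions for n = 1..max_n, times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum((cand & ref).values())   # clipped counts
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))  # brevity penalty
    return bp * geo_mean

cand = "the cat is on the mat".split()
ref = "the cat is on the mat".split()
print(bleu(cand, ref))  # identical sentences score 1.0
```

An identical candidate and reference give the maximum score of 1.0, while a candidate sharing no unigrams with the reference collapses to 0.0, matching the intuition in point 2.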

### Disadvantages of BLEU

@@ -23,7 +23,7 @@
 1. ROUGE stands for *Recall-Oriented Understudy for Gisting Evaluation*, which is a mouthful, but it lets you know right off the bat that the metric is recall-oriented by default. This means it places more importance on how much of the human-created reference appears in the machine translation.

 2. It was originally developed to evaluate text-summarization models, but it works well for NMT as well.

 3. The ROUGE score calculates the precision and recall between the generated text and the human-created text.
-<img src="./images/29. ROUGE Score Calculation.png" width="40%"></img><br><br>
+<img src="./images/29. ROUGE Score Calculation.png" width="55%"></img><br><br>
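The precision/recall computation in point 3 above can be sketched as a toy ROUGE-N in plain Python (recall is measured against the reference's n-grams, which is the recall-oriented part; function and variable names are illustrative, not the official implementation):

```python
from collections import Counter

def rouge_n(candidate, reference, n=1):
    """Toy ROUGE-N: n-gram overlap precision, recall, and F1.
    Recall = overlap / reference n-grams; precision = overlap / candidate n-grams."""
    def ngram_counts(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngram_counts(candidate), ngram_counts(reference)
    overlap = sum((cand & ref).values())   # clipped n-gram matches
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

scores = rouge_n("the cat sat".split(), "the cat sat on the mat".split())
print(scores)  # recall = 3/6 = 0.5, precision = 3/3 = 1.0
```

Note how the short candidate gets perfect precision but only 0.5 recall: it reproduces part of the reference exactly but misses half of it, which is precisely what a recall-oriented metric penalizes.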

### Disadvantages of ROUGE
