`Chapter-wise code/Code - PyTorch/6. Natural-Language-Processing/6. Machine Translation/NMT-Basic/Readme.md`
Here, we have 3 given parameters:
Now, we have to create an English embedding matrix and a French embedding matrix:<br>

```python
import numpy as np

# lists that will collect the paired embedding vectors
X_l = []
Y_l = []

# loop through all english, french word pairs in the english-french dictionary
for en_word, fr_word in en_fr.items():

    # check that the french word has an embedding and that the english word has an embedding
    if fr_word in french_set and en_word in english_set:

        # get the english embedding
        en_vec = english_vecs[en_word]

        # get the french embedding
        fr_vec = french_vecs[fr_word]

        # add the english embedding to the list
        X_l.append(en_vec)

        # add the french embedding to the list
        Y_l.append(fr_vec)

# stack the vectors of X_l into a matrix X
X = np.vstack(X_l)

# stack the vectors of Y_l into a matrix Y
Y = np.vstack(Y_l)
```
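As a quick sanity check, the loop above can be run on made-up toy dictionaries (the vectors and words below are illustrative stand-ins, not the real `en_fr`, `english_vecs`, and `french_vecs` built earlier): row `i` of `X` and row `i` of `Y` end up being embeddings of a translation pair.

```python
import numpy as np

# toy stand-ins for the dictionaries built earlier in the project
en_fr = {"cat": "chat", "dog": "chien"}
english_vecs = {"cat": np.array([1.0, 0.0]), "dog": np.array([0.0, 1.0])}
french_vecs = {"chat": np.array([0.5, 0.5]), "chien": np.array([0.2, 0.8])}
english_set = set(english_vecs)
french_set = set(french_vecs)

X_l, Y_l = [], []
for en_word, fr_word in en_fr.items():
    # keep only pairs where both words have embeddings
    if fr_word in french_set and en_word in english_set:
        X_l.append(english_vecs[en_word])
        Y_l.append(french_vecs[fr_word])

# stack the paired vectors into aligned matrices
X = np.vstack(X_l)
Y = np.vstack(Y_l)
print(X.shape, Y.shape)  # (2, 2) (2, 2)
```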
## 2. Linear Transformation of Word Embeddings
Given dictionaries of English and French word embeddings, we will create a transformation matrix `R`. In other words, given an English word embedding `e`, we multiply `e` by `R` (i.e., `eR`) to generate a new word embedding `f` that approximates the corresponding French embedding.
Original Frobenius Norm: <img src="./images/original_forbenius_norm.png"></img><br>
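One common way to find `R` is gradient descent on the (row-averaged) squared Frobenius norm of `XR - Y`, using the matrices `X` and `Y` built above. The sketch below is a minimal illustration; the function names, random initialization, and hyperparameters are assumptions, not taken from this README:

```python
import numpy as np

def compute_loss(X, Y, R):
    # squared Frobenius norm of (XR - Y), averaged over the m rows
    m = X.shape[0]
    diff = X @ R - Y
    return np.sum(diff ** 2) / m

def compute_gradient(X, Y, R):
    # gradient of the loss above with respect to R: (2/m) * X^T (XR - Y)
    m = X.shape[0]
    return (2.0 / m) * (X.T @ (X @ R - Y))

def align_embeddings(X, Y, steps=100, learning_rate=0.05):
    # start from a random matrix and take gradient steps to shrink the loss
    rng = np.random.default_rng(0)
    R = rng.standard_normal((X.shape[1], Y.shape[1]))
    for _ in range(steps):
        R -= learning_rate * compute_gradient(X, Y, R)
    return R
```

Once `R` is learned, translating a single word is just `f = e @ R` followed by a nearest-neighbor lookup in the French embedding space.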