
Commit 9a0b764

Merge branch 'master' into NinaM31
2 parents 2a7bc1f + f2aa274 commit 9a0b764

5 files changed: 80 additions, 115 deletions


.github/workflows/manual.yml

Lines changed: 46 additions & 0 deletions
@@ -0,0 +1,46 @@
+# Workflow to ensure whenever a Github PR is submitted,
+# a JIRA ticket gets created automatically.
+name: Manual Workflow
+
+# Controls when the action will run.
+on:
+  # Triggers the workflow on pull request events but only for the master branch
+  pull_request_target:
+    types: [assigned, opened, reopened]
+
+  # Allows you to run this workflow manually from the Actions tab
+  workflow_dispatch:
+
+jobs:
+  test-transition-issue:
+    name: Convert Github Issue to Jira Issue
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@master
+
+      - name: Login
+        uses: atlassian/gajira-login@master
+        env:
+          JIRA_BASE_URL: ${{ secrets.JIRA_BASE_URL }}
+          JIRA_USER_EMAIL: ${{ secrets.JIRA_USER_EMAIL }}
+          JIRA_API_TOKEN: ${{ secrets.JIRA_API_TOKEN }}
+
+      - name: Create NEW JIRA ticket
+        id: create
+        uses: atlassian/gajira-create@master
+        with:
+          project: CONUPDATE
+          issuetype: Task
+          summary: |
+            Github PR - nd101 v7 Deep Learning | Repo: ${{ github.repository }} | PR# ${{github.event.number}}
+          description: |
+            Repo link: https://github.com/${{ github.repository }}
+            PR no. ${{ github.event.pull_request.number }}
+            PR title: ${{ github.event.pull_request.title }}
+            PR description: ${{ github.event.pull_request.description }}
+            In addition, please resolve other issues, if any.
+          fields: '{"components": [{"name":"Github PR"}], "customfield_16449":"https://classroom.udacity.com/nanodegrees/nd101/dashboard/overview", "customfield_16450":"Resolve the PR", "labels": ["github"]}'
+
+      - name: Log created issue
+        run: echo "Issue ${{ steps.create.outputs.issue }} was created"

.gitignore

Lines changed: 1 addition & 0 deletions
@@ -1 +1,2 @@
 .ipynb_checkpoints
+.github/**

README.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
-# Deep Learning (PyTorch)
+# Deep Learning (PyTorch) - ND101 v7
 
-This repository contains material related to Udacity's [Deep Learning Nanodegree program](https://www.udacity.com/course/deep-learning-nanodegree--nd101). It consists of a bunch of tutorial notebooks for various deep learning topics. In most cases, the notebooks lead you through implementing models such as convolutional networks, recurrent networks, and GANs. There are other topics covered such as weight initialization and batch normalization.
+This repository contains material related to Udacity's [Deep Learning v7 Nanodegree program](https://www.udacity.com/course/deep-learning-nanodegree--nd101). It consists of a bunch of tutorial notebooks for various deep learning topics. In most cases, the notebooks lead you through implementing models such as convolutional networks, recurrent networks, and GANs. There are other topics covered such as weight initialization and batch normalization.
 
 There are also notebooks used as projects for the Nanodegree program. In the program itself, the projects are reviewed by real people (Udacity reviewers), but the starting code is available here, as well.
 

intro-to-pytorch/Part 1 - Tensors in PyTorch (Exercises).ipynb

Lines changed: 30 additions & 112 deletions
@@ -60,7 +60,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 71,
+   "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -70,7 +70,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 72,
+   "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -86,14 +86,14 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 73,
+   "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "### Generate some data\n",
     "torch.manual_seed(7) # Set the random seed so things are predictable\n",
     "\n",
-    "# Features are 5 random normal variables\n",
+    "# Features are 3 random normal variables\n",
     "features = torch.randn((1, 5))\n",
     "# True weights for our data, random normal variables again\n",
     "weights = torch.randn_like(features)\n",
@@ -119,22 +119,11 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 74,
+   "execution_count": null,
    "metadata": {},
-   "outputs": [
-    {
-     "output_type": "execute_result",
-     "data": {
-      "text/plain": "tensor([[0.1595]])"
-     },
-     "metadata": {},
-     "execution_count": 74
-    }
-   ],
+   "outputs": [],
    "source": [
-    "## Calculate the output of this network using the weights and bias tensors\n",
-    "output = activation(torch.sum(features*weights) + bias)\n",
-    "output"
+    "## Calculate the output of this network using the weights and bias tensors"
    ]
   },
   {
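For readers following along, here is a minimal sketch of the element-wise solution this hunk blanks out, assuming the sigmoid `activation` helper and the `bias` tensor defined in earlier notebook cells that are not shown in this diff:

    import torch

    def activation(x):
        # Sigmoid helper, as defined earlier in the notebook
        return 1 / (1 + torch.exp(-x))

    torch.manual_seed(7)
    features = torch.randn((1, 5))        # one sample, five features
    weights = torch.randn_like(features)  # same shape as features
    bias = torch.randn((1, 1))            # assumed from the original notebook cell

    # Element-wise multiply, sum, add the bias, then squash with the sigmoid
    output = activation(torch.sum(features * weights) + bias)
    print(output)  # the removed cell output showed tensor([[0.1595]])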
@@ -156,42 +145,28 @@
     "RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033\n",
     "```\n",
     "\n",
-    "As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.\n",
+    "As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.\n",
     "\n",
     "**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.\n",
     "\n",
-    "There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view) and [`torch.transpose(weights,0,1)`](https://pytorch.org/docs/master/generated/torch.transpose.html).\n",
+    "There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).\n",
     "\n",
     "* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.\n",
     "* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.\n",
     "* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.\n",
-    "* `torch.transpose(weights,0,1)` will return transposed weights tensor. This returns transposed version of inpjut tensor along dim 0 and dim 1. This is efficient since we do not specify to actual dimesions of weights.\n",
     "\n",
     "I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.\n",
     "\n",
-    "One more approach is to use `.t()` to transpose vector of weights, in our case from (1,5) to (5,1) shape.\n",
-    "\n",
     "> **Exercise**: Calculate the output of our little network using matrix multiplication."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 75,
+   "execution_count": null,
    "metadata": {},
-   "outputs": [
-    {
-     "output_type": "execute_result",
-     "data": {
-      "text/plain": "tensor([[0.1595]])"
-     },
-     "metadata": {},
-     "execution_count": 75
-    }
-   ],
+   "outputs": [],
    "source": [
-    "## Calculate the output of this network using matrix multiplication\n",
-    "output = activation(torch.matmul(features,torch.transpose(weights,0,1)) + bias)\n",
-    "output"
+    "## Calculate the output of this network using matrix multiplication"
    ]
   },
   {
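The matrix-multiplication exercise is likewise left blank by this commit. A minimal sketch of one way to solve it, using the `.view()` reshape the surviving text recommends (the removed solution used an equivalent `torch.transpose` call); `activation` is again assumed to be the notebook's sigmoid helper:

    import torch

    def activation(x):
        return 1 / (1 + torch.exp(-x))

    torch.manual_seed(7)
    features = torch.randn((1, 5))
    weights = torch.randn_like(features)
    bias = torch.randn((1, 1))  # assumed from the original notebook cell

    # Reshape weights from (1, 5) to (5, 1) so the matmul shapes line up
    output = activation(torch.mm(features, weights.view(5, 1)) + bias)
    # Equivalent: activation(torch.matmul(features, torch.transpose(weights, 0, 1)) + bias)
    print(output)  # matches the element-wise result, tensor([[0.1595]]) in the removed output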
@@ -229,7 +204,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 76,
+   "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -263,23 +238,11 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 77,
+   "execution_count": null,
    "metadata": {},
-   "outputs": [
-    {
-     "output_type": "execute_result",
-     "data": {
-      "text/plain": "tensor([[0.3171]])"
-     },
-     "metadata": {},
-     "execution_count": 77
-    }
-   ],
+   "outputs": [],
    "source": [
-    "## Your solution here\n",
-    "h = activation(torch.matmul(features,W1).add_(B1))\n",
-    "output = activation(torch.matmul(h,W2).add_(B2))\n",
-    "output"
+    "## Your solution here"
    ]
   },
   {
@@ -288,7 +251,7 @@
    "source": [
     "If you did this correctly, you should see the output `tensor([[ 0.3171]])`.\n",
     "\n",
-    "The number of hidden units are a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions."
+    "The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions."
    ]
   },
   {
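Since the multi-layer solution cell is also blanked out, here is a hedged sketch of the stacked forward pass the removed code performed, assuming the 3-2-1 layer sizes and the `W1`, `B1`, `W2`, `B2` tensors that the notebook sets up in cells not shown in this diff:

    import torch

    def activation(x):
        # Sigmoid helper, as defined earlier in the notebook
        return 1 / (1 + torch.exp(-x))

    torch.manual_seed(7)
    features = torch.randn((1, 3))   # one sample, three features
    W1 = torch.randn(3, 2)           # input -> hidden weights
    W2 = torch.randn(2, 1)           # hidden -> output weights
    B1 = torch.randn((1, 2))         # hidden-layer bias
    B2 = torch.randn((1, 1))         # output-layer bias

    h = activation(torch.matmul(features, W1) + B1)  # hidden-layer activations
    output = activation(torch.matmul(h, W2) + B2)    # network output
    print(output)  # the notebook text expects tensor([[0.3171]])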
@@ -302,18 +265,9 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 78,
+   "execution_count": null,
    "metadata": {},
-   "outputs": [
-    {
-     "output_type": "execute_result",
-     "data": {
-      "text/plain": "array([[0.01999898, 0.8199435 , 0.49156905],\n [0.41055049, 0.77689295, 0.34885976],\n [0.18349863, 0.75363566, 0.92894509],\n [0.55251871, 0.60749635, 0.21301188]])"
-     },
-     "metadata": {},
-     "execution_count": 78
-    }
-   ],
+   "outputs": [],
    "source": [
     "import numpy as np\n",
     "np.set_printoptions(precision=8)\n",
@@ -323,18 +277,9 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 79,
+   "execution_count": null,
    "metadata": {},
-   "outputs": [
-    {
-     "output_type": "execute_result",
-     "data": {
-      "text/plain": "tensor([[0.0200, 0.8199, 0.4916],\n [0.4106, 0.7769, 0.3489],\n [0.1835, 0.7536, 0.9289],\n [0.5525, 0.6075, 0.2130]], dtype=torch.float64)"
-     },
-     "metadata": {},
-     "execution_count": 79
-    }
-   ],
+   "outputs": [],
    "source": [
     "torch.set_printoptions(precision=8)\n",
     "b = torch.from_numpy(a)\n",
@@ -343,18 +288,9 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 80,
+   "execution_count": null,
    "metadata": {},
-   "outputs": [
-    {
-     "output_type": "execute_result",
-     "data": {
-      "text/plain": "array([[0.01999898, 0.8199435 , 0.49156905],\n [0.41055049, 0.77689295, 0.34885976],\n [0.18349863, 0.75363566, 0.92894509],\n [0.55251871, 0.60749635, 0.21301188]])"
-     },
-     "metadata": {},
-     "execution_count": 80
-    }
-   ],
+   "outputs": [],
    "source": [
     "b.numpy()"
    ]
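The cells above convert between a NumPy array and a torch tensor with `torch.from_numpy(a)` and `b.numpy()`; together with the in-place multiply in the next hunk, they demonstrate that the array and the tensor share the same memory. A compact, self-contained sketch of that behaviour:

    import numpy as np
    import torch

    a = np.random.rand(4, 3)
    b = torch.from_numpy(a)            # b shares memory with a, no copy is made

    b.mul_(2)                          # in-place multiply on the tensor...
    print(np.allclose(a, b.numpy()))   # ...is visible through the NumPy array: True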
@@ -368,37 +304,19 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 81,
+   "execution_count": null,
    "metadata": {},
-   "outputs": [
-    {
-     "output_type": "execute_result",
-     "data": {
-      "text/plain": "tensor([[0.0400, 1.6399, 0.9831],\n [0.8211, 1.5538, 0.6977],\n [0.3670, 1.5073, 1.8579],\n [1.1050, 1.2150, 0.4260]], dtype=torch.float64)"
-     },
-     "metadata": {},
-     "execution_count": 81
-    }
-   ],
+   "outputs": [],
    "source": [
     "# Multiply PyTorch Tensor by 2, in place\n",
     "b.mul_(2)"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 82,
+   "execution_count": null,
    "metadata": {},
-   "outputs": [
-    {
-     "output_type": "execute_result",
-     "data": {
-      "text/plain": "array([[0.03999795, 1.639887 , 0.9831381 ],\n [0.82110098, 1.55378589, 0.69771953],\n [0.36699725, 1.50727133, 1.85789017],\n [1.10503742, 1.2149927 , 0.42602377]])"
-     },
-     "metadata": {},
-     "execution_count": 82
-    }
-   ],
+   "outputs": [],
    "source": [
     "# Numpy array matches new values from Tensor\n",
     "a"
@@ -407,9 +325,9 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Python 3.7.7 64-bit ('envTorch': conda)",
+   "display_name": "Python 3",
    "language": "python",
-   "name": "python37764bitenvtorchcondaf1e8697b5c364e0493551efccbc5e8bb"
+   "name": "python3"
   },
   "language_info": {
    "codemirror_mode": {
@@ -421,7 +339,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.7-final"
+   "version": "3.6.6"
   }
  },
 "nbformat": 4,

sentiment-analysis-network/Sentiment_Classification_Projects.ipynb

Lines changed: 1 addition & 1 deletion
@@ -321,7 +321,7 @@
     "\n",
     "Ok, the ratios tell us which words are used more often in postive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like \"amazing\" has a value above 4, whereas a very negative word like \"terrible\" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:\n",
     "\n",
-    "* Right now, 1 is considered neutral, but the absolute value of the postive-to-negative rations of very postive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around netural so the absolute value fro neutral of the postive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.\n",
+    "* Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around netural so the absolute value fro neutral of the postive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.\n",
     "* When comparing absolute values it's easier to do that around zero than one. \n",
     "\n",
     "To fix these issues, we'll convert all of our ratios to new values using logarithms.\n",

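For context, a small illustrative sketch of the log transform this cell describes; the ratio values are made up to match the surrounding text ("amazing" above 4, "terrible" around 0.18, 1 as neutral):

    import numpy as np

    # Hypothetical positive-to-negative ratios, per the surrounding text
    pos_neg_ratios = {"amazing": 4.0, "terrible": 0.18, "the": 1.0}

    # Taking the log centers neutral words at 0, so positive and negative
    # sentiment become comparable by absolute value
    log_ratios = {word: np.log(r) for word, r in pos_neg_ratios.items()}
    print(log_ratios)  # amazing ~ +1.39, terrible ~ -1.71, the = 0.0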