Commit 91a562d

describe bonus assignments
1 parent 8ce1895 commit 91a562d

File tree

1 file changed, +13 −9 lines changed


week05_large_models/practice_part1.ipynb

Lines changed: 13 additions & 9 deletions
@@ -2,7 +2,7 @@
  "cells": [
   {
    "cell_type": "code",
-   "execution_count": 8,
+   "execution_count": null,
    "metadata": {
     "id": "0TH9Am-9ztHB"
    },
@@ -42,7 +42,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 3,
+   "execution_count": null,
    "metadata": {
     "colab": {
      "base_uri": "https://localhost:8080/"
@@ -81,7 +81,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 4,
+   "execution_count": null,
    "metadata": {
     "id": "sTuoIY_tNSVk",
     "colab": {
@@ -279,7 +279,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 5,
+   "execution_count": null,
    "metadata": {
     "colab": {
      "base_uri": "https://localhost:8080/"
@@ -305,7 +305,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 6,
+   "execution_count": null,
    "metadata": {
     "colab": {
      "base_uri": "https://localhost:8080/"
@@ -354,7 +354,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 7,
+   "execution_count": null,
    "metadata": {
     "colab": {
      "base_uri": "https://localhost:8080/",
@@ -564,7 +564,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 8,
+   "execution_count": null,
    "metadata": {
     "colab": {
      "base_uri": "https://localhost:8080/"
@@ -660,6 +660,7 @@
     "\n",
     "__Task 1.2:__ generate a short sequence given a prefix. You may choose any generation task that requires generating at least 25 consecutive tokens. Here's one example from the NLP course (the generated code is in blue)\n",
     "\n",
+    "![img](https://i.imgur.com/a1QhKF7.png)\n",
     "\n",
     "You may use model.generate (if your code is compatible with that) or write your own inference loop. If you choose to write your own loop, you are free to use sampling, greedy, top-p, top-k or any other [inference mode supported by HF transformers](https://huggingface.co/docs/transformers/main_classes/text_generation).\n",
     "\n",
@@ -672,7 +673,10 @@
     "- __+1 point__ you can perform forward pass on 128x1024 tokens of actual text data (e.g. the sample data above)\n",
     "- __+1 point__ you can compute gradients with offloading on the same 128x1024 tokens from the real text data\n",
     "- __+1 point__ you can inference the model - and it generates some human-readable text\n",
-    "- __bonus points__ optimize your code so that it would pre-load the next offloaded layer in background\n",
+    "- __bonus points:__ we offer two optional assignments:\n",
+    "  - **Selective activation checkpointing (2pt):** there is a gentler version of gradient checkpointing where you remember not only the layer inputs, but also those activations that are expensive to recompute relative to their size. For instance, MLP linear layers are compute-heavy, but the nonlinearity is relatively compute-light for the same amount of memory, so you can re-compute only the compute-light operations and keep the compute-heavy ones in memory. There's [a paper](https://arxiv.org/pdf/2205.05198) that describes such an approach in detail (see 'Selective activation checkpointing').\n",
+    "  - **Prefetch offloaded layers (2pt):** optimize your code so that it begins pre-loading the next offloaded layer in the background while computing the current layer. This can be done with a copy using non_blocking=True or, for fine-grained control, with CUDA streams. To get the full grade for this assignment, please demonstrate with a profiler that your approach is faster than naive offloading, at least during a large-batch forward/backward pass.\n",
+    "  - Please note that the maximum points for this week are **capped at 14**.\n",
     "\n",
     "__Conditions:__\n",
     "- using more than 10GiB of GPU memory at any point is forbidden (check with [`torch.cuda.max_memory_allocated()`](https://pytorch.org/docs/stable/generated/torch.cuda.max_memory_allocated.html))\n",
@@ -5337,4 +5341,4 @@
  },
  "nbformat": 4,
  "nbformat_minor": 0
-}
+}
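
For the selective activation checkpointing bonus, here is a minimal sketch of the idea as the linked paper applies it to the attention core (all names below are illustrative, not from any library or from the course code): the large `(batch, heads, seq, seq)` score/probability intermediates are freed after the forward pass and recomputed during backward, while the outputs of the compute-heavy Q/K/V projection matmuls stay in memory as usual.

```python
import math
import torch
from torch.utils.checkpoint import checkpoint

def core_attention(q, k, v):
    # Cheap to recompute, but creates large (batch, heads, seq, seq)
    # intermediates - exactly what selective checkpointing targets.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    probs = torch.softmax(scores, dim=-1)
    return probs @ v

# q, k, v are normally the outputs of compute-heavy projection matmuls and
# get saved anyway; everything created *inside* core_attention is freed
# after the forward pass and recomputed on backward.
q, k, v = (torch.randn(2, 8, 1024, 64, device="cuda", requires_grad=True)
           for _ in range(3))
out = checkpoint(core_attention, q, k, v, use_reentrant=False)
out.sum().backward()
```

The key design choice is picking the checkpointed region so that its large intermediates are strictly internal: the region's inputs and outputs get saved regardless, so only tensors created inside it are actually freed. The same reasoning applies to the MLP nonlinearity example in the task.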
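For the prefetching bonus, here is a minimal double-buffered sketch under stated assumptions: `cpu_layers` holds identically-shaped `nn.Module`s whose parameters sit in pinned CPU memory (`non_blocking=True` copies only overlap with compute when the host memory is pinned), `make_layer()` is a hypothetical factory that builds one empty GPU replica, and only the forward pass is shown.

```python
import torch
import torch.nn as nn

copy_stream = torch.cuda.Stream()  # side stream for host->device weight copies
# Two persistent GPU replicas used alternately (double buffering);
# make_layer() is a hypothetical factory producing one empty layer.
buffers = [make_layer().cuda(), make_layer().cuda()]

def load_into(gpu_layer: nn.Module, cpu_layer: nn.Module) -> None:
    """Begin an asynchronous weight copy on the side stream."""
    # Don't overwrite weights that the compute stream may still be using.
    copy_stream.wait_stream(torch.cuda.current_stream())
    with torch.no_grad(), torch.cuda.stream(copy_stream):
        for p_gpu, p_cpu in zip(gpu_layer.parameters(), cpu_layer.parameters()):
            p_gpu.copy_(p_cpu, non_blocking=True)  # requires pinned CPU memory

def prefetched_forward(cpu_layers, x):
    load_into(buffers[0], cpu_layers[0])
    for i in range(len(cpu_layers)):
        # Compute must not start until this layer's weights have arrived.
        torch.cuda.current_stream().wait_stream(copy_stream)
        if i + 1 < len(cpu_layers):
            # Kick off the next copy; it overlaps with the compute below.
            load_into(buffers[(i + 1) % 2], cpu_layers[i + 1])
        x = buffers[i % 2](x)
    return x
```

To show the speedup over naive offloading, one option (an assumption, not the only valid tool) is `torch.profiler`, which makes the copy/compute overlap visible:

```python
from torch.profiler import profile, ProfilerActivity

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    out = prefetched_forward(cpu_layers, x)
    torch.cuda.synchronize()
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```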
