Commit 0d1df55

ENH: Migrate to sphinx-exercise (#231)
* update a* lectures
* update c*.md lectures
* add :class: dropdown for solutions
* update labels for exercises to be unique
* [career] fix nested directives issue
* update f*.md lectures
* update h*.md lectures
* fix issue in f*.md
* update i*.md lectures
* update j*.md lectures
* fix broken reference to exercise
* update k*.md lectures
* update m*.md lectures
* fix lqcontrol
* update o*.md lectures
* update p*.md lectures
* fix optgrowth_fast
* update r*.md lectures
* fix typo
* update s*.md lectures
* update u*.md lectures
* update w*.md lectures
* Update m*.md and quality fixes
* update finite_markove tags to fm_ instead of mc_
* Revert "update finite_markove tags to fm_ instead of mc_" (reverts commit 235b20c)
* update exercise tag names to fm_ from mc_
* fix issue with markov_asset
* update exercise links
1 parent 05a3580 commit 0d1df55
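
The diffs below repeat one pattern throughout: plain `### Exercise N` headings become sphinx-exercise directives with unique labels, and solutions are wrapped in collapsible dropdown blocks. A minimal sketch of the target markup (label name illustrative, not from the commit):

````md
```{exercise}
:label: lecture_ex1

Exercise statement goes here.
```

```{solution-start} lecture_ex1
:class: dropdown
```

Worked solution, including any `{code-cell}` blocks.

```{solution-end}
```
````

Short solutions that contain no nested directives can instead use the single `{solution}` directive, as the cake_eating_problem.md diff below does.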


41 files changed: +883 −224 lines

lectures/_config.yml

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ latex:
     targetname: quantecon-python.tex
 
 sphinx:
-  extra_extensions: [sphinx_multitoc_numbering, sphinxext.rediraffe, sphinx_tojupyter, sphinxcontrib.youtube, sphinx.ext.todo]
+  extra_extensions: [sphinx_multitoc_numbering, sphinxext.rediraffe, sphinx_tojupyter, sphinxcontrib.youtube, sphinx.ext.todo, sphinx_exercise, sphinx_togglebutton]
   config:
     nb_render_priority:
       html:
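
The new labels also make exercises linkable from elsewhere in the lectures (several commit bullets mention fixing exercise references and links). Assuming sphinx-exercise's standard cross-referencing behavior, a link to a labeled exercise is an ordinary MyST reference, e.g. using a label from the ar1_processes.md diff below:

```md
See {ref}`ar1p_ex1` for the simulation exercise, or {numref}`ar1p_ex1` for a numbered reference.
```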

lectures/ar1_processes.md

Lines changed: 31 additions & 6 deletions
@@ -322,7 +322,8 @@ important concept for statistics and simulation.
 
 ## Exercises
 
-### Exercise 1
+```{exercise}
+:label: ar1p_ex1
 
 Let $k$ be a natural number.
 
@@ -355,8 +356,11 @@ $$
 when $m$ is large.
 
 Confirm this by simulation at a range of $k$ using the default parameters from the lecture.
+```
+
 
-### Exercise 2
+```{exercise}
+:label: ar1p_ex2
 
 Write your own version of a one dimensional [kernel density
 estimator](https://en.wikipedia.org/wiki/Kernel_density_estimation),
@@ -398,8 +402,11 @@ Use $n=500$.
 
 Make a comment on your results. (Do you think this is a good estimator
 of these distributions?)
+```
+
 
-### Exercise 3
+```{exercise}
+:label: ar1p_ex3
 
 In the lecture we discussed the following fact: for the $AR(1)$ process
 
@@ -438,10 +445,14 @@ color) as follows:
 Try this for $n=2000$ and confirm that the
 simulation based estimate of $\psi_{t+1}$ does converge to the
 theoretical distribution.
+```
+
 
 ## Solutions
 
-### Exercise 1
+```{solution-start} ar1p_ex1
+:class: dropdown
+```
 
 ```{code-cell} python3
 from numba import njit
@@ -479,7 +490,13 @@ ax.legend()
 plt.show()
 ```
 
-### Exercise 2
+```{solution-end}
+```
+
+
+```{solution-start} ar1p_ex2
+:class: dropdown
+```
 
 Here is one solution:
 
@@ -532,7 +549,13 @@ for α, β in parameter_pairs:
 We see that the kernel density estimator is effective when the underlying
 distribution is smooth but less so otherwise.
 
-### Exercise 3
+```{solution-end}
+```
+
+
+```{solution-start} ar1p_ex3
+:class: dropdown
+```
 
 Here is our solution
 
@@ -579,3 +602,5 @@ plt.show()
 The simulated distribution approximately coincides with the theoretical
 distribution, as predicted.
 
+```{solution-end}
+```

lectures/cake_eating_numerical.md

Lines changed: 20 additions & 4 deletions
@@ -482,7 +482,8 @@ This is due to
 
 ## Exercises
 
-### Exercise 1
+```{exercise}
+:label: cen_ex1
 
 Try the following modification of the problem.
 
@@ -500,15 +501,22 @@ where $\alpha$ is a parameter satisfying $0 < \alpha < 1$.
 Make the required changes to value function iteration code and plot the value and policy functions.
 
 Try to reuse as much code as possible.
+```
+
 
-### Exercise 2
+```{exercise}
+:label: cen_ex2
 
 Implement time iteration, returning to the original case (i.e., dropping the
 modification in the exercise above).
+```
+
 
 ## Solutions
 
-### Exercise 1
+```{solution-start} cen_ex1
+:class: dropdown
+```
 
 We need to create a class to hold our primitives and return the right hand side of the Bellman equation.
 
@@ -582,7 +590,13 @@ plt.show()
 
 Consumption is higher when $\alpha < 1$ because, at least for large $x$, the return to savings is lower.
 
-### Exercise 2
+```{solution-end}
+```
+
+
+```{solution-start} cen_ex2
+:class: dropdown
+```
 
 Here's one way to implement time iteration.
 
@@ -670,3 +684,5 @@ ax.legend(fontsize=12)
 plt.show()
 ```
 
+```{solution-end}
+```

lectures/cake_eating_problem.md

Lines changed: 6 additions & 3 deletions
@@ -506,7 +506,8 @@ Combining this fact with {eq}`bellman_envelope` recovers the Euler equation.
 
 ## Exercises
 
-### Exercise 1
+```{exercise}
+:label: cep_ex1
 
 How does one obtain the expressions for the value function and optimal policy
 given in {eq}`crra_vstar` and {eq}`crra_opt_pol` respectively?
@@ -523,10 +524,12 @@ Starting from this conjecture, try to obtain the solutions {eq}`crra_vstar` and
 
 In doing so, you will need to use the definition of the value function and the
 Bellman equation.
+```
 
 ## Solutions
 
-### Exercise 1
+```{solution} cep_ex1
+:class: dropdown
 
 We start with the conjecture $c_t^*=\theta x_t$, which leads to a path
 for the state variable (cake size) given by
@@ -611,4 +614,4 @@ v^*(x_t) = \left(1-\beta^\frac{1}{\gamma}\right)^{-\gamma}u(x_t)
 $$
 
 Our claims are now verified.
-
+```

lectures/career.md

Lines changed: 34 additions & 10 deletions
@@ -362,8 +362,9 @@ the worker cannot change careers without changing jobs.
 
 ## Exercises
 
-(career_ex1)=
-### Exercise 1
+```{exercise-start}
+:label: career_ex1
+```
 
 Using the default parameterization in the class `CareerWorkerProblem`,
 generate and plot typical sample paths for $\theta$ and $\epsilon$
@@ -372,13 +373,16 @@ when the worker follows the optimal policy.
 In particular, modulo randomness, reproduce the following figure (where the horizontal axis represents time)
 
 ```{figure} /_static/lecture_specific/career/career_solutions_ex1_py.png
-
 ```
 
 Hint: To generate the draws from the distributions $F$ and $G$, use `quantecon.random.draw()`.
 
-(career_ex2)=
-### Exercise 2
+```{exercise-end}
+```
+
+
+```{exercise}
+:label: career_ex2
 
 Let's now consider how long it takes for the worker to settle down to a
 permanent job, given a starting point of $(\theta, \epsilon) = (0, 0)$.
@@ -402,16 +406,21 @@ $$
 Collect 25,000 draws of this random variable and compute the median (which should be about 7).
 
 Repeat the exercise with $\beta=0.99$ and interpret the change.
+```
+
 
-(career_ex3)=
-### Exercise 3
+```{exercise}
+:label: career_ex3
 
 Set the parameterization to `G_a = G_b = 100` and generate a new optimal policy
 figure -- interpret.
+```
 
 ## Solutions
 
-### Exercise 1
+```{solution-start} career_ex1
+:class: dropdown
+```
 
 Simulate job/career paths.
 
@@ -455,7 +464,13 @@ plt.legend()
 plt.show()
 ```
 
-### Exercise 2
+```{solution-end}
+```
+
+
+```{solution-start} career_ex2
+:class: dropdown
+```
 
 The median for the original parameterization can be computed as follows
 
@@ -498,7 +513,13 @@ The medians are subject to randomness but should be about 7 and 14 respectively.
 
 Not surprisingly, more patient workers will wait longer to settle down to their final job.
 
-### Exercise 3
+```{solution-end}
+```
+
+
+```{solution-start} career_ex3
+:class: dropdown
+```
 
 ```{code-cell} python3
 cw = CareerWorkerProblem(G_a=100, G_b=100)
@@ -522,3 +543,6 @@ In the new figure, you see that the region for which the worker
 stays put has grown because the distribution for $\epsilon$
 has become more concentrated around the mean, making high-paying jobs
 less realistic.
+
+```{solution-end}
+```
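
Note the variant at the top of the career.md diff: because the first exercise's body itself contains a `{figure}` directive, the commit uses the paired `{exercise-start}` / `{exercise-end}` gated syntax rather than a single fenced `{exercise}` block, avoiding nested fences (the "[career] fix nested directives issue" bullet in the commit message). A sketch of that pattern, reusing the label and figure path from the diff above:

````md
```{exercise-start}
:label: career_ex1
```

Exercise text that needs another directive inside it:

```{figure} /_static/lecture_specific/career/career_solutions_ex1_py.png
```

```{exercise-end}
```
````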

lectures/cass_koopmans_1.md

Lines changed: 11 additions & 0 deletions
@@ -866,17 +866,28 @@ state in which $f'(K)=\rho +\delta$.
 
 ### Exercise
 
+```{exercise}
+:label: ck1_ex1
+
 - Plot the optimal consumption, capital, and saving paths when the
   initial capital level begins at 1.5 times the steady state level
   as we shoot towards the steady state at $T=130$.
 - Why does the saving rate respond as it does?
+```
 
 ### Solution
 
+```{solution-start} ck1_ex1
+:class: dropdown
+```
+
 ```{code-cell} python3
 plot_saving_rate(pp, 0.3, k_ss*1.5, [130], k_ter=k_ss, k_ss=k_ss, s_ss=s_ss)
 ```
 
+```{solution-end}
+```
+
 ## Concluding Remarks
 
 In {doc}`Cass-Koopmans Competitive Equilibrium <cass_koopmans_2>`, we study a decentralized version of an economy with exactly the same

lectures/coleman_policy_iter.md

Lines changed: 8 additions & 2 deletions
@@ -424,7 +424,8 @@ and accuracy, at least for this model.
 
 ## Exercises
 
-### Exercise 1
+```{exercise}
+:label: cpi_ex1
 
 Solve the model with CRRA utility
 
@@ -435,10 +436,13 @@ $$
 Set `γ = 1.5`.
 
 Compute and plot the optimal policy.
+```
 
 ## Solutions
 
-### Exercise 1
+```{solution-start} cpi_ex1
+:class: dropdown
+```
 
 We use the class `OptimalGrowthModel_CRRA` from our {doc}`VFI lecture <optgrowth_fast>`.
 
@@ -468,3 +472,5 @@ ax.legend()
 plt.show()
 ```
 
+```{solution-end}
+```

0 commit comments
