Commit 0527c24

Tom's March 25 edit of one lecture
1 parent 7419a3a commit 0527c24

File tree: 1 file changed


lectures/wald_friedman.md

Lines changed: 18 additions & 17 deletions
@@ -3,8 +3,10 @@ jupytext:
   text_representation:
     extension: .md
     format_name: myst
+    format_version: 0.13
+    jupytext_version: 1.11.5
 kernelspec:
-  display_name: Python 3
+  display_name: Python 3 (ipykernel)
   language: python
   name: python3
 ---
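Note: assembled from the context and `+` lines of the hunk above, the lecture's front matter after this commit should read as follows (the opening `---` and the `jupytext:` key sit just above the hunk and are inferred from the `@@` context line):

```markdown
---
jupytext:
  text_representation:
    extension: .md
    format_name: myst
    format_version: 0.13
    jupytext_version: 1.11.5
kernelspec:
  display_name: Python 3 (ipykernel)
  language: python
  name: python3
---
```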
@@ -31,10 +33,9 @@ kernelspec:
 
 In addition to what's in Anaconda, this lecture will need the following libraries:
 
-```{code-cell} ipython
----
-tags: [hide-output]
----
+```{code-cell} ipython3
+:tags: [hide-output]
+
 !conda install -y quantecon
 !pip install interpolation
 ```
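Note: MyST directives accept options either in a `---`-delimited YAML block or as compact `:key: value` lines, and this hunk switches the install cell from the former to the latter (while also renaming the cell language from `ipython` to `ipython3`). A before/after sketch assembled from the `-` and `+` lines above:

````markdown
<!-- before: options in a YAML option block -->
```{code-cell} ipython
---
tags: [hide-output]
---
!conda install -y quantecon
!pip install interpolation
```

<!-- after: the same tag as a colon-prefixed option line -->
```{code-cell} ipython3
:tags: [hide-output]

!conda install -y quantecon
!pip install interpolation
```
````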
@@ -64,7 +65,7 @@ Key ideas in play will be:
 
 We'll begin with some imports:
 
-```{code-cell} ipython
+```{code-cell} ipython3
 import numpy as np
 import matplotlib.pyplot as plt
 from numba import jit, prange, float64, int64
@@ -185,7 +186,7 @@ The next figure shows two beta distributions in the top panel.
 
 The bottom panel presents mixtures of these distributions, with various mixing probabilities $\pi_k$
 
-```{code-cell} python3
+```{code-cell} ipython3
 @jit(nopython=True)
 def p(x, a, b):
     r = gamma(a + b) / (gamma(a) * gamma(b))
@@ -393,7 +394,7 @@ $$
 
 First, we will construct a `jitclass` to store the parameters of the model
 
-```{code-cell} python3
+```{code-cell} ipython3
 wf_data = [('a0', float64),    # Parameters of beta distributions
            ('b0', float64),
            ('a1', float64),
@@ -408,7 +409,7 @@ wf_data = [('a0', float64), # Parameters of beta distributions
            ('z1', float64[:])]
 ```
 
-```{code-cell} python3
+```{code-cell} ipython3
 @jitclass(wf_data)
 class WaldFriedman:
 
@@ -467,7 +468,7 @@ As in the {doc}`optimal growth lecture <optgrowth>`, to approximate a continuous
 
 We define the operator function `Q` below.
 
-```{code-cell} python3
+```{code-cell} ipython3
 @jit(nopython=True, parallel=True)
 def Q(h, wf):
 
@@ -502,7 +503,7 @@ def Q(h, wf):
 
 To solve the model, we will iterate using `Q` to find the fixed point
 
-```{code-cell} python3
+```{code-cell} ipython3
 @jit(nopython=True)
 def solve_model(wf, tol=1e-4, max_iter=1000):
     """
@@ -534,7 +535,7 @@ Let's inspect outcomes.
 
 We will be using the default parameterization with distributions like so
 
-```{code-cell} python3
+```{code-cell} ipython3
 wf = WaldFriedman()
 
 fig, ax = plt.subplots(figsize=(10, 6))
@@ -550,14 +551,14 @@ plt.show()
 
 To solve the model, we will call our `solve_model` function
 
-```{code-cell} python3
+```{code-cell} ipython3
 h_star = solve_model(wf)    # Solve the model
 ```
 
 We will also set up a function to compute the cutoffs $\alpha$ and $\beta$
 and plot these on our value function plot
 
-```{code-cell} python3
+```{code-cell} ipython3
 @jit(nopython=True)
 def find_cutoff_rule(wf, h):
 
@@ -637,7 +638,7 @@ On the right is the fraction of correct decisions at the stopping time.
 
 In this case, the decision-maker is correct 80% of the time
 
-```{code-cell} python3
+```{code-cell} ipython3
 def simulate(wf, true_dist, h_star, π_0=0.5):
 
     """
@@ -741,7 +742,7 @@ Before you look, think about what will happen:
 - Will the decision-maker be correct more or less often?
 - Will he make decisions sooner or later?
 
-```{code-cell} python3
+```{code-cell} ipython3
 wf = WaldFriedman(c=2.5)
 simulation_plot(wf)
 ```
@@ -940,4 +941,4 @@ We'll dig deeper into some of the ideas used here in the following lectures:
 * {doc}`this lecture <likelihood_ratio_process>` describes **likelihood ratio processes** and their role in frequentist and Bayesian statistical theories
 * {doc}`this lecture <likelihood_bayes>` discusses the role of likelihood ratio processes in **Bayesian learning**
 * {doc}`this lecture <navy_captain>` returns to the subject of this lecture and studies whether the Captain's hunch that the (frequentist) decision rule
-  that the Navy had ordered him to use can be expected to be better or worse than the sequential rule that Abraham Wald designed
+  that the Navy had ordered him to use can be expected to be better or worse than the sequential rule that Abraham Wald designed
