
Commit f7e226d

Tom's April 21 edits of two lectures
1 parent 2ce7500 commit f7e226d

2 files changed: 20 additions, 18 deletions

lectures/exchangeable.md

Lines changed: 3 additions & 4 deletions
@@ -717,8 +717,7 @@ We'll dig deeper into some of the ideas used here in the following lectures:
 
 * {doc}`this lecture <likelihood_ratio_process>` describes **likelihood ratio processes**
 and their role in frequentist and Bayesian statistical theories
-* {doc}`this lecture <navy_captain>` returns to the subject of this lecture and studies
-whether the Captain's hunch that the (frequentist) decision rule that the Navy had ordered
-him to use can be expected to be better or worse than the rule sequential rule that Abraham
-Wald designed
+* {doc}`this lecture <navy_captain>` studies whether a World War II US Navy Captain's hunch that a (frequentist) decision rule that the Navy had told
+him to use was actually inferior to a sequential rule that Abraham
+Wald would soon design

lectures/wald_friedman.md

Lines changed: 17 additions & 14 deletions
@@ -124,7 +124,7 @@ We'll formulate the problem using dynamic programming.
 The following presentation of the problem closely follows Dimitri
 Bertsekas's treatment in **Dynamic Programming and Stochastic Control** {cite}`Bertekas75`.
 
-A decision-maker observes a sequence of draws of a random variable $z$.
+A decision-maker can observe a sequence of draws of a random variable $z$.
 
 He (or she) wants to know which of two probability distributions $f_0$ or $f_1$ governs $z$.
 
@@ -137,19 +137,23 @@ random variables is also independently and identically distributed (IID).
 But the observer does not know which of the two distributions generated the sequence.
 
 For reasons explained in [Exchangeability and Bayesian Updating](https://python.quantecon.org/exchangeable.html), this means that the sequence is not
-IID and that the observer has something to learn, even though he knows both $f_0$ and $f_1$.
+IID.
 
-The decision maker chooses a number of draws (i.e., random samples from the unknown distribution) and uses them to decide
+The observer has something to learn, namely, whether the observations are drawn from $f_0$ or from $f_1$.
+
+The decision maker wants to decide
 which of the two distributions is generating outcomes.
 
-He starts with prior
+We adopt a Bayesian formulation.
+
+The decision maker begins with a prior probability
 
 $$
 \pi_{-1} =
 \mathbb P \{ f = f_0 \mid \textrm{ no observations} \} \in (0, 1)
 $$
 
-After observing $k+1$ observations $z_k, z_{k-1}, \ldots, z_0$, he updates this value to
+After observing $k+1$ observations $z_k, z_{k-1}, \ldots, z_0$, he updates his personal probability that the observations are described by distribution $f_0$ to
 
 $$
 \pi_k = \mathbb P \{ f = f_0 \mid z_k, z_{k-1}, \ldots, z_0 \}
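For context, the posterior defined in the added lines obeys a standard recursive Bayes update (a sketch consistent with the definitions in this hunk; the lecture's own derivation sits outside the diff):

$$
\pi_k = \frac{\pi_{k-1} f_0(z_k)}{\pi_{k-1} f_0(z_k) + (1 - \pi_{k-1}) f_1(z_k)},
\qquad k = 0, 1, 2, \ldots
$$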
@@ -252,15 +256,15 @@ So when we treat $f=f_0$ as the null hypothesis
 
 ### Intuition
 
-Let's try to guess what an optimal decision rule might look like before we go further.
+Before proceeding, let's try to guess what an optimal decision rule might look like.
 
 Suppose at some given point in time that $\pi$ is close to 1.
 
 Then our prior beliefs and the evidence so far point strongly to $f = f_0$.
 
 If, on the other hand, $\pi$ is close to 0, then $f = f_1$ is strongly favored.
 
-Finally, if $\pi$ is in the middle of the interval $[0, 1]$, then we have little information in either direction.
+Finally, if $\pi$ is in the middle of the interval $[0, 1]$, then we are confronted with more uncertainty.
 
 This reasoning suggests a decision rule such as the one shown in the figure
 
@@ -270,8 +274,7 @@ This reasoning suggests a decision rule such as the one shown in the figure
 
 As we'll see, this is indeed the correct form of the decision rule.
 
-The key problem is to determine the threshold values $\alpha, \beta$,
-which will depend on the parameters listed above.
+Our problem is to determine threshold values $\alpha, \beta$ that somehow depend on the parameters described above.
 
 You might like to pause at this point and try to predict the impact of a
 parameter such as $c$ or $L_0$ on $\alpha$ or $\beta$.
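Since the figure itself is not reproduced in the diff, the rule it depicts has roughly this form (a sketch consistent with the surrounding text and with the later hunk that stops when the probability falls below $\beta$ or above $\alpha$; the treatment of the boundary points is the lecture's choice):

$$
\text{decision}(\pi) =
\begin{cases}
\text{accept } f_0 & \text{if } \pi \geq \alpha, \\
\text{continue drawing} & \text{if } \beta < \pi < \alpha, \\
\text{accept } f_1 & \text{if } \pi \leq \beta .
\end{cases}
$$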
@@ -326,7 +329,7 @@ where $\pi \in [0,1]$ and
 $f_0$ (i.e., the cost of making a type II error).
 - $\pi L_1$ is the expected loss associated with accepting
 $f_1$ (i.e., the cost of making a type I error).
-- $h(\pi) := c + \mathbb E [J(\pi')]$ the continuation value; i.e.,
+- $h(\pi) := c + \mathbb E [J(\pi')]$; this is the continuation value; i.e.,
 the expected cost associated with drawing one more $z$.
 
 The optimal decision rule is characterized by two numbers $\alpha, \beta \in (0,1) \times (0,1)$ that satisfy
@@ -354,7 +357,7 @@
 Our aim is to compute the value function $J$, and from it the associated cutoffs $\alpha$
 and $\beta$.
 
-To make our computations simpler, using {eq}`optdec`, we can write the continuation value $h(\pi)$ as
+To make our computations manageable, using {eq}`optdec`, we can write the continuation value $h(\pi)$ as
 
 ```{math}
 :label: optdec2
@@ -375,7 +378,7 @@ h(\pi) =
 c + \int \min \{ (1 - \kappa(z', \pi) ) L_0, \kappa(z', \pi) L_1, h(\kappa(z', \pi) ) \} f_\pi (z') dz'
 ```
 
-can be understood as a functional equation, where $h$ is the unknown.
+is a **functional equation** in an unknown function $h$.
 
 Using the functional equation, {eq}`funceq`, for the continuation value, we can back out
 optimal choices using the right side of {eq}`optdec`.
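The fixed-point logic that the next hunks rely on can be illustrated with a short, self-contained sketch. Everything below (the Beta densities, the costs `c`, `L0`, `L1`, the grids, and the helper names `kappa` and `Q`) is an illustrative assumption, not the lecture's own code or calibration:

```python
import numpy as np
from scipy.stats import beta

# Illustrative primitives (assumptions, not the lecture's calibration)
f0, f1 = beta(1, 1).pdf, beta(4, 4).pdf      # candidate densities for z
c, L0, L1 = 1.25, 25.0, 25.0                 # per-draw cost and terminal losses

pi_grid = np.linspace(1e-4, 1 - 1e-4, 200)   # grid of beliefs pi
z_grid = np.linspace(1e-4, 1 - 1e-4, 200)    # integration nodes for z'
dz = z_grid[1] - z_grid[0]

def kappa(z, pi):
    """Bayes update of the belief pi after observing z."""
    num = pi * f0(z)
    return num / (num + (1 - pi) * f1(z))

def Q(h_vals):
    """Apply the right side of the functional equation to a guess for h."""
    h_new = np.empty_like(h_vals)
    for i, pi in enumerate(pi_grid):
        kz = kappa(z_grid, pi)                          # updated beliefs at each z'
        cont = np.interp(kz, pi_grid, h_vals)           # interpolated h(kappa(z', pi))
        inside = np.minimum(np.minimum((1 - kz) * L0, kz * L1), cont)
        f_pi = pi * f0(z_grid) + (1 - pi) * f1(z_grid)  # predictive density of z'
        h_new[i] = c + np.sum(inside * f_pi) * dz       # c + E[min{...}]
    return h_new

# Iterate h <- Q(h) until (approximate) convergence, i.e., find the fixed point
h = np.zeros_like(pi_grid)
for _ in range(1000):
    h_next = Q(h)
    if np.max(np.abs(h_next - h)) < 1e-6:
        h = h_next
        break
    h = h_next

# Continuation is optimal where h(pi) is below both terminal losses;
# the endpoints of that region approximate the cutoffs beta and alpha
cont_region = h < np.minimum((1 - pi_grid) * L0, pi_grid * L1)
beta_cut, alpha_cut = pi_grid[cont_region][[0, -1]]
print(beta_cut, alpha_cut)
```

The lecture's own `Q` operator, partially visible in the next hunk, plays the same role but with numba acceleration and the lecture's actual primitives.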
@@ -501,7 +504,7 @@ def Q(h, wf):
     return h_new
 ```
 
-To solve the model, we will iterate using `Q` to find the fixed point
+To solve the key functional equation, we will iterate using `Q` to find the fixed point
 
 ```{code-cell} ipython3
 @jit(nopython=True)
@@ -630,7 +633,7 @@ model $f_0$ falls below $\beta$ or above $\alpha$.
 
 The next figure shows the outcomes of 500 simulations of the decision process.
 
-On the left is a histogram of the stopping times, which equal the number of draws of $z_k$ required to make a decision.
+On the left is a histogram of **stopping times**, i.e., the number of draws of $z_k$ required to make a decision.
 
 The average number of draws is around 6.6.
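For readers skimming the diff, each simulated stopping time comes from running the decision process until the belief exits the continuation interval. A minimal sketch, with illustrative densities and thresholds rather than the lecture's calibration:

```python
import numpy as np
from scipy.stats import beta

f0, f1 = beta(1, 1), beta(4, 4)      # assumed candidate models (illustrative)
alpha_cut, beta_cut = 0.8, 0.2       # assumed cutoffs; the lecture computes its own
rng = np.random.default_rng(0)

def stopping_time(pi0=0.5):
    """Number of draws until the belief pi leaves (beta_cut, alpha_cut)."""
    pi, n = pi0, 0
    while beta_cut < pi < alpha_cut:
        z = f0.rvs(random_state=rng)                 # here nature draws from f0
        num = pi * f0.pdf(z)
        pi = num / (num + (1 - pi) * f1.pdf(z))      # Bayes update of the belief
        n += 1
    return n

draws = [stopping_time() for _ in range(500)]
print(np.mean(draws))   # sample average of the 500 stopping times
```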
