The predictive distribution {eq}`ar1-tp-eq3` assumes that the parameters $\rho, \sigma$ are known, which we express by conditioning on them.
We also want to compute a predictive distribution that does not condition on $\rho,\sigma$ but instead takes account of our uncertainty about them.
We form this predictive distribution by integrating {eq}`ar1-tp-eq3` with respect to a joint posterior distribution $\pi_t(\rho,\sigma | y^t)$ that conditions on an observed history $y^t = \{y_s\}_{s=0}^t$:
$$
f(y_{t+j} | y^t) = \int f(y_{t+j} | y_{t}; \rho, \sigma) \pi_t(\rho,\sigma | y^t ) d \rho d \sigma
$$ (ar1-tp-eq4)
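To see what this integral means operationally, here is a small Monte Carlo sketch that averages the known-parameter conditional densities over parameter draws. The posterior draws below are hypothetical uniform stand-ins for the pymc output computed later in the lecture, and the conditional moments are the standard AR(1) results.

```{code-cell} ipython3
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical posterior draws of (rho, sigma); in the lecture these
# come from the pymc posterior sampler
rhos = rng.uniform(0.4, 0.6, 5000)
sigmas = rng.uniform(0.9, 1.1, 5000)

y_t, j = 1.0, 3

# Known-parameter conditional moments of y_{t+j} given y_t (standard
# AR(1) results): mean rho^j y_t, variance sigma^2 (1-rho^{2j})/(1-rho^2)
means = rhos**j * y_t
sds = np.sqrt(sigmas**2 * (1 - rhos**(2 * j)) / (1 - rhos**2))

# Approximate the integral by averaging the conditional densities
# over the posterior draws
grid = np.linspace(-4.0, 4.0, 81)
density = norm.pdf(grid[:, None], loc=means, scale=sds).mean(axis=1)
```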
Predictive distribution {eq}`ar1-tp-eq3` assumes that parameters $(\rho,\sigma)$ are known.
Predictive distribution {eq}`ar1-tp-eq4` assumes that parameters $(\rho,\sigma)$ are uncertain, but have known probability distribution $\pi_t(\rho,\sigma | y^t )$.
We also want to compute some predictive distributions of "sample path statistics" that might include, for example
- the time until the next "recession",
- the minimum value of $Y$ over the next 8 periods,
To accomplish that for situations in which we are uncertain about parameter values, we proceed as follows:

- for each draw $n=0,1,...,N$, simulate a "future path" of length $T_1$ with parameters $\left(\rho_n,\sigma_n\right)$ and compute our three "sample path statistics";
- finally, plot the desired statistics from the $N$ samples as an empirical distribution.
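The steps above can be sketched end to end as follows. The posterior draws, the last observed value, and the statistic chosen (the minimum over the next 8 periods) are illustrative stand-ins, not the lecture's actual numbers.

```{code-cell} ipython3
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws; in the lecture these come from pymc
N, T1 = 1000, 50
rho_draws = rng.uniform(0.4, 0.6, N)
sigma_draws = rng.uniform(0.9, 1.1, N)

y_last = 0.5          # assumed last observed value
min_over_8 = np.empty(N)

for n in range(N):
    # simulate a "future path" of length T1 with parameters (rho_n, sigma_n)
    path = np.empty(T1)
    y = y_last
    for t in range(T1):
        y = rho_draws[n] * y + sigma_draws[n] * rng.standard_normal()
        path[t] = y
    # one of the sample path statistics: minimum of Y over the next 8 periods
    min_over_8[n] = path[:8].min()

# `min_over_8` is now an empirical approximation to the predictive
# distribution of the statistic
```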
## Implementation

First, we'll simulate a sample path from which to launch our forecasts.
In addition to plotting the sample path, under the assumption that the true parameter values are known, we'll plot $.9$ and $.95$ coverage intervals using conditional distribution {eq}`ar1-tp-eq3` described above.

We'll also plot a bunch of samples of sequences of future values and watch where they fall relative to the coverage interval.
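Such pointwise coverage intervals can be sketched directly from the known-parameter conditional distribution. The parameter values below are illustrative, and the moment formulas are the standard AR(1) results $\mathrm{E}[y_{t+j} \mid y_t] = \rho^j y_t$ and $\mathrm{Var}(y_{t+j} \mid y_t) = \sigma^2 \frac{1 - \rho^{2j}}{1 - \rho^2}$.

```{code-cell} ipython3
import numpy as np
from scipy.stats import norm

# Illustrative parameter values and forecast horizons
rho, sigma, y_t = 0.5, 1.0, 1.0
j = np.arange(1, 21)

# Standard AR(1) conditional moments of y_{t+j} given y_t
mean = rho**j * y_t
sd = np.sqrt(sigma**2 * (1 - rho**(2 * j)) / (1 - rho**2))

# Pointwise .9 and .95 coverage intervals
lower90, upper90 = norm.ppf(0.05, mean, sd), norm.ppf(0.95, mean, sd)
lower95, upper95 = norm.ppf(0.025, mean, sd), norm.ppf(0.975, mean, sd)
```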
```{code-cell} ipython3
def AR1_simulate(rho, sigma, y0, T):
    # (Body reconstructed as a sketch; assumes numpy imported as np.)
    # Allocate space and draw epsilons
    y = np.empty(T)
    eps = np.random.normal(0., sigma, T)

    # Initial condition and step forward
    y[0] = y0
    for t in range(1, T):
        y[t] = rho * y[t-1] + eps[t]

    return y
```
Wecker {cite}`wecker1979predicting` proposed using simulation techniques to characterize predictive distributions of statistics that are nonlinear functions of $y$.

He called these functions "path properties" to contrast them with properties of single data points.

He studied two special prospective path properties of a given series $\{y_t\}$.
The first was **time until the next turning point**

* he defined a **"turning point"** to be the date of the second of two successive declines in $y$.

To examine this statistic, let $Z$ be an indicator process
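The turning-point definition can be sketched directly in code. The function name and the return convention (the length of the path when no turning point occurs) are illustrative choices, not Wecker's notation.

```{code-cell} ipython3
import numpy as np

def time_until_turning_point(path):
    # A turning point is the date of the second of two successive
    # declines in y; return the first such date, or len(path) if
    # none occurs in the simulated path
    for t in range(2, len(path)):
        if path[t] < path[t - 1] < path[t - 2]:
            return t
    return len(path)

example = np.array([1.0, 1.2, 1.1, 0.9, 1.3])
```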
* consider the sets $\{W_t(\omega_i)\}^{T}_{i=1}, \ \{W_{t+1}(\omega_i)\}^{T}_{i=1}, \ \dots, \ \{W_{t+N}(\omega_i)\}^{T}_{i=1}$ as samples from the predictive distributions $f(W_{t+1} \mid y_t, \dots)$, $f(W_{t+2} \mid y_t, y_{t-1}, \dots)$, $\dots$, $f(W_{t+N} \mid y_t, y_{t-1}, \dots)$.
## Using Simulations to Approximate a Posterior Distribution
The next code cells use `pymc` to compute the time $t$ posterior distribution of $\rho, \sigma$.
Note that in defining the likelihood function, we choose to condition on the initial value $y_0$.
```{code-cell} ipython3
def draw_from_posterior(sample):
    """
    Draw a sample of size N from the posterior distribution of
    (rho, sigma), conditioning on the observed path `sample` and on
    its initial value.  (Body reconstructed as a sketch; it assumes
    pymc is imported as pmc.)
    """
    AR1_model = pmc.Model()

    with AR1_model:
        # Priors for the unknown parameters
        rho = pmc.Uniform('rho', lower=-1., upper=1.)
        sigma = pmc.HalfNormal('sigma', sigma=10)

        # Likelihood: y_{t+1} | y_t ~ N(rho * y_t, sigma^2),
        # conditioning on the initial value sample[0]
        y_like = pmc.Normal('y_obs', mu=rho * sample[:-1],
                            sigma=sigma, observed=sample[1:])

        trace = pmc.sample(10000, tune=5000)

    # Flatten posterior draws across chains
    rhos = trace.posterior.rho.values.flatten()
    sigmas = trace.posterior.sigma.values.flatten()

    return {'rho': rhos, 'sigma': sigmas}
```
The graphs on the left portray posterior marginal distributions.
## Calculating Sample Path Statistics

Our next step is to prepare Python code to compute our sample path statistics.
```{code-cell} ipython3
# The original cell's contents are elided in this excerpt; below is a
# sketch of one of the sample path statistics described above
def minimum_value(path):
    # Minimum value of Y over the next 8 periods
    return np.min(path[:8])
```

## Extended Wecker Method
Now we apply our "extended" Wecker method based on predictive densities of $y$ defined by {eq}`ar1-tp-eq4` that acknowledge posterior uncertainty in the parameters $\rho, \sigma$.

To approximate the integration on the right side of {eq}`ar1-tp-eq4`, we repeatedly draw parameters from the joint posterior distribution each time we simulate a sequence of future values from model {eq}`ar1-tp-eq1`.
```{code-cell} ipython3
def plot_extended_Wecker(post_samples, initial_path, N, ax):
    """
    Plot the extended Wecker method: make N draws of (rho, sigma) from
    the joint posterior and simulate a future path for each draw.
    (Body reconstructed as a sketch; the original is truncated here.)
    """
    # Draw N parameter pairs from the posterior samples
    index = np.random.randint(0, len(post_samples['rho']), N)
    rhos = post_samples['rho'][index]
    sigmas = post_samples['sigma'][index]

    # Plot the observed history, then one simulated future path per draw
    T1 = len(initial_path)
    ax.plot(np.arange(T1), initial_path, color='b')
    for n in range(N):
        future_path = AR1_simulate(rhos[n], sigmas[n],
                                   initial_path[-1], T1)
        ax.plot(np.arange(T1 - 1, 2 * T1 - 1), future_path,
                color='grey', alpha=0.3)
```
lectures/bayes_nonconj.md
```{code-cell} ipython3
SVI_num_steps = 5000
true_theta = 0.8
```
### Beta Prior and Posteriors

Let's compare outcomes when we use a Beta prior.
Here the MCMC approximation looks good.

But the VI approximation doesn't look so good.
* even though we use the beta distribution as our guide, the VI approximated posterior distributions do not closely resemble the posteriors that we had just computed analytically.

(Here, our initial parameter for Beta guide is (0.5, 0.5).)
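For reference, the analytically computed posteriors mentioned above come from Beta-Bernoulli conjugacy. The prior parameters and data counts below are illustrative; the $(0.5, 0.5)$ pair is borrowed from the guide's initialization, not necessarily the lecture's prior or data.

```{code-cell} ipython3
def beta_posterior(a, b, successes, failures):
    # Conjugate update: Beta(a, b) prior plus s successes and f
    # failures yields a Beta(a + s, b + f) posterior
    return a + successes, b + failures

# Illustrative example: 8 successes in 10 trials
a_post, b_post = beta_posterior(0.5, 0.5, successes=8, failures=2)
```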