The following code generates the graph below that displays Bayesian posteriors for $\alpha$ at various history lengths.

```{code-cell} ipython3
# NOTE: the body of this cell is a reconstructed sketch.  Nature's true
# mixing parameter α = 0.8 comes from the text; the Beta specifications
# of f and g and the simulation details are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta

np.random.seed(0)

α_true = 0.8                        # nature's true mixing parameter
f = lambda w: beta.pdf(w, 1, 1)     # density f (assumed Beta(1, 1))
g = lambda w: beta.pdf(w, 3, 1.2)   # density g (assumed Beta(3, 1.2))

# simulate a history w_1, ..., w_T from the mixture h = α f + (1 - α) g
T = 1000
sel = np.random.rand(T) < α_true
w = np.where(sel, np.random.beta(1, 1, T), np.random.beta(3, 1.2, T))

# grid posterior for α: uniform prior updated by Bayes' law
α_grid = np.linspace(0.001, 0.999, 200)
lik = α_grid[:, None] * f(w) + (1 - α_grid[:, None]) * g(w)

fig, ax = plt.subplots()
for t in (10, 50, 100, 500, 1000):
    log_post = np.log(lik[:, :t]).sum(axis=1)
    post = np.exp(log_post - log_post.max())
    post /= post.sum() * (α_grid[1] - α_grid[0])   # normalize to a density
    ax.plot(α_grid, post, label=f'$t={t}$')

ax.axvline(α_true, color='k', ls='--', alpha=0.5)
ax.legend()
ax.set_xlabel('$\\alpha$')
plt.show()
```
Evidently, the Bayesian posterior narrows in on the true value $\alpha = .8$ of the mixing parameter as the length of a history of observations grows.
## Concluding Remarks
Our type 1 person deploys an incorrect statistical model: he believes that either $f$ or $g$ generated the $w$ process but does not know which one. That is wrong because nature is actually mixing each period with mixing probability $\alpha$.

Our type 1 agent eventually believes that either $f$ or $g$ generated the $w$ sequence, the outcome being determined by whichever of the two models has the smaller KL divergence relative to $h$.
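
To check which model that is, we can compute the two KL divergences numerically. The sketch below does so by quadrature; the Beta specifications of $f$ and $g$ are the same illustrative assumptions used in the code cell above.

```{code-cell} ipython3
# Sketch (assumptions as noted above): compute KL(h, f) and KL(h, g),
# where h = α f + (1 - α) g is nature's true mixture density.
import numpy as np
from scipy.integrate import quad
from scipy.stats import beta

α_true = 0.8
f = lambda w: beta.pdf(w, 1, 1)
g = lambda w: beta.pdf(w, 3, 1.2)
h = lambda w: α_true * f(w) + (1 - α_true) * g(w)

def KL(p, q):
    """Kullback-Leibler divergence ∫ p(w) log(p(w)/q(w)) dw on (0, 1)."""
    integrand = lambda w: p(w) * np.log(p(w) / q(w))
    return quad(integrand, 1e-6, 1 - 1e-6)[0]

# the type 1 agent's beliefs settle on the model with the smaller divergence
print(f'KL(h, f) = {KL(h, f):.4f}')
print(f'KL(h, g) = {KL(h, g):.4f}')
```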
Our type 2 agent has a different statistical model, one that is correctly specified.

He knows the parametric form of the statistical model but not the mixing parameter $\alpha$.

He knows that he does not know it.

But by using Bayes' law in conjunction with his statistical model and a history of data, he eventually draws more and more accurate inferences about $\alpha$.
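
Concretely, if $\pi_t(\alpha)$ denotes the type 2 agent's posterior density over $\alpha$ after observing $w_1, \ldots, w_t$, then Bayes' law updates it recursively via

$$
\pi_{t+1}(\alpha) = \frac{\bigl[\alpha f(w_{t+1}) + (1-\alpha) g(w_{t+1})\bigr] \pi_t(\alpha)}{\int_0^1 \bigl[\tilde{\alpha} f(w_{t+1}) + (1-\tilde{\alpha}) g(w_{t+1})\bigr] \pi_t(\tilde{\alpha}) \, d\tilde{\alpha}}
$$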

This little laboratory exhibits some important general principles that govern outcomes of Bayesian learning of misspecified models.

Thus, the following situation prevails quite generally in empirical work.

A scientist approaches the data with a manifold $S$ of statistical models $s(X \mid \theta)$, where $s$ is a probability distribution over a random vector $X$, $\theta \in \Theta$ is a vector of parameters, and $\Theta$ indexes the manifold of models.