Commit 21690fd

[no ci] user guide: add stub on additional topics
1 parent d461d60

5 files changed: +194 −0 lines changed
Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "id": "2eb7a0f2-6017-4a45-90e4-d149d10193b7",
+   "metadata": {},
+   "source": [
+    "# Approximators\n",
+    "\n",
+    "Neural approximators provide an approximation of a distribution or a value. To achieve this, they combine the components discussed in the previous chapters: simulated data, adapters, summary networks, and inference networks. Approximators are at the heart of BayesFlow, as they organize the different components and provide the `fit()` function used for training."
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.11.11"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
Lines changed: 62 additions & 0 deletions
@@ -0,0 +1,62 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "id": "71f2af04-8c30-44d7-a0c9-377ffd0d3c75",
+   "metadata": {},
+   "source": [
+    "# Diagnostics and Visualizations\n",
+    "\n",
+    "Many factors influence whether training succeeds and how well we can approximate a target. In this light, checking the results and diagnosing potential problems is an important part of the workflow.\n",
+    "\n",
+    "## Loss\n",
+    "\n",
+    "While the loss cannot show that training has succeeded, it can indicate that something has gone wrong. Warning signs are an unstable loss with large upward jumps, and a lack of convergence (the loss still changes significantly at the end of training). We recommend supplying a validation dataset during training to diagnose potential overfitting. You can plot the loss using the {py:func}`bayesflow.diagnostics.loss` function.\n",
+    "\n",
+    "## Posterior\n",
+    "\n",
+    "For inference on simulated data, we can plot the posterior alongside the ground-truth values. This serves as a diagnostic for whether the approximator has learned to approximate the true posteriors well enough. The {py:func}`~bayesflow.diagnostics.pairs_posterior` function displays a set of one- and two-dimensional marginal posterior distributions.\n",
+    "\n",
+    "## Recovery\n",
+    "\n",
+    "For inference on simulated data, we can visualize how well the ground-truth values are recovered over a larger number of datasets. {py:func}`~bayesflow.diagnostics.recovery` is a convenience function for this kind of plot.\n",
+    "\n",
+    "## Simulation-based Calibration (SBC)\n",
+    "\n",
+    "Simulation-based calibration provides an indication of the accuracy of the posterior approximations, without requiring access to the ground-truth posterior. In short, if the true values are simulated from the prior used during inference, we expect the rank of the true parameter value among the posterior samples to be uniformly distributed. There are multiple graphical methods that use this property for diagnostics. For example, we can use histograms together with an uncertainty band within which we would expect the histogram bars to lie if the rank statistics were indeed uniform. This plot is provided by the {py:func}`~bayesflow.diagnostics.calibration_histogram` function.\n",
+    "\n",
+    "SBC histograms have some drawbacks in how the confidence bands are computed, so we recommend another kind of plot that is based on the empirical cumulative distribution function (ECDF). For the ECDF, we can compute better confidence bands than for histograms, so the SBC ECDF plot is usually preferable. [This SBC interpretation guide by Martin Modrák](https://hyunjimoon.github.io/SBC/articles/rank_visualizations.html) gives further background information as well as practical examples of how to interpret the SBC plots. To display SBC ECDF plots, use the {py:func}`~bayesflow.diagnostics.calibration_ecdf` function.\n",
+    "\n",
+    "## Posterior Contraction & z-Score\n",
+    "\n",
+    "Once we are convinced that the posterior approximations are overall reasonable, we can check how much and what kind of information from the data is encoded in the posterior. Specifically, we might want to look at two scores:\n",
+    "\n",
+    "- The posterior contraction, which measures how much smaller the posterior variance is relative to the prior variance (higher values indicate more contraction relative to the prior).\n",
+    "- The posterior z-score, which indicates the standardized difference between the posterior mean and the true parameter value. Since the posterior z-score requires the true parameter values, it can only be computed in simulated-data settings.\n",
+    "\n",
+    "The {py:func}`~bayesflow.diagnostics.z_score_contraction` function provides a combined plot of both."
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.11.11"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
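The rank-uniformity property behind SBC, and the definitions of posterior contraction and z-score, can be illustrated with a small self-contained NumPy sketch (this is purely illustrative and not BayesFlow's implementation; all variable names here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
num_datasets, num_samples = 1000, 99

# A self-consistent toy: the "true" values are drawn from the prior, and the
# "posterior" samples are drawn from the same distribution (as if the data
# carried no information), so the setup is perfectly calibrated by construction.
theta_true = rng.normal(size=num_datasets)                   # prior draws
post_samples = rng.normal(size=(num_datasets, num_samples))  # posterior draws

# Rank statistic: number of posterior samples below the true value.
# Under calibration, each rank in {0, ..., num_samples} is equally likely.
ranks = (post_samples < theta_true[:, None]).sum(axis=1)
hist = np.bincount(ranks, minlength=num_samples + 1)
expected_per_bin = num_datasets / (num_samples + 1)

# Posterior contraction and z-score per dataset (the prior variance is 1 here).
prior_var = 1.0
contraction = 1.0 - post_samples.var(axis=1) / prior_var
z_score = (post_samples.mean(axis=1) - theta_true) / post_samples.std(axis=1)
```

Because the toy "posterior" equals the prior, the ranks come out roughly uniform, the contraction hovers around zero (no information gained), and the z-scores are approximately standard normal. BayesFlow's diagnostic plots visualize exactly these quantities for a trained approximator.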

docsrc/source/user_guide/index.md

Lines changed: 4 additions & 0 deletions
@@ -14,4 +14,8 @@ generative_models.ipynb
 data_processing.ipynb
 summary_networks.ipynb
 inference_networks.ipynb
+approximators.ipynb
+workflows.ipynb
+diagnostics.ipynb
+saving_loading.ipynb
 ```
Lines changed: 39 additions & 0 deletions
@@ -0,0 +1,39 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "id": "743f4071-009d-48e6-bc27-9785f37071c4",
+   "metadata": {},
+   "source": [
+    "# Saving & Loading Models\n",
+    "\n",
+    "Saving and loading of models takes place via our backend, [Keras 3](https://keras.io/). Objects that can be saved have a `save` method, which allows saving to a `.keras` file.\n",
+    "\n",
+    "The [`keras.saving.load_model`](https://keras.io/api/models/model_saving_apis/model_saving_and_loading/#load_model-function) function can be used to load the stored models. There is a lot more to say about this topic; for now, refer to the respective [Keras guide](https://keras.io/guides/serialization_and_saving/).\n",
+    "\n",
+    "**Important:** We try to avoid it, but changes to model architectures can lead to a situation where a model no longer loads correctly after updating BayesFlow. This can also happen silently (i.e., without an error or warning being shown), so we recommend conducting a quick sanity check on the model outputs after loading. In practice, pinning the BayesFlow version for each project can also help to avoid problems in this regard."
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.11.11"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
Lines changed: 54 additions & 0 deletions
@@ -0,0 +1,54 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "id": "2eb7a0f2-6017-4a45-90e4-d149d10193b7",
+   "metadata": {},
+   "source": [
+    "# Workflows\n",
+    "\n",
+    "Workflows are an abstraction on top of the approximator; they expose methods for training and inference in a more abstract, and therefore simplified, fashion.\n",
+    "\n",
+    "## BasicWorkflow\n",
+    "\n",
+    "For now, {py:class}`~bayesflow.workflows.BasicWorkflow` is the only available workflow. In its most basic form, you provide it with four components: a simulator, an adapter, an inference network, and a summary network (if you use one).\n",
+    "\n",
+    "```python\n",
+    "workflow = bf.BasicWorkflow(\n",
+    "    simulator=simulator,\n",
+    "    adapter=adapter,\n",
+    "    inference_network=inference_network,\n",
+    "    summary_network=summary_network,\n",
+    ")\n",
+    "```\n",
+    "\n",
+    "You can then use the {py:meth}`~bayesflow.workflows.BasicWorkflow.fit_online` method to run the training. After training, you can sample from the workflow using {py:meth}`~bayesflow.workflows.BasicWorkflow.sample`.\n",
+    "\n",
+    "Note that after training, the workflow's job is done, and you can simply use the trained approximator (accessible via `workflow.approximator`) for downstream tasks. If you want to save your trained model, you would use the `approximator` as well, i.e., store it using `workflow.approximator.save(filepath=filepath)`. You can then use {py:meth}`~bayesflow.approximators.ContinuousApproximator.sample` and related methods directly from the approximator.\n",
+    "\n",
+    "{py:class}`~bayesflow.workflows.BasicWorkflow` is highly configurable; refer to the API reference for details. For practical usage, take a look at the {doc}`../examples`."
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.11.11"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
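The design described in the workflows stub — a thin convenience layer that owns an approximator and delegates to it — can be sketched with a toy stand-in (the classes and method bodies below are purely illustrative and are not BayesFlow's actual code; only the names `fit_online`, `sample`, and the `approximator` attribute mirror the real API):

```python
# Purely illustrative stand-in for the workflow/approximator split.
class ToyApproximator:
    def __init__(self):
        self.trained = False

    def fit(self, num_batches):
        # In BayesFlow, the actual training loop lives in the approximator.
        self.trained = True
        return {"loss": [1.0 / (i + 1) for i in range(num_batches)]}

    def sample(self, num_samples):
        # Downstream tasks only need the approximator, not the workflow.
        return [0.0] * num_samples


class ToyWorkflow:
    """Thin convenience layer: wires up components, delegates to the approximator."""

    def __init__(self, approximator):
        self.approximator = approximator

    def fit_online(self, num_batches=10):
        return self.approximator.fit(num_batches)

    def sample(self, num_samples):
        return self.approximator.sample(num_samples)


workflow = ToyWorkflow(ToyApproximator())
history = workflow.fit_online(num_batches=5)

# After training, the approximator alone suffices for downstream tasks,
# which is why saving goes through workflow.approximator in BayesFlow.
samples = workflow.approximator.sample(num_samples=3)
```

This delegation is why, after training, you can discard the workflow and keep only `workflow.approximator` for saving and inference.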
