|
| 1 | +# # The ASIA network |
| 2 | + |
| 3 | +# The graph below corresponds to the *ASIA network*, a simple Bayesian network
| 4 | +# used extensively in educational settings. It was introduced by Lauritzen and
| 5 | +# Spiegelhalter in 1988 [^lauritzen1988local].
| 6 | + |
| 7 | +# ``` |
| 8 | +# ┌─┐ ┌─┐ |
| 9 | +# │A│ ┌──┤S├──┐ |
| 10 | +# └┬┘ │ └─┘ │ |
| 11 | +# │ │ │ |
| 12 | +# ▼ ▼ ▼ |
| 13 | +# ┌─┐ ┌─┐ ┌─┐ |
| 14 | +# │T│ │L│ │B│ |
| 15 | +# └┬┘ └┬┘ └┬┘ |
| 16 | +# │ ┌─┐ │ │ |
| 17 | +# └─►│E│◄─┘ │ |
| 18 | +# └┬┘ │ |
| 19 | +# ┌─┐ │ ┌─┐ │ |
| 20 | +# │X│◄─┘ │D│◄─────┘ |
| 21 | +# └─┘ └─┘ |
| 22 | +# ``` |
| 23 | + |
| 24 | +# The table below explains the meanings of each random variable used in the |
| 25 | +# ASIA network model. |
| 26 | + |
| 27 | +# | **Random variable** | **Meaning** | |
| 28 | +# | :---: | :--- | |
| 29 | +# | ``A`` | Recent trip to Asia | |
| 30 | +# | ``T`` | Patient has tuberculosis | |
| 31 | +# | ``S`` | Patient is a smoker | |
| 32 | +# | ``L`` | Patient has lung cancer | |
| 33 | +# | ``B`` | Patient has bronchitis | |
| 34 | +# | ``E`` | Patient has ``T`` and/or ``L`` |
| 35 | +# | ``X`` | Chest X-Ray is positive | |
| 36 | +# | ``D`` | Patient has dyspnoea | |
| 37 | + |
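| | +# In the standard ASIA model [^lauritzen1988local], dyspnoea ``D`` depends on both
| | +# ``E`` and ``B``, and the joint distribution factorizes over the graph as
| | +# ```math
| | +# P(A, S, T, L, B, E, X, D) = P(A)\,P(S)\,P(T \mid A)\,P(L \mid S)\,P(B \mid S)\,P(E \mid T, L)\,P(X \mid E)\,P(D \mid E, B).
| | +# ```
| | +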
| 38 | +# --- |
| 39 | + |
| 40 | +# We now demonstrate how to use the TensorInference.jl package for conducting a |
| 41 | +# variety of inference tasks on the ASIA network.
| 42 | + |
| 43 | +# Import the TensorInference package, which provides the functionality needed |
| 44 | +# for working with tensor networks and probabilistic graphical models. |
1 | 45 | using TensorInference |
2 | 46 |
|
3 | | -# Load the model that detailed in the README and `asia.uai`. |
| 47 | +# --- |
| 48 | + |
| 49 | +# Load the ASIA network model from the `asia.uai` file located in the examples directory. |
| 50 | +# See [Model file format (.uai)](@ref) for a description of the format of this file. |
4 | 51 | instance = read_instance(pkgdir(TensorInference, "examples", "asia", "asia.uai")) |
5 | | -tnet = TensorNetworkModel(instance) |
6 | 52 |
|
7 | | -# Get the probabilities (PR) |
8 | | -probability(tnet) |
| 53 | +# --- |
| 54 | + |
| 55 | +# Create a tensor network representation of the loaded model. |
| 56 | +tn = TensorNetworkModel(instance) |
| 57 | + |
| 58 | +# --- |
| 59 | + |
| 60 | +# Calculate the ``\log_{10}`` of the partition function. With no evidence set, the
| | +# distribution is normalized, so this value should be close to zero.
| 61 | +probability(tn) |> first |> log10 |
| 62 | + |
| 63 | +# --- |
9 | 64 |
|
10 | | -# Get the marginal probabilities (MAR) |
11 | | -marginals(tnet) .|> first |
| 65 | +# Calculate the marginal probabilities of each random variable in the model. |
| 66 | +marginals(tn) |
12 | 67 |
|
13 | | -# The corresponding variables are |
14 | | -get_vars(tnet) |
| 68 | +# --- |
15 | 69 |
|
16 | | -# Set the evidence variables "X-ray" (7) to be positive. |
| 70 | +# Retrieve the variables associated with the tensor network model. |
| 71 | +get_vars(tn) |
| 72 | + |
| 73 | +# --- |
| 74 | + |
| 75 | +# Set the evidence: assume that the "X-ray" result (variable 7) is positive.
17 | 76 | set_evidence!(instance, 7=>0) |
18 | 77 |
|
19 | | -# Since the evidence variable may change the contraction order, we re-compute the tensor network. |
20 | | -tnet = TensorNetworkModel(instance) |
| 78 | +# --- |
| 79 | + |
| 80 | +# Since setting evidence may affect the contraction order of the tensor network, recompute it.
| 81 | +tn = TensorNetworkModel(instance) |
| 82 | + |
| 83 | +# --- |
| 84 | + |
| 85 | +# Calculate the maximum log-probability among all configurations. |
| 86 | +maximum_logp(tn) |
| 87 | + |
| 88 | +# --- |
21 | 89 |
|
22 | | -# Get the maximum log-probabilities (MAP) |
23 | | -maximum_logp(tnet) |
| 90 | +# Generate 10 samples from the probability distribution represented by the model. |
| 91 | +sample(tn, 10) |
24 | 92 |
|
25 | | -# To sample from the probability model |
26 | | -sample(tnet, 10) |
| 93 | +# --- |
27 | 94 |
|
28 | | -# Get not only the maximum log-probability, but also the most probable conifguration |
29 | | -# In the most probable configuration, the most probable one is the patient smoke (3) and has lung cancer (4) |
30 | | -logp, cfg = most_probable_config(tnet) |
| 95 | +# Retrieve not only the maximum log-probability but also the most probable configuration. |
| 96 | +# In this configuration, the most likely outcomes are that the patient smokes (variable 3) and has lung cancer (variable 4). |
| 97 | +logp, cfg = most_probable_config(tn) |
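| | +
| | +# As a quick check of the statement above (assuming `cfg` is a plain vector indexed
| | +# by variable id, following the numbering used throughout this example), inspect the
| | +# states of the "smoker" (3) and "lung cancer" (4) variables.
| | +cfg[3], cfg[4]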
31 | 98 |
|
32 | | -# Get the maximum log-probabilities (MMAP) |
33 | | -# To get the probability of lung cancer, we need to marginalize out other variables. |
| 99 | +# --- |
| 100 | + |
| 101 | +# Compute the most probable values of a subset of variables (here, 4 and 7) while marginalizing over the others.
| 102 | +# This task is known as Marginal Maximum a Posteriori (MMAP) estimation.
34 | 103 | mmap = MMAPModel(instance; queryvars=[4,7]) |
35 | | -# We get the most probable configurations on [4, 7] |
| 104 | + |
| 105 | +# --- |
| 106 | + |
| 107 | +# Get the most probable configuration of variables 4 and 7.
36 | 108 | most_probable_config(mmap) |
37 | | -# The total probability of having lung cancer is roughly half. |
| 109 | + |
| 110 | +# --- |
| 111 | + |
| 112 | +# Compute the total log-probabilities of having and of not having lung cancer given the
| | +# positive X-ray result. The two values are comparable, suggesting that the probability of
| | +# lung cancer is roughly one half.
38 | 113 | log_probability(mmap, [1, 0]), log_probability(mmap, [0, 0]) |
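| | +
| | +# A minimal sketch of where "roughly one half" comes from, assuming `log_probability`
| | +# returns natural logarithms: normalize the two values to obtain the conditional
| | +# distribution over the two states of the lung-cancer variable.
| | +lp1, lp0 = log_probability(mmap, [1, 0]), log_probability(mmap, [0, 0])
| | +exp(lp1) / (exp(lp1) + exp(lp0))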
| 114 | + |
| 115 | +# [^lauritzen1988local]: |
| 116 | +# Steffen L Lauritzen and David J Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. *Journal of the Royal Statistical Society: Series B (Methodological)*, 50(2):157–194, 1988. |