---
jupytext:
  text_representation:
    extension: .md
    format_name: myst
    format_version: 0.13
    jupytext_version: 1.14.4
kernelspec:
  display_name: Python 3 (ipykernel)
  language: python
  name: python3
---

# A Lake Model of Employment

The importance of the Perron-Frobenius theorem stems from two facts.

First, most of the matrices we encounter in real-world applications are nonnegative.

Second, many important models are simply linear iterative models that
begin with an initial condition $x_0$ and then evolve recursively by the rule
$x_{t+1} = Ax_t$, or in short $x_t = A^t x_0$.
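Such an iteration is easy to compute directly; here is a minimal sketch using a small nonnegative matrix chosen purely for illustration:

```{code-cell} ipython3
import numpy as np

# A hypothetical nonnegative matrix and initial condition,
# chosen only for illustration
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
x_0 = np.array([1.0, 0.0])

# Iterate x_{t+1} = A x_t ten times
x_t = x_0
for t in range(10):
    x_t = A @ x_t

# The same point is reached in one shot via x_t = A^t x_0
x_t_direct = np.linalg.matrix_power(A, 10) @ x_0
```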

This theorem helps characterise the dominant eigenvalue $r(A)$, which
determines the behavior of this iterative process.

We now illustrate the power of the Perron-Frobenius theorem by showing how it
helps us to analyze a model of employment and unemployment flows in a large
population.

This model is sometimes called the **lake model** because there are two pools of workers:
1. those who are currently employed.
2. those who are currently unemployed but are seeking employment.

The "flows" between the two lakes are as follows:
1. workers exit the labor market at rate $d$.
2. new workers enter the labor market at rate $b$.
3. employed workers separate from their jobs at rate $\alpha$.
4. unemployed workers find jobs at rate $\lambda$.

Let $e_t$ and $u_t$ be the number of employed and unemployed workers at time $t$, respectively.

The total population of workers is $n_t = e_t + u_t$.

The numbers of unemployed and employed workers thus evolve according to:

```{math}
:label: lake_model
\begin{aligned}
    u_{t+1} &= (1-d)(1-\lambda)u_t + \alpha(1-d)e_t + bn_t = ((1-d)(1-\lambda) + b)u_t + (\alpha(1-d) + b)e_t \\
    e_{t+1} &= (1-d)\lambda u_t + (1 - \alpha)(1-d)e_t
\end{aligned}
```

We can arrange {eq}`lake_model` as a linear system of equations in matrix form $x_{t+1} = Ax_t$ such that:

$$
x_{t+1} =
\begin{bmatrix}
    u_{t+1} \\
    e_{t+1}
\end{bmatrix}
, \; A =
\begin{bmatrix}
    (1-d)(1-\lambda) + b & \alpha(1-d) + b \\
    (1-d)\lambda & (1 - \alpha)(1-d)
\end{bmatrix}
\; \text{and} \;
x_t =
\begin{bmatrix}
    u_t \\
    e_t
\end{bmatrix}
$$
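Since $n_t = u_t + e_t$, the flow-accounting and matrix forms of the law of motion for $u_{t+1}$ should agree; a quick numerical sketch (with illustrative, uncalibrated parameter values) confirms this:

```{code-cell} ipython3
# Illustrative parameter values, not a calibration
α, λ, d, b = 0.01, 0.1, 0.02, 0.025
u_t, e_t = 0.3, 0.7
n_t = u_t + e_t

# Flow-accounting form
u_next_flows = (1-d)*(1-λ)*u_t + α*(1-d)*e_t + b*n_t

# Matrix form: first row of A applied to (u_t, e_t)
u_next_matrix = ((1-d)*(1-λ) + b)*u_t + (α*(1-d) + b)*e_t

print(u_next_flows, u_next_matrix)
```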

Suppose at $t=0$ we have $x_0 = \begin{bmatrix} u_0 \\ e_0 \end{bmatrix}$.

Then, $x_1=Ax_0$, $x_2=Ax_1=A^2x_0$, and thus $x_t = A^tx_0$.

Thus the long-run outcomes of this system depend on the initial condition $x_0$ and the matrix $A$.

$A$ is a nonnegative and irreducible matrix, so we can use the Perron-Frobenius theorem to obtain some useful results about $A$.

Note that $\text{colsum}_j(A) = 1 + b - d$ for $j=1,2$, and by {eq}`PF_bounds` we can thus conclude that the dominant eigenvalue
is $r(A) = 1 + b - d$.
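This claim is easy to check numerically; the following sketch (using illustrative parameter values) computes the spectral radius of $A$ and compares it with $1 + b - d$:

```{code-cell} ipython3
import numpy as np

# Illustrative parameter values
α, λ, d, b = 0.01, 0.1, 0.02, 0.025
A = np.array([[(1-d)*(1-λ) + b, α*(1-d) + b],
              [(1-d)*λ,         (1-α)*(1-d)]])

# Dominant eigenvalue = spectral radius of A
r = max(abs(np.linalg.eigvals(A)))
print(r, 1 + b - d)
```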

If we consider $g = b - d$ as the overall growth rate of the total labor force, then $r(A) = 1 + g$.

We can thus find a unique positive vector $\bar{x} = \begin{bmatrix} \bar{u} \\ \bar{e} \end{bmatrix}$
such that $A\bar{x} = r(A)\bar{x}$ and $\begin{bmatrix} 1 & 1 \end{bmatrix} \bar{x} = 1$.

Since $\bar{x}$ is the eigenvector corresponding to the dominant eigenvalue $r(A)$, we also call it the dominant eigenvector.
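As a sketch, the dominant eigenvector can be recovered numerically with `np.linalg.eig` (again with illustrative parameter values) and normalized so that its entries sum to one:

```{code-cell} ipython3
import numpy as np

# Illustrative parameter values
α, λ, d, b = 0.01, 0.1, 0.02, 0.025
A = np.array([[(1-d)*(1-λ) + b, α*(1-d) + b],
              [(1-d)*λ,         (1-α)*(1-d)]])

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)      # index of the dominant eigenvalue
x̄ = eigvecs[:, i].real
x̄ = x̄ / x̄.sum()                  # normalize so the entries sum to one

print(x̄)                         # (ū, ē)
```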

This eigenvector plays an important role in determining long-run outcomes, as illustrated below.

```{code-cell} ipython3
import numpy as np
import matplotlib.pyplot as plt

def lake_model(α, λ, d, b):
    g = b - d
    A = np.array([[(1-d)*(1-λ) + b, α*(1-d) + b],
                  [(1-d)*λ,         (1-α)*(1-d)]])

    # Steady-state rates: the dominant eigenvector, normalized to sum to one
    ū = (1 + g - (1 - d) * (1 - α)) / (1 + g - (1 - d) * (1 - α) + (1 - d) * λ)
    ē = 1 - ū
    x̄ = np.array([ū, ē])

    # Simulate two trajectories from distinct initial conditions
    ts_length = 1000
    x_ts_1 = np.zeros((2, ts_length))
    x_ts_2 = np.zeros((2, ts_length))
    x_0_1 = (5.0, 0.1)
    x_0_2 = (0.1, 4.0)
    x_ts_1[:, 0] = x_0_1
    x_ts_2[:, 0] = x_0_2

    for t in range(1, ts_length):
        x_ts_1[:, t] = A @ x_ts_1[:, t-1]
        x_ts_2[:, t] = A @ x_ts_2[:, t-1]

    fig, ax = plt.subplots()

    # Set the axes through the origin
    for spine in ["left", "bottom"]:
        ax.spines[spine].set_position("zero")
    for spine in ["right", "top"]:
        ax.spines[spine].set_color("none")

    ax.set_xlim(-2, 6)
    ax.set_ylim(-2, 6)
    ax.set_xlabel("unemployed workforce")
    ax.set_ylabel("employed workforce")
    ax.set_xticks((0, 6))
    ax.set_yticks((0, 6))

    # Ray through the dominant eigenvector
    s = 10
    ax.plot([0, s * ū], [0, s * ē], "k--", lw=1)
    ax.scatter(x_ts_1[0, :], x_ts_1[1, :], s=4, c="blue")
    ax.scatter(x_ts_2[0, :], x_ts_2[1, :], s=4, c="green")

    ax.plot([ū], [ē], "ko", ms=4, alpha=0.6)
    ax.annotate(r'$\bar{x}$',
                xy=(ū, ē),
                xycoords="data",
                xytext=(20, -20),
                textcoords="offset points",
                arrowprops=dict(arrowstyle="->"))

    # Mark and label the two initial conditions
    for x_0 in (x_0_1, x_0_2):
        x, y = x_0
        ax.plot([x], [y], "ko", ms=2, alpha=0.6)
        ax.annotate(f'$x_0 = ({x},{y})$',
                    xy=(x, y),
                    xycoords="data",
                    xytext=(0, 20),
                    textcoords="offset points",
                    arrowprops=dict(arrowstyle="->"))

    plt.show()
```

```{code-cell} ipython3
lake_model(α=0.01, λ=0.1, d=0.02, b=0.025)
```

If $\bar{x}$ is an eigenvector corresponding to the eigenvalue $r(A)$, then all the vectors in the set
$D := \{ x \in \mathbb{R}^2 : x = \alpha \bar{x} \; \text{for some} \; \alpha > 0 \}$ are also eigenvectors corresponding
to $r(A)$.

This set is represented by a dashed line in the above figure.

We can observe that for two distinct initial conditions $x_0$ the sequences of iterates $(A^t x_0)_{t \geq 0}$ move towards $D$ over time.

This suggests that all such sequences share strong similarities in the long run, determined by the dominant eigenvector $\bar{x}$.

In the example illustrated above we considered parameters such that the overall growth rate of the labor force is $g>0$.

Suppose now we are faced with a situation where $g<0$, i.e., there is negative growth in the labor force.

This means that $b-d<0$, i.e., workers exit the market faster than they enter.

What would the behavior of the iterative sequence $x_{t+1} = Ax_t$ be now?

This is visualised below.

```{code-cell} ipython3
lake_model(α=0.01, λ=0.1, d=0.025, b=0.02)
```

Thus, while the sequence of iterates still moves towards the ray spanned by the dominant eigenvector $\bar{x}$, in this case
the iterates converge to the origin.

This is a result of the fact that $r(A)<1$, which ensures that the iterative sequence $(A^t x_0)_{t \geq 0}$ will converge
to some point, in this case to $(0,0)$.
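We can confirm that $r(A) < 1$ for these parameter values, and watch an iterate shrink, with a short numerical check:

```{code-cell} ipython3
import numpy as np

α, λ, d, b = 0.01, 0.1, 0.025, 0.02   # now b - d < 0
A = np.array([[(1-d)*(1-λ) + b, α*(1-d) + b],
              [(1-d)*λ,         (1-α)*(1-d)]])

# Spectral radius: equals 1 + b - d = 0.995 < 1
r = max(abs(np.linalg.eigvals(A)))
print(r)

# Iterates shrink towards the origin
x_0 = np.array([5.0, 0.1])
x_1000 = np.linalg.matrix_power(A, 1000) @ x_0
print(x_1000)
```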

This leads us to the next result.