The next two functions implement the law of motion [](motion_law) and the true fixed point $k^*$.
```{code-cell} python3
def g(k, params):
    # Law of motion: k_{t+1} = s A k_t^α + (1 - δ) k_t
    A, s, α, δ = params
    return A * s * k**α + (1 - δ) * k

def exact_fixed_point(params):
    # True fixed point: k* = (s A / δ)**(1 / (1 - α))
    A, s, α, δ = params
    return ((s * A) / δ)**(1 / (1 - α))
```

We will also need the derivative of the law of motion:

```{code-cell} python3
def Dg(k, params):
    # Derivative of the law of motion with respect to k
    A, s, α, δ = params
    return α * A * s * k**(α-1) + (1 - δ)
```
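As a quick sanity check of these definitions, we can verify that $g$ leaves the exact fixed point unchanged (the parameter values below are illustrative assumptions rather than the lecture's baseline, which is set elsewhere):

```{code-cell} python3
params = 2.0, 0.3, 0.3, 0.4         # assumed values for (A, s, α, δ), for illustration only
k_star = exact_fixed_point(params)
k_star, g(k_star, params) - k_star  # the second entry should be (essentially) zero
```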
Here's a function $q$ representing [](newtons_method).
```{code-cell} python3
def q(k, params):
    # One step of Newton's method for the fixed point of g:
    # q(k) = (g(k) - k g'(k)) / (1 - g'(k))
    return (g(k, params) - k * Dg(k, params)) / (1 - Dg(k, params))
```
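The lecture then compares $q$ with plain successive approximation (the `plot_trajectories` helper used for the comparison is not shown here). A minimal sketch of successive approximation, i.e. simply iterating the map $g$, might look like this:

```{code-cell} python3
def successive_approx(g, k_0, params, n=25):
    # Iterate k_{t+1} = g(k_t) and return the whole trajectory
    k, path = k_0, [k_0]
    for _ in range(n):
        k = g(k, params)
        path.append(k)
    return path
```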
We can see that Newton's method converges faster than successive approximation.
The above fixed-point calculation can be recast as a root-finding problem: computing a fixed point of $g$ amounts to finding $x^*$ such that $g(x^*) - x^* = 0$.
For one-dimensional root-finding problems, Newton's method iterates on:
$$
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}
$$
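Only fragments of the corresponding code survive in this excerpt (the update rule `iteration = lambda x, params: x - g(x, params)/Dg(x, params)` and the line `error = tol + 1`). A minimal sketch of a one-dimensional Newton solver built around them, with the function name, remaining arguments, and loop structure as assumptions, is:

```{code-cell} python3
def newton_1d(g, Dg, x_0, params, tol=1e-7, max_iter=1_000):
    # Newton iteration for a zero of g: x_{n+1} = x_n - g(x_n) / g'(x_n)
    iteration = lambda x, params: x - g(x, params)/Dg(x, params)
    error = tol + 1
    x, n = x_0, 0
    while error > tol and n < max_iter:
        y = iteration(x, params)
        error = abs(y - x)
        x, n = y, n + 1
    return x
```

To compute the Solow fixed point with it, one would pass the root-finding target $g(k) - k$ and its derivative, e.g. `newton_1d(lambda k, p: g(k, p) - k, lambda k, p: Dg(k, p) - 1, k_0, params)` for some starting value `k_0`.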
The multi-dimensional variant will be left as an [exercise](newton_ex1).
Looking at the formula for Newton's method, it is easy to see that we can implement it in higher dimensions by replacing the derivative with the Jacobian.

This naturally leads us to use Newton's method for multi-dimensional problems, where we will use the powerful auto-differentiation functionality in JAX to handle the intricate calculations.
## Multivariate Newton’s Method
We can compute the equilibrium price vector using the multivariate Newton iteration

$$
p_{n+1} = p_n - J_e(p_n)^{-1} e(p_n),
$$

starting from some initial guess of the price vector $p_0$. (Here $J_e(p_n)$ is the Jacobian of $e$ evaluated at $p_n$.)
Instead of coding the Jacobian by hand, we use the `jax.jacobian()` function to auto-differentiate and calculate it for us.
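As a quick illustration (the function below is made up for this example rather than taken from the lecture), `jax.jacobian(f)` returns a new function that evaluates the Jacobian of `f` at a given point:

```{code-cell} python3
import jax
import jax.numpy as jnp

f = lambda x: jnp.array([x[0]**2 + x[1], x[0] * x[1]])  # a toy two-dimensional map
jax.jacobian(f)(jnp.array([1.0, 2.0]))                   # its 2x2 Jacobian at (1, 2)
```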
With only slight modification, we can generalize [our previous attempt](first_newton_attempt) to multi-dimensional problems
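The body of the lecture's `newton` routine is not shown in full here; a minimal sketch with the same signature, whose body is an assumption (a Newton step computed with `jax.jacobian` and a linear solve), is:

```{code-cell} python3
def newton(f, x_0, tol=1e-5, maxIter=10):
    f_jac = jax.jacobian(f)
    x = x_0
    for _ in range(maxIter):
        # Newton step: x <- x - J_f(x)^{-1} f(x), computed via a linear solve
        y = x - jnp.linalg.solve(f_jac(x), f(x))
        error = jnp.linalg.norm(y - x)
        x = y
        if error < tol:
            break
    return x
```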
```{code-cell} python3
p = newton(lambda p: e(p, A, b, c), init_p).block_until_ready()
np.max(np.abs(e(p, A, b, c)))
```
The result is very accurate.
With the larger overhead, it is not faster than the optimized `scipy` routine.
However, things will change when we move to higher dimensional problems.
```{code-cell} python3
b = jnp.ones(dim)
c = jnp.ones(dim)
```
Here's the same demand function using `jax.numpy`:
```{code-cell} python3
def e(p, A, b, c):
    # Excess demand: demand exp(-A p) + c minus supply b * sqrt(p)
    return jnp.exp(- A @ p) + c - b * jnp.sqrt(p)
```

Here's our initial condition

```{code-cell} python3
init_p = jnp.ones(dim)
```
661
661
662
-
Newton's method reaches a relatively small error within a minute
662
+
Newton's method reaches a relatively small error within 10 seconds
663
663
664
664
```{code-cell} python3
%%time
p = newton(lambda p: e(p, A, b, c), init_p).block_until_ready()
```
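The `scipy` comparison that produces the `solution` object below is not shown in this excerpt; it might be set up roughly as follows (the solver choice and exact call are assumptions):

```{code-cell} python3
from scipy.optimize import root

%time solution = root(lambda p: e(p, A, b, c), init_p, method='hybr')
```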
```{code-cell} python3
p = solution.x
np.max(np.abs(e(p, A, b, c)))
```
The result is also less accurate.
## Exercises
- The computation of the fixed point can be seen as computing $k^*$ such that $f(k^*) - k^* = 0$.
- If you are unsure about your solution, you can start with the solved example:
```{math}
A = \begin{pmatrix}
2 & 0 & 0 \\
0 & 2 & 0 \\
0 & 0 & 2
\end{pmatrix}
```

with $s = 0.3$, $α = 0.3$, and $δ = 0.4$ and starting value:
```{math}
k_0 = (1, 1, 1)
```
The result should converge to the [analytical solution](solved_k).
````
```{exercise-end}
```
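The solution below uses a multivariate version of the Solow law of motion, `multivariate_solow`, whose definition and parameter setup fall outside this excerpt. A minimal sketch, assuming `A`, `s`, `α`, `δ` are defined in that omitted setup, is:

```{code-cell} python3
def multivariate_solow(k, A=A, s=s, α=α, δ=δ):
    # Multivariate law of motion: k_{t+1} = s A k_t^α + (1 - δ) k_t (elementwise power in k)
    return s * A @ k**α + (1 - δ) * k
```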
```{code-cell} python3
attempt = 1
for init in initLs:
    print(f'Attempt {attempt}: starting value {init}')
    # Find the fixed point as a root of f(k) = multivariate_solow(k) - k
    k = newton(lambda k: multivariate_solow(k) - k, init).block_until_ready()
    print(k)
    attempt += 1
```
We find that the results are invariant to the starting values, since this problem is well behaved.

But the number of iterations it takes to converge depends on the starting values.
Substitute it back into the formula to check our last result
```{code-cell} python3
multivariate_solow(k) - k
```
Note the error is very small.
The result is very close to the ground truth but still slightly different.
We can increase the precision of the floating point numbers and restrict the tolerance to obtain a more accurate approximation (see the detailed discussion in the [lecture on JAX](https://python-programming.quantecon.org/jax_intro.html#differences)).
```{code-cell} python3
from jax.config import config

config.update("jax_enable_x64", True)

init = init.astype('float64')

k = newton(lambda k: multivariate_solow(k) - k,
           init,
           tol=1e-7).block_until_ready()
```
We can see that it steps towards a more accurate solution.
```{solution-end}
```
$$
\begin{aligned}
p1_{0} &= (5, 5, 5) \\
p2_{0} &= (1, 1, 1) \\
p3_{0} &= (4.5, 0.1, 4)
\end{aligned}
$$
Set the tolerance to $0.0$ for more accurate output.
```{hint}
:class: dropdown

Similar to [exercise 1](newton_ex1), enabling `float64` for JAX can improve the precision of our results.
```
```{code-cell} python3
c = jnp.array([1.0, 1.0, 1.0])

initLs = [jnp.repeat(5.0, 3),
          jnp.ones(3),
          jnp.array([4.5, 0.1, 4.0])]
```
Let's run through each initial guess and check the output
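The loop itself falls outside this excerpt; a minimal sketch of running Newton's method from each starting value (the printing, the call details, and the use of `A` and `b` from the omitted setup are assumptions) is:

```{code-cell} python3
attempt = 1
for init in initLs:
    print(f'Attempt {attempt}: starting from {init}')
    # Solve e(p) = 0 from this initial guess, with the tolerance set to 0.0
    p = newton(lambda p: e(p, A, b, c), init, tol=0.0).block_until_ready()
    print(p)
    attempt += 1
```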
We find that Newton's method may fail for some starting values.
Sometimes it may take a few initial guesses to achieve convergence.
Substitute the result back into the formula to check it