lectures/svd_intro.md

This is the case that we are interested in here.

Thus, we want to fit equation {eq}`eq:VARfirstorder` in a situation in which we have a number $n$ of observations that is small relative to the number $m$ of variables that appear in the vector $X_t$.

To reiterate and supply more detail about how we can efficiently calculate the pseudo-inverse $X^+$, as our estimator $\hat A$ of $A$ we form an $m \times m$ matrix that solves the least-squares best-fit problem

$$
\hat A = \min_{\check A} || X' - \check A X ||_F
$$ (eq:ALSeqn)

where $|| \cdot ||_F$ denotes the Frobenius norm of a matrix.

The solution of the problem on the right side of equation {eq}`eq:ALSeqn` is

$$
\hat A = X' X^{+}
$$ (eq:hatAform)

where the (possibly huge) $ \tilde n \times m $ matrix $ X^{+} = (X^T X)^{-1} X^T $ is again the pseudo-inverse of $ X $.

For the situations that we are interested in, $ X^T X $ can be close to singular, a situation that can lead some numerical algorithms to be error-prone.

To confront that situation, we'll use efficient algorithms for computing and for constructing reduced rank approximations of $\hat A$ in formula {eq}`eq:hatAversion0`.

The $ i $th row of $ \hat A $ is a $ 1 \times m $ vector of pseudo-regression coefficients of $ X_{i,t+1} $ on $ X_{j,t}, j = 1, \ldots, m $.
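
To make formula {eq}`eq:hatAform` concrete, here is a minimal NumPy sketch; the data are synthetic and the names `X` and `X_prime` are illustrative, not taken from the lecture's code:

```python
import numpy as np

# Illustrative setting: many variables m, few time observations n_tilde
m, n_tilde = 50, 10
rng = np.random.default_rng(0)

# Columns of X are X_1, ..., X_{n_tilde};
# columns of X_prime are X_2, ..., X_{n_tilde + 1}
X = rng.standard_normal((m, n_tilde))
X_prime = rng.standard_normal((m, n_tilde))

# Least-squares estimator A_hat = X' X^+ via NumPy's pseudo-inverse
A_hat = X_prime @ np.linalg.pinv(X)
print(A_hat.shape)  # (m, m)

# Forming (X^T X)^{-1} X^T explicitly squares the condition number of X,
# one reason that route is error-prone when X^T X is close to singular
print(np.isclose(np.linalg.cond(X.T @ X), np.linalg.cond(X)**2))  # True
```

Note that `np.linalg.pinv` does not form $(X^T X)^{-1} X^T$ directly; it works through a singular value decomposition, which is exactly the route described next.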

Next, form a singular value decomposition of the $ m \times \tilde n $ matrix $ X $:

$$
X = U \Sigma V^T
$$ (eq:SVDDMD)

where $ U $ is $ m \times p $, $ \Sigma $ is a $ p \times p $ diagonal matrix, and $ V^T $ is a $ p \times \tilde n $ matrix.

Here $ p $ is the rank of $ X $, where necessarily $ p \leq \tilde n $.
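
In NumPy, a reduced SVD of the data matrix can be computed as follows (a sketch with synthetic data; `full_matrices=False` returns factors with $k = \min(m, \tilde n)$ columns, which equals the rank $p$ whenever $X$ has full column rank):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_tilde = 50, 10
X = rng.standard_normal((m, n_tilde))

# Reduced SVD: U is m x k, sigma holds k singular values, Vt is k x n_tilde
U, sigma, Vt = np.linalg.svd(X, full_matrices=False)
print(U.shape, sigma.shape, Vt.shape)  # (50, 10) (10,) (10, 10)
print(np.linalg.matrix_rank(X))        # p = 10 for this random draw
```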

We can use the singular value decomposition {eq}`eq:SVDDMD` efficiently to construct the pseudo-inverse $X^+$ by exploiting the implication of the following string of equalities:

$$
\begin{aligned}
X^{+} & = (X^T X)^{-1} X^T \\
& = (V \Sigma U^T U \Sigma V^T)^{-1} V \Sigma U^T \\
& = (V \Sigma \Sigma V^T)^{-1} V \Sigma U^T \\
& = V \Sigma^{-1} \Sigma^{-1} V^T V \Sigma U^T \\
& = V \Sigma^{-1} U^T
\end{aligned}
$$ (eq:efficientpseudoinverse)

(We described and illustrated a **reduced** singular value decomposition above, and compared it with a **full** singular value decomposition.)
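
A quick numerical check of equality {eq}`eq:efficientpseudoinverse`, continuing the synthetic setup above (and assuming $X$ has full column rank, so every singular value is nonzero):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_tilde = 50, 10
X = rng.standard_normal((m, n_tilde))

# Reduced SVD of X
U, sigma, Vt = np.linalg.svd(X, full_matrices=False)

# X^+ = V Sigma^{-1} U^T; only the nonzero singular values are inverted
X_plus = Vt.T @ np.diag(1.0 / sigma) @ U.T

# Agrees with NumPy's built-in pseudo-inverse up to floating-point error
print(np.allclose(X_plus, np.linalg.pinv(X)))  # True
```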

Thus, we shall construct a pseudo-inverse $ X^+ $ of $ X $ by using a singular value decomposition of $X$ in equation {eq}`eq:SVDDMD` to compute