This is the **tall and skinny** case associated with **Dynamic Mode Decomposition**.

You can read about Dynamic Mode Decomposition here {cite}`DMD_book`.

We start with an $m \times n$ matrix of data $\tilde X$ of the form

$$
\tilde X = \begin{bmatrix} X_1 \mid X_2 \mid \cdots \mid X_n \end{bmatrix}
$$

where for $t = 1, \ldots, n$, the $m \times 1$ vector $X_t$ is

$$
X_t = \begin{bmatrix} X_{1,t} & X_{2,t} & \cdots & X_{m,t} \end{bmatrix}^T
$$

where $T$ denotes transposition and $X_{i,t}$ is an observation on variable $i$ at time $t$.

From $\tilde X$, we form two matrices

$$
X = \begin{bmatrix} X_1 \mid X_2 \mid \cdots \mid X_{n-1} \end{bmatrix}
$$

and

$$
X' = \begin{bmatrix} X_2 \mid X_3 \mid \cdots \mid X_n \end{bmatrix}
$$

(Note that here $'$ does not denote matrix transposition but instead is part of the name of the matrix $X'$.)

In forming $X$ and $X'$, we have in each case dropped a column from $\tilde X$.

Evidently, $X$ and $X'$ are both $m \times \tilde n$ matrices, where $\tilde n = n - 1$.
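
To fix ideas, here is a minimal NumPy sketch that forms $\tilde X$ from simulated data and then builds $X$ and $X'$ by dropping the last and the first column, respectively. The simulated data and the array names `X_tilde`, `X`, and `Xp` are illustrative assumptions, not part of the lecture.

```python
import numpy as np

m, n = 10, 50                            # number of variables, number of dates
rng = np.random.default_rng(0)
X_tilde = rng.standard_normal((m, n))    # columns are X_1, ..., X_n

X = X_tilde[:, :-1]                      # columns X_1, ..., X_{n-1}
Xp = X_tilde[:, 1:]                      # columns X_2, ..., X_n, i.e., the matrix X'

assert X.shape == Xp.shape == (m, n - 1)
```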

We now form a system consisting of $m$ least squares regressions of **everything** on one lagged value of **everything**:

$$
X' = A X + \epsilon
$$

where

$$
A = X' X^{+}
$$

and where the (huge) $\tilde n \times m$ matrix $X^{+}$ is the Moore-Penrose generalized inverse of $X$ that we could compute
The idea behind **dynamic mode decomposition** is to construct an approximation that

* retains only the largest $\tilde r \ll r$ singular values of $X$ and the associated columns of $U$ and $V$

* constructs an $m \times \tilde r$ matrix $\Phi$ that captures the effects of the $\tilde r$ dynamic modes on all $m$ variables

* uses $\Phi$ and powers of the $\tilde r$ dominant eigenvalues of $A$ to forecast *future* $X_t$'s

The magic of **dynamic mode decomposition** is that we accomplish this without ever computing the regression coefficients $A = X' X^{+}$.
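
For contrast, here is a sketch of the direct computation that DMD sidesteps, reusing the arrays `X` and `Xp` from the sketch above; forming this $m \times m$ matrix explicitly is feasible only when $m$ is moderate.

```python
# Direct least-squares estimate A = X' X^{+} via the Moore-Penrose generalized inverse.
# DMD avoids ever forming this (possibly huge) m x m matrix.
A = Xp @ np.linalg.pinv(X)
```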

To construct a DMD, we deploy the following steps (a code sketch collecting the steps appears after this list):

* Compute the singular value decomposition

  $$
  X = U \Sigma V^T
  $$

  $$
  U^T X' V \Sigma^{-1} = U^T A U \equiv \tilde A
  $$ (eq:tildeAform)

* At this point, we deploy a reduced-dimension version of formula {eq}`eq:tildeAform` by using only the columns of $U$ that correspond to the $\tilde r$ largest singular values.

  Tu et al. {cite}`tu_Rowley` verify that eigenvalues and eigenvectors of $\tilde A$ equal the leading eigenvalues and associated eigenvectors of $A$.

* Compute an eigendecomposition of $\tilde A$

  $$
  \tilde A W = W \Lambda
  $$

  where $\Lambda$ is a $\tilde r \times \tilde r$ diagonal matrix of eigenvalues and the columns of $W$ are corresponding eigenvectors of $\tilde A$. Both $\Lambda$ and $W$ are $\tilde r \times \tilde r$ matrices.

* Construct the $m \times \tilde r$ matrix

  $$
  \Phi = X' V \Sigma^{-1} W
  $$

* Form the vector

  $$
  b = \Phi^{+} X_1
  $$

  where evidently $b$ is an $\tilde r \times 1$ vector.

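Gathering the steps above, here is a compact NumPy sketch of the reduced-rank construction of $\tilde A$, its eigendecomposition, and the matrices $\Phi$ and $b$. It reuses `X`, `Xp`, and `X_tilde` from the earlier sketches; the truncation rank `r_tilde` is an illustrative choice, and the formulas for $\Phi$ and $b$ follow the recipe described above.

```python
r_tilde = 3                                   # number of retained modes (illustrative)

# Singular value decomposition of X
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# Keep only the r_tilde largest singular values and associated singular vectors
U_r = U[:, :r_tilde]                          # m x r_tilde
S_r = S[:r_tilde]                             # leading singular values
V_r = Vt[:r_tilde, :].T                       # (n-1) x r_tilde

# Reduced-dimension version of A_tilde = U^T X' V Sigma^{-1}
A_tilde = (U_r.T @ Xp @ V_r) / S_r            # dividing columns by S_r applies Sigma^{-1}

# Eigendecomposition A_tilde W = W Lambda
Lam, W = np.linalg.eig(A_tilde)

# m x r_tilde matrix Phi and the r_tilde x 1 vector b
Phi = Xp @ V_r @ np.diag(1 / S_r) @ W
b = np.linalg.pinv(Phi) @ X_tilde[:, [0]]     # b = Phi^+ X_1
```
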
With $\Lambda, \Phi, \Phi^{+}$ in hand, our least-squares dynamics fitted to the $\tilde r$ dominant modes are governed by

$$
X_{t+1} = \Phi \Lambda \Phi^{+} X_t
$$

Conditional on $X_t$, we construct forecasts $\check X_{t+j}$ of $X_{t+j}$, $j = 1, 2, \ldots$, from

$$
\check X_{t+j} = \Phi \Lambda^j \Phi^{+} X_t
$$
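
A sketch of this forecasting step, continuing the code above and conditioning on the last observed column $X_n$; the horizon `j` is an illustrative choice.

```python
j = 5                                          # forecast horizon (illustrative)
X_t = X_tilde[:, [-1]]                         # condition on the last observation X_n
X_check = Phi @ np.diag(Lam**j) @ np.linalg.pinv(Phi) @ X_t
X_check = X_check.real                         # imaginary parts should be negligible for real data
```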