diff --git a/docs/make.jl b/docs/make.jl
index f2a212b9..502dd6bc 100644
--- a/docs/make.jl
+++ b/docs/make.jl
@@ -22,6 +22,7 @@ makedocs(
     "Manual" => [
         "Introduction" => "manual/introduction.md",
         "Compression" => "manual/compression.md",
+        "Low-Rank Approximation" => "manual/low_rank_approximators.md",
     ],
     "API Reference" => [
         "Compressors" => [
diff --git a/docs/src/manual/images/projection.png b/docs/src/manual/images/projection.png
new file mode 100644
index 00000000..d4fd8590
Binary files /dev/null and b/docs/src/manual/images/projection.png differ
diff --git a/docs/src/manual/low_rank_approximators.md b/docs/src/manual/low_rank_approximators.md
new file mode 100644
index 00000000..bac2ac07
--- /dev/null
+++ b/docs/src/manual/low_rank_approximators.md
@@ -0,0 +1,240 @@
+# Low-Rank Approximations of Matrices
+Large matrices often contain a great deal of redundant information, which means they can
+often be represented with far fewer vectors than they contain. Representing a matrix with
+a small number of vectors is known as low-rank approximation. Low-rank approximations of
+a matrix ``A \in \mathbb{R}^{m \times n}`` generally take one of two forms: the
+two-matrix form,
+``
+A \approx MN,
+``
+where ``M \in \mathbb{R}^{m \times r}`` and ``N \in \mathbb{R}^{r \times n}``,
+or the three-matrix form,
+``
+A \approx MBN,
+``
+where ``M \in \mathbb{R}^{m \times r}``, ``N \in \mathbb{R}^{s \times n}``, and
+``B \in \mathbb{R}^{r \times s}``.
+
+Once one of these representations has been obtained, it can be used to speed up
+matrix multiplication, clustering, and approximate eigenvalue decompositions
+[halko2011finding, eckart1936approximation, udell2019why, park2025curing](@cite).
+
+Low-rank approximations come in two forms: the orthogonal projection form, where points
+are projected perpendicularly onto a subspace, and the oblique form, where points are
+projected along another subspace (see the figure below for a visualization).
+```@raw html
+<img src="images/projection.png" alt="Orthogonal and oblique projections"/>
+```
+
+We can consider low-rank approximations of both symmetric and general matrices. For
+either class, the RandomizedSVD can be used as the orthogonal projection method
+[halko2011finding](@cite).
+
+For oblique methods, the distinction between symmetric and non-symmetric matrices
+matters more. For symmetric matrices, the go-to approximation is the Nystrom
+approximation. For non-symmetric matrices, we can use a generalization of Nystrom known
+as Generalized Nystrom, or we can use interpolative approaches, which select subsets of
+the rows and/or columns of the matrix. If an interpolative decomposition selects only
+columns or only rows, it is known as a one-sided ID; if it selects both columns and
+rows, it is known as a CUR decomposition.
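+
+To make the oblique form concrete, the short sketch below builds a Nystrom approximation
+of a symmetric positive semi-definite matrix using only the `LinearAlgebra` standard
+library. This is an illustration of the formula under assumed names, not
+RLinearAlgebra's implementation.
+```julia
+using LinearAlgebra
+
+# A symmetric positive semi-definite matrix with a rapidly decaying spectrum
+n = 500
+U = Matrix(qr(randn(n, n)).Q)
+A = U * Diagonal([2.0^(-i) for i in 1:n]) * U'
+
+# Nystrom approximation: A ≈ (A*S) * pinv(S'*A*S) * (A*S)' with a Gaussian sketch S
+k = 20
+S = randn(n, k)
+AS = A * S
+A_nys = AS * pinv(S' * AS) * AS'
+
+# Relative error is small because the spectrum decays quickly
+norm(A - A_nys) / norm(A)
+```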
+
+Below, we present a summary of the decompositions in a table.
+
+|Approximation Name| General Matrices| Interpolative| Type| Form of Approximation|
+|:-----------------|:----------------|:-------------|:----|:---------------------|
+|RangeFinder| Yes| No| Orthogonal| ``A \approx QQ^\top A``|
+|RandSVD| Yes| No| Orthogonal| ``A \approx U \Sigma V^\top``|
+|Nystrom| Symmetric| Can be| Oblique| ``(AS)(S^\top A S)^\dagger(AS)^\top``|
+|Generalized Nystrom| Yes| Can be| Oblique| ``(AS_1)(S_2 A S_1)^\dagger (S_2 A)``|
+|CUR| Yes| Yes| Oblique| ``(A[:,J])U(A[I,:])``|
+|One-Sided-ID| Yes| Yes| Oblique| ``A[:,J]U_c`` or ``U_r A[I,:]``|
+
+In RLinearAlgebra, once you have obtained a low-rank approximation `Recipe`, you can use
+it to perform multiplications and, where supported, to precondition a linear system
+through the `ldiv!` function. The table below lists the approximation recipes and
+indicates how each can be used.
+
+|Approximation Name| `mul!`| `ldiv!`|
+|:-----------------|:------|:-------|
+|RangeFinderRecipe| Yes| No|
+|RandSVDRecipe| Yes| No|
+|NystromRecipe| Yes| No|
+|CURRecipe| Yes| No|
+|IDRecipe (One-Sided-ID)| Yes| No|
+
+# The Randomized Rangefinder
+The idea behind the randomized rangefinder is to find an orthogonal matrix ``Q`` such
+that ``A \approx QQ^\top A``. In their seminal work, [halko2011finding](@cite) showed
+that forming ``Q`` is as simple as compressing ``A`` from the right and storing the
+``Q`` factor of the resulting QR factorization. Despite the simplicity of this
+procedure, they were able to show that if the compression dimension ``k > 2``, then
+``\|A - QQ^\top A\|_F \leq \sqrt{k+1} (\sum_{i=k+1}^{\min{(m,n)}}\sigma_{i}^2)^{1/2}``,
+where ``\sigma_{i}`` is the ``i^\text{th}`` singular value of ``A`` (see Theorem 10.5
+of [halko2011finding](@cite)). This is very close to the error of the truncated SVD,
+which is known to be the lowest achievable error.
+
+For many matrices whose singular values decay quickly, this bound can be far more
+conservative than the observed performance. However, for some matrices whose singular
+values decay slowly, the bound is fairly tight. Luckily, power iterations can still
+improve the quality of the approximation. Power iterations repeatedly multiply the
+matrix with itself, which raises each singular value to a higher power. This widens the
+gaps between the singular values, making the dominant ones easier to capture accurately.
+In `RLinearAlgebra`, you can control the number of power iterations using the
+`power_its` keyword in the constructor.
+
+One issue with power iterations is that they can sometimes be numerically unstable. We
+can improve their stability by orthogonalizing between iterations. That is, instead of
+computing ``A (A^\top A S)`` directly as in the plain power iteration, we compute
+``A^\top (AS)``, take a QR factorization of this matrix to obtain an orthonormal ``Q``,
+and then compute ``AQ``. In RLinearAlgebra you can control whether or not the
+orthogonalization is performed using the `orthogonalize` keyword argument in the
+constructor.
+
+!!! info
+    If the cardinality of the compressor in the `RangeFinder` is not `Right()`, a
+    warning will be issued and the approximation may be incorrect.
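+
+The sketch below shows the whole procedure, including the stabilized power iteration, in
+plain `LinearAlgebra`. It is a simplified illustration of the algorithm rather than
+RLinearAlgebra's implementation; the function name `naive_rangefinder` and the Gaussian
+sketch are assumptions made for this example.
+```julia
+using LinearAlgebra
+
+# A bare-bones randomized rangefinder with optional orthogonalized power iterations
+function naive_rangefinder(A, k; power_its = 0, orthogonalize = false)
+    n = size(A, 2)
+    Y = A * randn(n, k)               # compress A from the right
+    for _ in 1:power_its
+        if orthogonalize
+            Z = A' * Matrix(qr(Y).Q)  # re-orthogonalize before each application
+            Y = A * Matrix(qr(Z).Q)
+        else
+            Y = A * (A' * Y)          # plain power iteration
+        end
+    end
+    return Matrix(qr(Y).Q)            # orthonormal basis for the approximate range
+end
+
+A = randn(1000, 5) * randn(5, 1000)
+Q = naive_rangefinder(A, 5, power_its = 2, orthogonalize = true)
+norm(A - Q * (Q' * A))                # near zero, since A is exactly rank 5
+```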
+
+## A RangeFinder Example
+Suppose we wish to obtain a rank-5 approximation of a matrix with 1000 rows and columns.
+In RLinearAlgebra.jl we can do this by first generating a `RangeFinder` `Approximator`.
+This requires us to specify a `Compressor` with the desired rank of approximation as its
+`compression_dim` and with `cardinality = Right()`, the number of power iterations we
+want performed, and whether we want to orthogonalize the power iterations.
+
+Here, we define a `RangeFinder` structure with an `FJLT` compressor with
+`compression_dim = 5` and `cardinality = Right()`. After defining this structure, we use
+`rapproximate` to generate a `RangeFinderRecipe` and then compute the approximation
+error, relying on the ability to multiply with `RangeFinderRecipe`s.
+```julia
+using RLinearAlgebra, LinearAlgebra
+
+# Generate the matrix we wish to approximate
+A = randn(1000, 5) * randn(5, 1000);
+
+# Form the RangeFinder structure
+approx = RangeFinder(
+    compressor = FJLT(compression_dim = 5, cardinality = Right())
+)
+
+# Approximate A
+range_A = rapproximate(approx, A)
+
+# Check the error of the approximation
+norm(A - range_A * (range_A' * A))
+```
+To see the benefits of power iterations, we consider the same example but now with
+`compression_dim = 3` and compare four errors: the error of the truncated SVD, the error
+of an approximation with no power iterations, the error of an approximation with 10
+power iterations but no orthogonalization, and the error of an approximation with 10
+power iterations and orthogonalization.
+
+```julia
+# Error of the rank-3 truncated SVD: sqrt of the sum of squared singular values 4:1000
+printstyled("Error of rank 3 truncated SVD:",
+    sqrt(sum(svd(A).S[4:end].^2)),
+    "\n"
+)
+
+# Try a compression dimension of 3 with no power iterations or orthogonalization
+approx = RangeFinder(
+    compressor = FJLT(compression_dim = 3, cardinality = Right())
+);
+
+range_A = rapproximate(approx, A);
+
+printstyled("Error of rank 3 approximation:",
+    norm(A - range_A * (range_A' * A)),
+    "\n"
+)
+
+# Now add power iterations without orthogonalization
+approx_pi = RangeFinder(
+    compressor = FJLT(compression_dim = 3, cardinality = Right()),
+    power_its = 10
+);
+
+range_A_pi = rapproximate(approx_pi, A);
+
+printstyled("Error with 10 power its, no orthogonalization:",
+    norm(A - range_A_pi * (range_A_pi' * A)),
+    "\n"
+)
+
+# Now add orthogonalization to the power iterations
+approx_pi_o = RangeFinder(
+    compressor = FJLT(compression_dim = 3, cardinality = Right()),
+    power_its = 10,
+    orthogonalize = true
+);
+
+range_A_pi_o = rapproximate(approx_pi_o, A);
+
+printstyled("Error with 10 power its and orthogonalization:",
+    norm(A - range_A_pi_o * (range_A_pi_o' * A)),
+    "\n"
+)
+```
+# The RandSVD
+The RandomizedSVD is a form of low-rank approximation that returns approximations of the
+singular values and singular vectors of the truncated SVD. Algorithmically, it is
+implemented as three steps appended to the Randomized Rangefinder of
+[halko2011finding](@cite), as sketched after this list:
+1. Take the ``Q`` matrix from the Randomized Rangefinder and compute ``Q^\top A``.
+2. Compute ``W, S, V = \text{svd}(Q^\top A)``.
+3. Obtain the left singular vectors as ``U = QW``.
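+
+These steps are cheap because the SVD is taken of the small matrix ``Q^\top A`` rather
+than of ``A`` itself. The plain-`LinearAlgebra` sketch below applies the three steps to
+a rangefinder basis built from a Gaussian sketch; it is an illustration, not the
+package's implementation.
+```julia
+using LinearAlgebra
+
+A = randn(1000, 5) * randn(5, 1000)
+Q = Matrix(qr(A * randn(1000, 5)).Q)  # rangefinder basis from a Gaussian sketch
+
+B = Q' * A                # Step 1: project A onto the captured range (small, k × n)
+F = svd(B)                # Step 2: SVD of the small matrix; F.U plays the role of W
+U = Q * F.U               # Step 3: lift the left singular vectors back to m rows
+
+# U * Diagonal(F.S) * F.Vt now approximates A
+norm(A - U * Diagonal(F.S) * F.Vt)
+```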
+
+Since the RandomizedSVD is simply an extension of the Randomized RangeFinder, the
+effects of all its modifications, such as power iterations and orthogonalization, still
+apply. The difference between the two procedures is found in the `Recipe`s. With a
+`RandSVDRecipe` you obtain an approximate truncated SVD: the singular values can be
+accessed by calling `recipe.S`, the left singular vectors by calling `recipe.U`, and the
+right singular vectors by calling `recipe.V`. Additionally, when you multiply with the
+RandomizedSVD it is as if you are multiplying with the truncated SVD, meaning that for a
+vector ``x`` the operation ``U S V^\top x`` is performed. This type of multiplication
+can be substantially faster than multiplication with the original matrix.
+
+!!! info
+    As with the `RangeFinder`, if the cardinality of the compressor is not `Right()` a
+    warning will be issued and the approximation may be incorrect.
+
+## A RandSVD example
+We now demonstrate how to use the RandSVD. We first generate the `RandSVD` structure
+with an `FJLT` compressor with `compression_dim = 5` and `cardinality = Right()`. We
+then run `rapproximate` and compare the singular values of the returned recipe to the
+first 5 singular values of the truncated SVD. We end the experiment by comparing
+multiplication of a vector by our `RandSVDRecipe` with multiplication by the original
+matrix.
+
+```julia
+using RLinearAlgebra, LinearAlgebra
+
+# Generate the matrix we wish to approximate
+A = randn(1000, 5) * randn(5, 1000);
+
+# Form the RandSVD structure
+approx = RandSVD(
+    compressor = FJLT(compression_dim = 5, cardinality = Right())
+)
+
+# Approximate A
+randsvd_A = rapproximate(approx, A)
+
+# Compare singular values
+svd(A).S[1:5]
+
+randsvd_A.S
+
+# Compare multiplications
+x = rand(1000);
+
+norm(A * x - randsvd_A * x)
+```
diff --git a/docs/src/refs.bib b/docs/src/refs.bib
index f504507c..59a17002 100644
--- a/docs/src/refs.bib
+++ b/docs/src/refs.bib
@@ -1,160 +1,195 @@
+%% This BibTeX bibliography file was created using BibDesk.
+%% https://bibdesk.sourceforge.io/
+
+%% Created for Nathaniel pritchard at 2025-10-08 10:50:23 +0100
+
+
+%% Saved with string encoding Unicode (UTF-8)
+
+
+
 @article{ailon2009fast,
-  title = {The {{Fast Johnson}}--{{Lindenstrauss Transform}} and {{Approximate Nearest Neighbors}}},
-  author = {Ailon, Nir and Chazelle, Bernard},
-  year = {2009},
-  month = jan,
-  journal = {SIAM Journal on Computing},
-  volume = {39},
-  number = {1},
-  pages = {302--322},
-  issn = {0097-5397, 1095-7111},
-  doi = {10.1137/060673096},
-  langid = {english},
-}
+  author = {Ailon, Nir and Chazelle, Bernard},
+  doi = {10.1137/060673096},
+  issn = {0097-5397, 1095-7111},
+  journal = {SIAM Journal on Computing},
+  month = jan,
+  number = {1},
+  pages = {302--322},
+  title = {The {{Fast Johnson}}--{{Lindenstrauss Transform}} and {{Approximate Nearest Neighbors}}},
+  volume = {39},
+  year = {2009},
+  bdsk-url-1 = {https://doi.org/10.1137/060673096}}
 @article{halko2011finding,
-  title = {Finding {{Structure}} with {{Randomness}}: {{Probabilistic Algorithms}} for {{Constructing Approximate Matrix Decompositions}}},
-  shorttitle = {Finding {{Structure}} with {{Randomness}}},
-  author = {Halko, N. and Martinsson, P. G. and Tropp, J. A.},
-  year = {2011},
-  month = jan,
-  journal = {SIAM Review},
-  volume = {53},
-  number = {2},
-  pages = {217--288},
-  issn = {0036-1445, 1095-7200},
-  doi = {10.1137/090771806},
-  langid = {english}
-}
+  author = {Halko, N. and Martinsson, P. G. and Tropp, J. A.},
+  doi = {10.1137/090771806},
+  issn = {0036-1445, 1095-7200},
+  journal = {SIAM Review},
+  month = jan,
+  number = {2},
+  pages = {217--288},
+  shorttitle = {Finding {{Structure}} with {{Randomness}}},
+  title = {Finding {{Structure}} with {{Randomness}}: {{Probabilistic Algorithms}} for {{Constructing Approximate Matrix Decompositions}}},
+  volume = {53},
+  year = {2011},
+  bdsk-url-1 = {https://doi.org/10.1137/090771806}}
 @misc{martinsson2020randomized,
-  title = {Randomized {{Numerical Linear Algebra}}: {{Foundations}} \& {{Algorithms}}},
-  shorttitle = {Randomized {{Numerical Linear Algebra}}},
-  author = {Martinsson, Per-Gunnar and Tropp, Joel},
-  year = {2020},
-  publisher = {arXiv},
-  doi = {10.48550/ARXIV.2002.01387},
-  copyright = {arXiv.org perpetual, non-exclusive license},
-  keywords = {FOS: Mathematics,Numerical Analysis (math.NA)}
-}
+  author = {Martinsson, Per-Gunnar and Tropp, Joel},
+  doi = {10.48550/ARXIV.2002.01387},
+  publisher = {arXiv},
+  shorttitle = {Randomized {{Numerical Linear Algebra}}},
+  title = {Randomized {{Numerical Linear Algebra}}: {{Foundations}} \& {{Algorithms}}},
+  year = {2020},
+  bdsk-url-1 = {https://doi.org/10.48550/ARXIV.2002.01387}}
 @article{motzkin1954relaxation,
-  title = {The {{Relaxation Method}} for {{Linear Inequalities}}},
-  author = {Motzkin, T. S. and Schoenberg, I. J.},
-  year = {1954},
-  journal = {Canadian Journal of Mathematics},
-  volume = {6},
-  pages = {393--404},
-  issn = {0008-414X, 1496-4279},
-  doi = {10.4153/CJM-1954-038-x},
-  copyright = {https://www.cambridge.org/core/terms},
-  langid = {english},
-}
+  author = {Motzkin, T. S. and Schoenberg, I. J.},
+  doi = {10.4153/CJM-1954-038-x},
+  issn = {0008-414X, 1496-4279},
+  journal = {Canadian Journal of Mathematics},
+  pages = {393--404},
+  title = {The {{Relaxation Method}} for {{Linear Inequalities}}},
+  volume = {6},
+  year = {1954},
+  bdsk-url-1 = {https://doi.org/10.4153/CJM-1954-038-x}}
 @article{needell2014paved,
-  title = {Paved with Good Intentions: {{Analysis}} of a Randomized Block {{Kaczmarz}} Method},
-  shorttitle = {Paved with Good Intentions},
-  author = {Needell, Deanna and Tropp, Joel A.},
-  year = {2014},
-  month = jan,
-  journal = {Linear Algebra and its Applications},
-  volume = {441},
-  pages = {199--221},
-  issn = {00243795},
-  doi = {10.1016/j.laa.2012.12.022},
-  langid = {english},
-}
+  author = {Needell, Deanna and Tropp, Joel A.},
+  doi = {10.1016/j.laa.2012.12.022},
+  issn = {00243795},
+  journal = {Linear Algebra and its Applications},
+  month = jan,
+  pages = {199--221},
+  shorttitle = {Paved with Good Intentions},
+  title = {Paved with Good Intentions: {{Analysis}} of a Randomized Block {{Kaczmarz}} Method},
+  volume = {441},
+  year = {2014},
+  bdsk-url-1 = {https://doi.org/10.1016/j.laa.2012.12.022}}
 @article{patel2023randomized,
-  title = {Randomized {{Block Adaptive Linear System Solvers}}},
-  author = {Patel, Vivak and Jahangoshahi, Mohammad and Maldonado, D. Adrian},
-  year = {2023},
-  month = sep,
-  journal = {SIAM Journal on Matrix Analysis and Applications},
-  volume = {44},
-  number = {3},
-  pages = {1349--1369},
-  issn = {0895-4798, 1095-7162},
-  doi = {10.1137/22M1488715},
-  langid = {english},
-}
+  author = {Patel, Vivak and Jahangoshahi, Mohammad and Maldonado, D. Adrian},
+  doi = {10.1137/22M1488715},
+  issn = {0895-4798, 1095-7162},
+  journal = {SIAM Journal on Matrix Analysis and Applications},
+  month = sep,
+  number = {3},
+  pages = {1349--1369},
+  title = {Randomized {{Block Adaptive Linear System Solvers}}},
+  volume = {44},
+  year = {2023},
+  bdsk-url-1 = {https://doi.org/10.1137/22M1488715}}
 @misc{pilanci2014iterative,
-  title = {Iterative {{Hessian}} Sketch: {{Fast}} and Accurate Solution Approximation for Constrained Least-Squares},
-  shorttitle = {Iterative {{Hessian}} Sketch},
-  author = {Pilanci, Mert and Wainwright, Martin J.},
-  year = {2014},
-  publisher = {arXiv},
-  doi = {10.48550/ARXIV.1411.0347},
-  copyright = {arXiv.org perpetual, non-exclusive license},
-  keywords = {FOS: Computer and information sciences,FOS: Mathematics,Information Theory (cs.IT),Machine Learning (cs.LG),Machine Learning (stat.ML),Optimization and Control (math.OC)}
-}
+  author = {Pilanci, Mert and Wainwright, Martin J.},
+  doi = {10.48550/ARXIV.1411.0347},
+  publisher = {arXiv},
+  shorttitle = {Iterative {{Hessian}} Sketch},
+  title = {Iterative {{Hessian}} Sketch: {{Fast}} and Accurate Solution Approximation for Constrained Least-Squares},
+  year = {2014},
+  bdsk-url-1 = {https://doi.org/10.48550/ARXIV.1411.0347}}
 @article{pritchard2023practical,
-  title = {Towards {{Practical Large-Scale Randomized Iterative Least Squares Solvers}} through {{Uncertainty Quantification}}},
-  author = {Pritchard, Nathaniel and Patel, Vivak},
-  year = {2023},
-  month = sep,
-  journal = {SIAM/ASA Journal on Uncertainty Quantification},
-  volume = {11},
-  number = {3},
-  pages = {996--1024},
-  issn = {2166-2525},
-  doi = {10.1137/22M1515057},
-  langid = {english}
-}
+  author = {Pritchard, Nathaniel and Patel, Vivak},
+  doi = {10.1137/22M1515057},
+  issn = {2166-2525},
+  journal = {SIAM/ASA Journal on Uncertainty Quantification},
+  month = sep,
+  number = {3},
+  pages = {996--1024},
+  title = {Towards {{Practical Large-Scale Randomized Iterative Least Squares Solvers}} through {{Uncertainty Quantification}}},
+  volume = {11},
+  year = {2023},
+  bdsk-url-1 = {https://doi.org/10.1137/22M1515057}}
 @article{pritchard2024solving,
-  title = {Solving, Tracking and Stopping Streaming Linear Inverse Problems},
-  author = {Pritchard, Nathaniel and Patel, Vivak},
-  year = {2024},
-  month = aug,
-  journal = {Inverse Problems},
-  volume = {40},
-  number = {8},
-  pages = {085003},
-  issn = {0266-5611, 1361-6420},
-  doi = {10.1088/1361-6420/ad5583},
-}
+  author = {Pritchard, Nathaniel and Patel, Vivak},
+  doi = {10.1088/1361-6420/ad5583},
+  issn = {0266-5611, 1361-6420},
+  journal = {Inverse Problems},
+  month = aug,
+  number = {8},
+  pages = {085003},
+  title = {Solving, Tracking and Stopping Streaming Linear Inverse Problems},
+  volume = {40},
+  year = {2024},
+  bdsk-url-1 = {https://doi.org/10.1088/1361-6420/ad5583}}
 @article{strohmer2009randomized,
-  title = {A {{Randomized Kaczmarz Algorithm}} with {{Exponential Convergence}}},
-  author = {Strohmer, Thomas and Vershynin, Roman},
-  year = {2009},
-  month = apr,
-  journal = {Journal of Fourier Analysis and Applications},
-  volume = {15},
-  number = {2},
-  pages = {262--278},
-  issn = {1069-5869, 1531-5851},
-  doi = {10.1007/s00041-008-9030-4},
-  copyright = {http://www.springer.com/tdm},
-  langid = {english}
-}
+  author = {Strohmer, Thomas and Vershynin, Roman},
+  doi = {10.1007/s00041-008-9030-4},
+  issn = {1069-5869, 1531-5851},
+  journal = {Journal of Fourier Analysis and Applications},
+  month = apr,
+  number = {2},
+  pages = {262--278},
+  title = {A {{Randomized Kaczmarz Algorithm}} with {{Exponential Convergence}}},
+  volume = {15},
+  year = {2009},
+  bdsk-url-1 = {https://doi.org/10.1007/s00041-008-9030-4}}
 @article{tropp2011improved,
-  title = {{{Improved Analysis of the Subsampled Randomized Hadamard Transform}}},
-  author = {Tropp, Joel A.},
-  year = {2011},
-  month = apr,
-  journal = {Advances in Adaptive Data Analysis},
-  volume = {03},
-  number = {01n02},
-  pages = {115--126},
-  issn = {1793-5369, 1793-7175},
-  doi = {10.1142/S1793536911000787},
-  langid = {english},
-}
+  author = {Tropp, Joel A.},
+  doi = {10.1142/S1793536911000787},
+  issn = {1793-5369, 1793-7175},
+  journal = {Advances in Adaptive Data Analysis},
+  month = apr,
+  number = {01n02},
+  pages = {115--126},
+  title = {Improved {{Analysis}} of the {{Subsampled Randomized Hadamard Transform}}},
+  volume = {03},
+  year = {2011},
+  bdsk-url-1 = {https://doi.org/10.1142/S1793536911000787}}
 @article{woodruff2014sketching,
-  title = {Sketching as a {{Tool}} for {{Numerical Linear Algebra}}},
-  author = {Woodruff, David P.},
-  year = {2014},
-  journal = {Foundations and Trends in Theoretical Computer Science},
-  volume = {10},
-  number = {1-2},
-  pages = {1--157},
-  issn = {1551-305X, 1551-3068},
-  doi = {10.1561/0400000060},
-  langid = {english},
-}
+  author = {Woodruff, David P.},
+  date-modified = {2025-10-08 10:50:19 +0100},
+  doi = {10.1561/0400000060},
+  issn = {1551-305X, 1551-3068},
+  journal = {Foundations and Trends in Theoretical Computer Science},
+  number = {1-2},
+  pages = {1--157},
+  title = {Sketching as a {{Tool}} for {{Numerical Linear Algebra}}},
+  volume = {10},
+  year = {2014},
+  bdsk-url-1 = {https://doi.org/10.1561/0400000060}}
+@misc{park2025curing,
+  title = {{{CURing Large Models}}: {{Compression}} via {{CUR Decomposition}}},
+  shorttitle = {{{CURing Large Models}}},
+  author = {Park, Sanghyeon and Moon, Soo-Mook},
+  year = {2025},
+  month = jan,
+  number = {arXiv:2501.04211},
+  eprint = {2501.04211},
+  primaryclass = {cs},
+  publisher = {arXiv},
+  doi = {10.48550/arXiv.2501.04211},
+}
+@article{udell2019why,
+  title = {Why {{Are Big Data Matrices Approximately Low Rank}}?},
+  author = {Udell, Madeleine and Townsend, Alex},
+  year = {2019},
+  month = jan,
+  journal = {SIAM Journal on Mathematics of Data Science},
+  volume = {1},
+  number = {1},
+  pages = {144--160},
+  issn = {2577-0187},
+  doi = {10.1137/18M1183480}
+}
+@article{eckart1936approximation,
+  title = {The {{Approximation}} of {{One Matrix}} by {{Another}} of {{Lower Rank}}},
+  author = {Eckart, Carl and Young, Gale},
+  year = {1936},
+  month = sep,
+  journal = {Psychometrika},
+  volume = {1},
+  number = {3},
+  pages = {211--218},
+  issn = {0033-3123, 1860-0980},
+  doi = {10.1007/BF02288367},
+}
+
+
diff --git a/src/Approximators/RangeApproximators/rangefinder.jl b/src/Approximators/RangeApproximators/rangefinder.jl
index 5426dcea..5e225c91 100644
--- a/src/Approximators/RangeApproximators/rangefinder.jl
+++ b/src/Approximators/RangeApproximators/rangefinder.jl
@@ -13,7 +13,7 @@ Suppose we have a matrix ``A \\in \\mathbb{R}^{m \\times n}`` of which we wish t
     A simple way to find such a matrix is to choose a ``k`` representing the number of
     vectors we wish to have in the subspace. Then we can generate a compression matrix
     ``S\\in\\mathbb{R}^{n \\times k}`` and compute ``Q = \\text{qr}(AS)``.
-    With high probability we will have ``\\|A - QQ^\\top A\\|_2 \\leq
+    With high probability we will have ``\\|A - QQ^\\top A\\|_F \\leq
     \\sqrt{k+1} (\\sum_{i=k+1}^{\\min{(m,n)}}\\sigma_{i})^{1/2}``,
     where ``\\sigma_{k+1}`` is the ``k+1^\\text{th}`` singular value of A (see Theorem 10.5
     of [halko2011finding](@cite)). This bound is often conservative
@@ -86,7 +86,7 @@ RangeFinder(;
     compressor = SparseSign(cardinality = Right()),
     orthogonalize = false,
     power_its = 0
-) = RangeFinder(compressor, orthogonalize, power_its)
+) = RangeFinder(compressor, power_its, orthogonalize)
 
 """
     RangeFinderRecipe