@@ -29,7 +29,7 @@ a few ways:
    can slow down calculations. LinearSolve.jl has proper caches for fully preallocated no-GC workflows.
 3. LinearSolve.jl makes a lot of other optimizations, like factorization reuse and symbolic factorization reuse, automatic.
    Many of these optimizations are not even possible from the high-level APIs of things like Python's major libraries and MATLAB.
-4. LinearSolve.jl has a much more extensive set of sparse matrix solvers, which is why you see a major difference (2x-10x) for sparse
+4. LinearSolve.jl has a much more extensive set of sparse matrix solvers, which is why you see a major difference (2x-10x) for sparse
    matrices. Which sparse matrix solver between KLU, UMFPACK, Pardiso, etc. is optimal depends a lot on matrix sizes, sparsity patterns,
    and threading overheads. LinearSolve.jl's heuristics handle these kinds of issues.

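(For context on the caching and factorization-reuse points in the hunk above: the following is a minimal sketch of LinearSolve.jl's documented caching interface, not part of this commit; the matrix size and data are illustrative.)

```julia
using LinearSolve

# Illustrative problem; any square system works the same way.
A = rand(4, 4)
b = rand(4)
prob = LinearProblem(A, b)

# `init` builds a cache that holds the factorization of A.
linsolve = init(prob)
sol1 = solve!(linsolve)

# Updating the right-hand side on the cache reuses the existing
# factorization rather than refactorizing A from scratch.
linsolve.b = rand(4)
sol2 = solve!(linsolve)
```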
@@ -48,7 +48,7 @@ A = rand(n,n)
 b = rand(n)

 prob = LinearProblem(A,b)
-sol = solve(prob,IterativeSolvers_GMRES(),Pl=Pl,Pr=Pr)
+sol = solve(prob,IterativeSolversJL_GMRES(),Pl=Pl,Pr=Pr)
 ```

 If you want to use a "real" preconditioner under the norm `weights`, then one
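(The hunk above assumes `Pl` and `Pr` were constructed earlier in the snippet, outside the shown context. As a hedged illustration only: any operator supporting `ldiv!` can be passed as a preconditioner, and a diagonal left preconditioner is a minimal runnable choice; the concrete values below are assumptions, not text from this commit.)

```julia
using LinearSolve, LinearAlgebra
using IterativeSolvers  # needed for the IterativeSolversJL wrappers

n = 4
A = rand(n, n)
b = rand(n)

# Any type implementing ldiv! can act as a preconditioner;
# Diagonal(A) is a simple Jacobi-style stand-in for illustration.
Pl = Diagonal(A)

prob = LinearProblem(A, b)
sol = solve(prob, IterativeSolversJL_GMRES(), Pl = Pl)
```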
@@ -64,5 +64,5 @@ A = rand(n,n)
 b = rand(n)

 prob = LinearProblem(A,b)
-sol = solve(prob,IterativeSolvers_GMRES(),Pl=Pl,Pr=Pr)
+sol = solve(prob,IterativeSolversJL_GMRES(),Pl=Pl,Pr=Pr)
 ```
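(The sentence introducing this hunk is truncated at the hunk boundary; it concerns preconditioning under the norm `weights`. The sketch below shows one way such `Pl`/`Pr` might be built with `LinearSolve.ComposePreconditioner` and `LinearSolve.InvPreconditioner`; the exact construction is an assumption based on LinearSolve.jl's preconditioner documentation, not text from this commit.)

```julia
using LinearSolve, LinearAlgebra
using IterativeSolvers  # needed for the IterativeSolversJL wrappers

n = 4
A = rand(n, n)
b = rand(n)
weights = rand(n)  # hypothetical norm weights

# Stand-in for the "real" preconditioner to apply under the weights;
# Diagonal(A) is only for illustration.
realprec = Diagonal(A)

# Assumed construction: invert the weighting on the left, apply the
# real preconditioner, and keep the weights on the right.
Pl = LinearSolve.ComposePreconditioner(LinearSolve.InvPreconditioner(Diagonal(weights)),
    realprec)
Pr = Diagonal(weights)

prob = LinearProblem(A, b)
sol = solve(prob, IterativeSolversJL_GMRES(), Pl = Pl, Pr = Pr)
```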