I have a fairly large rectangular sparse matrix $A$ (sizes like $50k \times 100k$ or $200k \times 300k$ are not unusual) and need to find a basis for its null space. At the moment I form the square symmetric matrix $A^T A$ and use an iterative solver to find a few eigenvectors corresponding to almost-zero eigenvalues. scipy.sparse.linalg.eigsh does an impressive job (wrapping ARPACK) and I get quite reasonable results. Usually I know the dimension of the null space a priori, and there is a visible gap in the magnitudes of the almost-zero eigenvalues (for instance, for a certain matrix the dimension of the null space is $5$ and I obtain five eigenvalues of order $10^{-16}$; if I solve for more, the next ones are of order $10^{-9}$ to $10^{-7}$).
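For context, this is roughly what the working eigsh pipeline looks like (a minimal, self-contained sketch; the toy matrix and the shift of $10^{-10}$ are just stand-ins for my actual problem):

import scipy.sparse
import scipy.sparse.linalg

# Toy stand-in for the real matrix: 300 x 200 of rank 195, so ker(A) has
# dimension 5 (the last five coordinate directions).
m, n, k = 300, 200, 5
left = scipy.sparse.vstack([scipy.sparse.eye(n - k),
                            scipy.sparse.csr_matrix((m - (n - k), n - k))])
A = scipy.sparse.hstack([left, scipy.sparse.csr_matrix((m, k))]).tocsr()

AtA = (A.T @ A).tocsc()  # square, symmetric, positive semi-definite

# Shift-invert around a tiny sigma and ask for the largest magnitude in the
# transformed spectrum, i.e. the eigenvalues of A^T A closest to zero.
vals, vecs = scipy.sparse.linalg.eigsh(AtA, k=k, sigma=1e-10, which='LM')
print(vals)        # the k (numerically) zero eigenvalues
null_basis = vecs  # columns form an orthonormal basis of ker(A)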
I now want to try out the SLEPcEigenSolver wrapper class for comparison (which is also based on ARPACK, if I remember correctly).
However, I'm not sure which settings are best for my purpose. For example, I've tried a very simple, singular $4 \times 4$ matrix $A$ with $a_{11}=a_{22}=1$ and all other entries $=0$ (in particular $A^T A = A$), so the expected two-dimensional eigenspace for the (almost-)zero eigenvalue should be spanned by $(0,0,1,0)$ and $(0,0,0,1)$, but the solver does not converge (in contrast to eigsh, which does).
Here is a snippet:
import scipy.sparse
from petsc4py import PETSc
from dolfin import PETScMatrix, SLEPcEigenSolver

A = scipy.sparse.lil_matrix((4, 4))
A[0, 0] = 1
A[1, 1] = 1
A = A.tocsr()

# Wrap the scipy matrix as a PETSc matrix so that SLEPcEigenSolver accepts it.
petsc_A = PETSc.Mat().createAIJ(size=A.shape, csr=(A.indptr, A.indices, A.data))
A_petsc = PETScMatrix(petsc_A)

sigma = 1e-14
num_of_vecs = 2  # expected dimension of the null space

eigensolver = SLEPcEigenSolver(A_petsc)
# 'target magnitude' solves for eigenvalues close to the given spectral shift sigma.
eigensolver.parameters['spectrum'] = 'target magnitude'
eigensolver.parameters["solver"] = 'krylov-schur'  # 'arnoldi' doesn't work either
eigensolver.parameters["spectral_shift"] = sigma
eigensolver.parameters["spectral_transform"] = 'shift-and-invert'
eigensolver.parameters["problem_type"] = "hermitian"
eigensolver.solve(num_of_vecs)
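After solve I would then inspect the results roughly like this (following the pattern of the DOLFIN eigenvalue demo):

nconv = eigensolver.get_number_converged()
print("converged eigenpairs:", nconv)
for i in range(nconv):
    # real/imaginary part of the eigenvalue and of the eigenvector
    r, c, rx, cx = eigensolver.get_eigenpair(i)
    print(i, r, rx.get_local())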
A few questions:
Q1: What are the best settings for SLEPcEigenSolver to solve for almost-zero eigenvalues? With eigsh I use shift-invert and specify largest magnitude as the target spectrum (as in the sketch above), but I'm not sure whether the same applies to SLEPcEigenSolver as well.
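For what it's worth, my current understanding is that the DOLFIN parameters in my snippet map to roughly the following slepc4py-level setup (this mapping is my own interpretation, not something I have verified in the DOLFIN sources; petsc_A, sigma and num_of_vecs are from the snippet above):

from slepc4py import SLEPc

eps = SLEPc.EPS().create()
eps.setOperators(petsc_A)                                 # the PETSc matrix built above
eps.setProblemType(SLEPc.EPS.ProblemType.HEP)             # "hermitian"
eps.setType(SLEPc.EPS.Type.KRYLOVSCHUR)                   # "krylov-schur"
eps.setWhichEigenpairs(SLEPc.EPS.Which.TARGET_MAGNITUDE)  # "target magnitude"
eps.setTarget(sigma)                                      # "spectral_shift"
eps.getST().setType(SLEPc.ST.Type.SINVERT)                # "shift-and-invert"
eps.setDimensions(nev=num_of_vecs)
eps.solve()
print(eps.getConverged())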
Q2: (general numerics) Is there a better strategy to obtain a basis for the null space for matrices of this size? Direct methods seem inappropriate, and SVDs for sparse matrices are often themselves based on iterative eigensolvers. The only reasonable alternative that came to my mind is some kind of sparse QR, and these slides suggest that this might indeed be a performant alternative: they obtain the factorization of a $62k \times 77k$ matrix in $0.2$ seconds! Unfortunately, there is no convenient Python wrapper for SPQR, which I have so far only found in Eigen and SuiteSparse.
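To make the QR route concrete, here is a small dense sketch of the linear algebra I have in mind, with scipy.linalg.qr standing in for a proper sparse QR like SPQR (the rank tolerance is ad hoc):

import numpy as np
import scipy.linalg

# Small dense stand-in: a 6 x 4 matrix of rank 2, so ker(A) has dimension 2.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 2))
A = np.hstack([B, B])  # columns 2, 3 repeat columns 0, 1  ->  rank 2

# Column-pivoted (rank-revealing) QR of A^T:  A^T P = Q R.
Q, R, piv = scipy.linalg.qr(A.T, pivoting=True)
rank = int(np.sum(np.abs(np.diag(R)) > 1e-12 * np.abs(R[0, 0])))

# The trailing n - rank columns of Q form an orthonormal basis of ker(A).
N = Q[:, rank:]
print(rank, np.linalg.norm(A @ N))  # residual should be at machine-precision level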
Thanks!
Disclaimer: I know that this is more of a SLEPc / general numerics question, but since the SLEPcEigenSolver class is quite fundamental for many FEniCS applications, I thought this is a good place to ask nevertheless.