
SLEPc no longer works in parallel

+1 vote

In older versions of FEniCS I could successfully run SLEPc to find eigenvalues in parallel, but this no longer seems to work (see also this question). For example, if I run the code below in serial it successfully computes 3 eigenvalues as requested, but if it is run with mpirun it reports 0 eigenvalues successfully computed. Is it still possible to run SLEPc in parallel with FEniCS 1.6.0? If so, could someone explain how, or post example code? Thanks.

from dolfin import *
mesh = UnitSquareMesh(25, 25)
V = FunctionSpace(mesh, "CG", 1)
u = TrialFunction(V)
v = TestFunction(V)
a = dot(grad(u), grad(v))*dx
m = u*v*dx

# Assemble the stiffness matrix and the mass matrix.
A = assemble(a)
M = assemble(m)
A = as_backend_type(A)
M = as_backend_type(M)

# Create eigensolver and compute eigenvalues
eigensolver = SLEPcEigenSolver(A, M)
eigensolver.parameters["spectrum"] = "smallest magnitude"
eigensolver.solve(3)

# Check the number of eigenvalues that converged.
# Extract the eigenvalues (ignore the imaginary part) and compare with the exact values
nconv = eigensolver.get_number_converged()
if MPI.rank(mesh.mpi_comm()) == 0:
    print "Number of eigenvalues successfully computed: ", nconv
eigenvalues = []
for i in range(nconv):
    r, c, rx, cx = eigensolver.get_eigenpair(i)  # real/imaginary parts of eigenvalue and eigenvector
    eigenvalues.append(r)

if MPI.rank(mesh.mpi_comm()) == 0:
    print "Smallest positive eigenvalues computed and exact: "
    for i in range(min(nconv, 3)):
        print eigenvalues[i]
asked May 20, 2016 by dnarnold FEniCS User (2,360 points)

1 Answer

+2 votes

Hi, interestingly enough your code works in parallel with eigensolver = SLEPcEigenSolver(A), but I don't observe much scaling. The generalized eigenvalue problem works in parallel with the following, which calls slepc4py directly:

from dolfin import *
mesh = UnitSquareMesh(25, 25)
V = FunctionSpace(mesh, "CG", 1)
u = TrialFunction(V)
v = TestFunction(V)
a = dot(grad(u), grad(v))*dx
m = u*v*dx

print V.dim()  # global number of dofs (printed by each process)

# Assemble the stiffness matrix and the mass matrix.
A = assemble(a)
M = assemble(m)
A = as_backend_type(A).mat()
M = as_backend_type(M).mat()

from slepc4py import SLEPc
from petsc4py import PETSc

# Setup the eigensolver
solver = SLEPc.EPS().create()
solver.setOperators(A, M)
solver.setType(solver.Type.KRYLOVSCHUR)
solver.setDimensions(3, PETSc.DECIDE)
solver.setWhichEigenpairs(solver.Which.SMALLEST_REAL)
solver.setTolerances(1E-8, 1000)

solver.solve()

nconv = solver.getConverged()
niters = solver.getIterationNumber()
if MPI.rank(mesh.mpi_comm()) == 0:
    print "Number of eigenvalues successfully computed: ", nconv
    print "Iterations used", niters

eigenvalues = []
for i in range(nconv):
    r = solver.getEigenvalue(i).real  # real part of eigenvalue
    eigenvalues.append(r)

if MPI.rank(mesh.mpi_comm()) == 0:
    print "Smallest positive eigenvalues computed and exact: "
    for i in range(min(nconv, 3)):
        print eigenvalues[i]
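
Since the last print says "computed and exact" but the exact values are never formed, here is a small sketch that appends the continuous reference values to the script above. With no Dirichlet BCs the forms approximate the Neumann Laplacian on the unit square, whose exact eigenvalues are pi^2*(m^2 + n^2) for m, n = 0, 1, 2, ...; the comparison assumes that is the intended reference.

from math import pi

# Exact eigenvalues of the Neumann Laplacian on the unit square,
# sorted so they line up with the computed smallest eigenvalues.
exact = sorted(pi**2*(m**2 + n**2) for m in range(4) for n in range(4))

if MPI.rank(mesh.mpi_comm()) == 0:
    for computed, reference in zip(eigenvalues, exact):
        print computed, reference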

By "works" I mean that it gets approximately the same answer with 1 CPU as with more.
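
If the smallest eigenvalues converge slowly, a shift-and-invert spectral transformation around zero is the usual remedy in SLEPc. A minimal sketch of the extra slepc4py calls, to go before solver.solve() above (untested here; in parallel the inner solve then needs a parallel direct solver such as MUMPS in the PETSc build):

st = solver.getST()                                           # spectral transformation object
st.setType(SLEPc.ST.Type.SINVERT)                             # shift-and-invert
solver.setTarget(0.0)                                         # look for eigenvalues near zero
solver.setWhichEigenpairs(SLEPc.EPS.Which.TARGET_MAGNITUDE)   # closest to the target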

answered May 20, 2016 by MiroK FEniCS Expert (80,920 points)
...