I am solving a generalized eigenvalue problem (A x = λ B x) with large input data. My laptop does not have enough memory, so I want to run the task on a cluster. This is the relevant part of my script:
from dolfin import *

# fem_order and group_size are set earlier in the script
mesh = Mesh("domain.xml")
subdomains = MeshFunction("size_t", mesh, "domain_physical_region.xml")
dx = Measure("dx")[subdomains]
V = FunctionSpace(mesh, "CG", fem_order)
V0 = FunctionSpace(mesh, "DG", 0)
# one CG space per group, combined into a mixed space
V_array = [V for i in range(group_size)]
W = MixedFunctionSpace(V_array)
u = TrialFunctions(W)
v = TestFunctions(W)
# ... many coefficients defined here (omitted) ...
a = 0
b = 0
for i in range(group_size):
    a += ...  # long expression in u[i] and v[i] (omitted)
    b += ...  # long expression in u[i] and v[i] (omitted)
A = PETScMatrix()
B = PETScMatrix()
assemble(a, tensor=A)
assemble(b, tensor=B)
eigensolver = SLEPcEigenSolver(A, B)
eigensolver.solve(1)  # compute the first eigenpair
r, c, rx, cx = eigensolver.get_eigenpair(0)
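I am using the default SLEPcEigenSolver settings. In case it matters, a configuration along the following lines (set before calling solve) is what I would expect to tune; the values below are only guesses on my part, not something I have verified:

eigensolver.parameters["solver"] = "krylov-schur"
eigensolver.parameters["problem_type"] = "gen_hermitian"  # assuming A and B are symmetric
eigensolver.parameters["spectrum"] = "target magnitude"
eigensolver.parameters["spectral_transform"] = "shift-and-invert"
eigensolver.parameters["spectral_shift"] = 0.0  # placeholder shift, not tuned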
I tried running it with mpirun -np 2 solver.py, but it does not speed up.
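To check whether both MPI processes actually start, a minimal test along these lines should print one line per rank (this uses mpi4py directly, which I assume is available alongside FEniCS; it is only a diagnostic sketch, not part of the solver):

from mpi4py import MPI
# each rank reports itself; with mpirun -np 2 this should print two lines
print("rank %d of %d" % (MPI.COMM_WORLD.rank, MPI.COMM_WORLD.size))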
Does the SLEPc solver work in parallel? If so, how?
Thanks in advance!