In an application, I assemble a matrix manually with FIAT and then add this contribution to a matrix assembled in the usual way through UFL. The following minimal example runs fine sequentially, but when I run it in parallel with "mpirun -n 2 python file.py", I get an error.
from dolfin import *
# Create mesh and space
s = 21
mesh = UnitSquareMesh(s, s)
V = FunctionSpace(mesh, 'CG', 1)
dim = V.dim()
# Setup zero matrix
Mat = PETScMatrix()
mat = Mat.mat()
mat.create()
mat.setSizes((dim, dim))
mat.setType('aij')
mat.setUp()
# Add some entries
# for p in range(dim):
#     mat[p, p] = 1.
mat.assemble()
A1 = PETScMatrix(mat)
# Define second matrix
u = TrialFunction(V)
v = TestFunction(V)
a = u*v*dx
A2 = assemble(a)
# Add matrices in order to solve the associated linear system
A3 = A1 + A2
# A3 = A1 + as_backend_type(A2)
The error is the following.
-------------------------------------------------------------------------
*** Error: Unable to successfully call PETSc function 'MatAXPY'.
*** Reason: PETSc error code is: 63.
*** Where: This error was encountered inside /home/fenics-work/FEniCS/src/dolfin/dolfin/la/PETScMatrix.cpp.
*** Process: unknown
***
*** DOLFIN version: 1.4.0+
*** Git changeset: efaa954b779b7844c490ab1b82b68cb70931e009
***
-------------------------------------------------------------------------
I suspect this error has to do with the PETScMatrix A1 not being partitioned among the MPI processes in the same way as A2, as hinted in this question and this question.
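
To illustrate what I mean, here is a sketch of how I imagine the layouts could be matched: assemble A2 first, then create the empty matrix on A2's communicator with A2's local sizes instead of only the global sizes, and only insert entries in the locally owned row range. I have not verified that this actually resolves the MatAXPY error; the explicit petsc4py import and the getLocalSize/getOwnershipRange queries are just my guess at the right way to reuse A2's layout.

from dolfin import *
from petsc4py import PETSc

mesh = UnitSquareMesh(21, 21)
V = FunctionSpace(mesh, 'CG', 1)
dim = V.dim()

# Assemble A2 first so that its parallel layout can be reused
u = TrialFunction(V)
v = TestFunction(V)
A2 = as_backend_type(assemble(u*v*dx))
p_mat = A2.mat()
local_rows, local_cols = p_mat.getLocalSize()

# Create the zero matrix on the same communicator and with the same
# local sizes, so both matrices are partitioned identically
mat = PETSc.Mat().create(comm=p_mat.getComm())
mat.setSizes(((local_rows, dim), (local_cols, dim)))
mat.setType('aij')
mat.setUp()

# Insert entries only in the locally owned row range
row_start, row_end = mat.getOwnershipRange()
# for p in range(row_start, row_end):
#     mat[p, p] = 1.
mat.assemble()

A1 = PETScMatrix(mat)
A3 = A1 + A2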
An ideal solution would let me use A3 in the usual FEniCS way. That said, since FEniCS now allows calling PETSc solvers directly, having A3 as a PETScMatrix is probably acceptable.
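
For reference, this is roughly how I would expect to use the summed matrix to solve a linear system through the DOLFIN PETSc wrappers. This is only a self-contained sketch: the mass matrix below just stands in for A3, and the choice of "cg" as the Krylov method is arbitrary.

from dolfin import *

mesh = UnitSquareMesh(21, 21)
V = FunctionSpace(mesh, 'CG', 1)
u, v = TrialFunction(V), TestFunction(V)

# Stand-in for the sum A1 + A2
A3 = as_backend_type(assemble(u*v*dx))
b = assemble(Constant(1.0)*v*dx)
x = Function(V)

# Solve A3 x = b with a PETSc Krylov solver via the DOLFIN wrapper
solver = PETScKrylovSolver("cg")
solver.set_operator(A3)
solver.solve(x.vector(), b)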