
PETScMatrix built from petsc4py matrix cannot do a matvec in parallel

+1 vote

In Python, I assemble a PETSc matrix using petsc4py and then convert it to dolfin's PETScMatrix format. When I try to do a matvec with the resulting PETScMatrix on multiple processes, I get a RuntimeError.

Below is a simplified version of the code. It works with a single process but fails on the last line when run with 2 processes,

mpirun -n 2 python code.py

giving the error message

Error: Unable to successfully call PETSc function 'MatMult'.
Reason: PETSc error code is: 60.
Where: This error was encountered inside /build/dolfin-k_QrtL/dolfin-1.6.0/dolfin/la/PETScMatrix.cpp.

import dolfin as dl
import petsc4py, sys
petsc4py.init(sys.argv)
from petsc4py import PETSc

from dolfin import MPI, mpi_comm_world
mycomm = mpi_comm_world()
mpisize = MPI.size(mycomm)
mpirank = MPI.rank(mycomm)

mesh = dl.UnitSquareMesh(2,2)
Vr = dl.FunctionSpace(mesh, 'Lagrange', 1)
Vc = dl.FunctionSpace(mesh, 'Lagrange', 1)
MPETSc = PETSc.Mat()
MPETSc.create(PETSc.COMM_WORLD)
MPETSc.setSizes([Vr.dim(), Vc.dim()])
MPETSc.setType('aij') # sparse
MPETSc.setPreallocationNNZ(5)
MPETSc.setUp()
Istart, Iend = MPETSc.getOwnershipRange()
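# fill a banded test pattern over the rows owned by this process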
for I in xrange(Istart, Iend) :
    MPETSc[I,I] = I
    if I-1 >= 0: MPETSc[I,I-1] = 1.0
    if I-2 >= 0: MPETSc[I,I-2] = -1.0
    if I+1 < Vc.dim():  MPETSc[I,I+1] = 1.0
    if I+2 < Vc.dim():  MPETSc[I,I+2] = -1.0
MPETSc.assemblyBegin()
MPETSc.assemblyEnd()

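# wrap the petsc4py matrix as a dolfin PETScMatrix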
Mdolfin = dl.PETScMatrix(MPETSc)

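# matvec with plain petsc4py objects (this part runs without error)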
x, y = MPETSc.getVecs()
x.set(1.0)
y.set(0.0)
y = MPETSc * x

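# the same matvec through the dolfin wrapper: the last line fails with 2 processes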
x2 = dl.interpolate(dl.Constant(1.0), Vc)
y2 = Mdolfin * x2.vector()

I'm using the Ubuntu distribution of FEniCS 1.6.0 on Ubuntu 14.04 LTS.

Any help on this (is it a bug? is there a workaround?) would be very much appreciated.

Thank you!

asked Aug 29, 2016 by BC FEniCS Novice (790 points)

Follow-up: It turns out that petsc4py and dolfin do not partition the matrix in the same manner, which explains where the error comes from. The real questions now are:
Is anyone aware of a workaround to this issue?
And is there any way to define the partitioning of the matrix MPETSc in the above example so that it matches the mesh partition from FEniCS?
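
To see the mismatch concretely, one can print the row range PETSc assigned to each process next to the dof range dolfin assigned to it. This is just a small diagnostic appended to the example above, assuming the dolfin 1.6-era DofMap.ownership_range() call:

# compare the row partition chosen by PETSc with the dof partition
# chosen by dolfin when it distributed the mesh (run with mpirun -n 2)
petsc_rows = MPETSc.getOwnershipRange()        # (first, last+1) rows owned by this rank
dolfin_dofs = Vr.dofmap().ownership_range()    # (first, last+1) dofs owned by this rank
print("rank %d: PETSc rows %s, dolfin dofs %s" % (mpirank, petsc_rows, dolfin_dofs))

If the two ranges differ on some rank, the wrapped matrix and the dolfin vectors in the matvec disagree on their local sizes, which is what MatMult complains about (the same comparison applies to Vc and the matrix columns).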

1 Answer

+2 votes
 
Best answer

Alright, I'll just answer my own question then.
1. Yes, you can define the local sizes of the PETSc partition, and
2. yes, you can define the local-to-global dof map for the PETSc matrix using the local-to-global dof map defined by the mesh partition.

I only show the relevant (modified/added) lines from the original code,

...
mesh = dl.UnitSquareMesh(2,2)
Vr = dl.FunctionSpace(mesh, 'Lagrange', 1)
Vc = dl.FunctionSpace(mesh, 'Lagrange', 2)
Vrdofmap, Vcdofmap = Vr.dofmap(), Vc.dofmap()
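# local-to-global dof maps taken from the dolfin dof maps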
rmap = PETSc.LGMap().create(Vrdofmap.dofs(), mycomm)
cmap = PETSc.LGMap().create(Vcdofmap.dofs(), mycomm)
...
MPETSc.create(PETSc.COMM_WORLD)
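# sizes are [[local rows, global rows], [local cols, global cols]]; local sizes follow the dolfin dof partition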
MPETSc.setSizes([ [Vrdofmap.local_dimension("owned"), Vr.dim()], [Vcdofmap.local_dimension("owned"), Vc.dim()] ])
MPETSc.setType('aij') # sparse
MPETSc.setUp()
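# attach dolfin's local-to-global maps to the PETSc matrix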
MPETSc.setLGMap(rmap, cmap)
...

This works with FEniCS's default mesh generation (structured meshes). I have not tested it with unstructured meshes. For this reason, it's probably best to add a check somewhere to make sure that the dof partition is 'contiguous' (sometimes called 'monotone'), i.e. that each process owns a consecutive, increasing range of global dofs. Something like this should work:

Istart, Iend = MPETSc.getOwnershipRange()
assert list(Vrdofmap.dofs()) == range(Istart, Iend)
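
For reference, here is the whole thing in one listing, assembled from the original example and the modified lines above (dolfin 1.6-era API). I have not run it exactly as written, so treat it as a sketch rather than a definitive version:

import dolfin as dl
import petsc4py, sys
petsc4py.init(sys.argv)
from petsc4py import PETSc

from dolfin import mpi_comm_world
mycomm = mpi_comm_world()

mesh = dl.UnitSquareMesh(2,2)
Vr = dl.FunctionSpace(mesh, 'Lagrange', 1)
Vc = dl.FunctionSpace(mesh, 'Lagrange', 2)
Vrdofmap, Vcdofmap = Vr.dofmap(), Vc.dofmap()

# local-to-global dof maps coming from the dolfin mesh partition
rmap = PETSc.LGMap().create(Vrdofmap.dofs(), mycomm)
cmap = PETSc.LGMap().create(Vcdofmap.dofs(), mycomm)

MPETSc = PETSc.Mat()
MPETSc.create(PETSc.COMM_WORLD)
# sizes are [[local rows, global rows], [local cols, global cols]];
# the local sizes follow the dolfin dof partition
MPETSc.setSizes([ [Vrdofmap.local_dimension("owned"), Vr.dim()],
                  [Vcdofmap.local_dimension("owned"), Vc.dim()] ])
MPETSc.setType('aij') # sparse
MPETSc.setPreallocationNNZ(5)
MPETSc.setUp()
MPETSc.setLGMap(rmap, cmap)

# sanity check: each process must own a consecutive range of rows
Istart, Iend = MPETSc.getOwnershipRange()
assert list(Vrdofmap.dofs()) == range(Istart, Iend)

# fill the same banded test pattern as in the original example
for I in xrange(Istart, Iend):
    MPETSc[I,I] = I
    if I-1 >= 0: MPETSc[I,I-1] = 1.0
    if I-2 >= 0: MPETSc[I,I-2] = -1.0
    if I+1 < Vc.dim():  MPETSc[I,I+1] = 1.0
    if I+2 < Vc.dim():  MPETSc[I,I+2] = -1.0
MPETSc.assemblyBegin()
MPETSc.assemblyEnd()

Mdolfin = dl.PETScMatrix(MPETSc)
x2 = dl.interpolate(dl.Constant(1.0), Vc)
y2 = Mdolfin * x2.vector()   # the matvec that previously failed in parallel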

Hope it helps someone.

answered Sep 2, 2016 by BC FEniCS Novice (790 points)
selected Sep 9, 2016 by BC