I would like to assemble and solve systems of equations on separate meshes in parallel. I have code that reads an externally partitioned mesh, extracts the interface marker data, and creates FEniCS meshes and MeshFunctions marking the interfaces. I do this in parallel using mpi4py, but when I try to assemble the systems on the separate meshes on each process, I get "PETSc error code is: 63".
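Schematically, each process does something like the sketch below (the MeshEditor route and the hard-coded two-triangle data are just stand-ins for what I actually read from the partition files):

```python
from mpi4py import MPI
from dolfin import (Mesh, MeshEditor, MeshFunction, Point, FunctionSpace,
                    TrialFunction, TestFunction, inner, grad, dx, assemble)

rank = MPI.COMM_WORLD.Get_rank()

# In the real code these arrays come from the external partitioner;
# here two triangles stand in for the rank-local piece of the mesh.
vertices = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
cells = [[0, 1, 2], [0, 2, 3]]

# Build the rank-local mesh by hand.
mesh = Mesh()
editor = MeshEditor()
editor.open(mesh, "triangle", 2, 2)   # topological dim 2, geometric dim 2
editor.init_vertices(len(vertices))
editor.init_cells(len(cells))
for i, (x, y) in enumerate(vertices):
    editor.add_vertex(i, Point(x, y))
for i, c in enumerate(cells):
    editor.add_cell(i, c)
editor.close()

# Facet function marking the interfaces (markers omitted here).
interfaces = MeshFunction("size_t", mesh, mesh.topology().dim() - 1, 0)

# Assembling on the rank-local mesh is the step that fails with
# PETSc error 63 when the script is run under mpirun.
V = FunctionSpace(mesh, "CG", 1)
u, v = TrialFunction(V), TestFunction(V)
A = assemble(inner(grad(u), grad(v)) * dx)
```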
Error code 63 is an index-out-of-bounds error, and I think it comes from FEniCS partitioning the domain on top of my own partitioning. Is there a way to disable the automatic partitioning that would normally occur when you call, for example, UnitSquareMesh() in a script run with mpirun -np 4 ...?
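For reference, this is the kind of behaviour I would like to switch off; if I understand correctly, each rank ends up owning only a fraction of the cells of a built-in mesh:

```python
from dolfin import MPI, UnitSquareMesh

# Run with: mpirun -np 4 python this_script.py
mesh = UnitSquareMesh(16, 16)   # 512 cells in total

# As far as I understand, num_cells() reports only the locally owned
# cells, so with 4 processes each rank prints roughly 512/4 = 128 here.
print("rank %d owns %d cells" % (MPI.rank(mesh.mpi_comm()), mesh.num_cells()))
```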
Thanks,
Ben