
Is there a way to disable the mesh partitioning when running in MPI mode?


I would like to assemble and solve systems of equations on separate meshes in parallel. I have code that reads an externally partitioned mesh, extracts interface marker data, and creates FEniCS meshes and MeshFunctions marking the interfaces. I do this in parallel using mpi4py, but when I try to assemble the systems on the separate meshes on each processor, I get "PETSc error code is: 63".

This is an index out of bounds error, and I think it is coming from FEniCS partitioning the domain on top of my own partitioning. Is there a way to disable the automatic partitioning that would normally occur when you call UnitSquareMesh(), for example, in a script run with mpirun -np 4 ...
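For concreteness, here is the kind of per-process setup I have in mind. This is just an untested sketch; I am guessing that constructing each mesh on mpi_comm_self() might be the relevant hook for keeping everything local to one process:

    from dolfin import (Mesh, UnitSquareMesh, FunctionSpace, TrialFunction,
                        TestFunction, inner, grad, dx, assemble, mpi_comm_self)

    # Built-in mesh constructed on the self communicator: each process gets
    # its own complete copy, so it is not partitioned across processes.
    mesh = UnitSquareMesh(mpi_comm_self(), 32, 32)

    # An empty mesh on mpi_comm_self() could likewise be filled per process
    # from my externally partitioned data (e.g. via a MeshEditor).
    local_mesh = Mesh(mpi_comm_self())

    # Assembly then happens independently on each process.
    V = FunctionSpace(mesh, "CG", 1)
    u, v = TrialFunction(V), TestFunction(V)
    A = assemble(inner(grad(u), grad(v)) * dx)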

Thanks,
Ben

asked Apr 13, 2016 by brk888 FEniCS Novice (990 points)

I believe code for (something like) this just landed in the master branch, see https://bitbucket.org/fenics-project/dolfin/pull-requests/270/various-fixes-for-solving-different-pdes/diff

I am not sure how this would interact with your own mesh partitioning, but it seems that you should be able to use different meshes on different subsets of processors if you let dolfin handle the partitioning...
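To illustrate what I mean, something along these lines might work. It is untested, and it assumes your dolfin build's Python layer accepts mpi4py communicators; the idea is just to split the world communicator and give each group of processes its own mesh, which dolfin then partitions only within that group:

    from mpi4py import MPI
    from dolfin import UnitSquareMesh, FunctionSpace

    world = MPI.COMM_WORLD

    # Split the processes into two groups (even and odd ranks).
    color = world.rank % 2
    subcomm = world.Split(color, world.rank)

    # Each group builds its own mesh; dolfin partitions it only across the
    # processes in that group. The resolutions here are just placeholders.
    n = 16 if color == 0 else 32
    mesh = UnitSquareMesh(subcomm, n, n)
    V = FunctionSpace(mesh, "CG", 1)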

I am pretty sure that few people other than the author of the linked pull request have tested this, so you will probably have to do some investigation to see whether it is sufficient for your needs.

...