MPI support is integrated into FEniCS, as explained in the following discussion:
https://fenicsproject.org/qa/3025/solving-a-problem-in-parallel-with-mpi-&-petsc
What if, instead of breaking a single FEniCS problem into pieces, I want to run several similar FEniCS problems in parallel? Let's say there is a parameter I need to adjust over and over again, and I want to parallelize the process of running different parameter values (for example, the diffusion equation with different diffusion constants). I want to use MPI, but NOT to break up the FEniCS problem itself. I can't find a way to send/receive messages between processes through dolfin, and importing mpi4py conflicts with dolfin (see below). Suggestions?
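To make the goal concrete, here is a minimal sketch of the kind of per-rank parameter sweep I'm after. The helper `params_for_rank` is just illustrative; under mpi4py, `rank` and `size` would come from `MPI.COMM_WORLD.Get_rank()` / `Get_size()`, and each rank would then run a complete (serial) FEniCS solve for each of its assigned values:

```python
def params_for_rank(params, rank, size):
    """Round-robin assignment of parameter values to one rank.

    With MPI, each rank would call this with its own rank number and
    the communicator size, then solve one full problem per value --
    no domain decomposition involved.
    """
    return params[rank::size]

# Example: 5 diffusion constants spread over 3 ranks.
diffusion_constants = [1e-3, 2e-3, 5e-3, 1e-2, 2e-2]
for rank in range(3):
    print(rank, params_for_rank(diffusion_constants, rank, 3))
```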
In my script I have the following:
from dolfin import *
from mpi4py import MPI
comm = MPI.COMM_WORLD
Running mpirun -n 3 myscript.py in the terminal produces the following error.
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
...
= EXIT CODE: 6
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Aborted (signal 6)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions
I'm not too surprised, as FEniCS already includes its own MPI implementation, and that seems to be interfering with mpi4py. dolfin does have mpi_comm_world(), but as far as I can tell it exposes no functions to send or receive messages.