Hi,
I just noticed that running jobs in parallel doesn't work correctly: the MWE below never reports the correct MPI rank and world size. Instead, it looks as if mpirun launches N independent copies of the code, each believing it is rank 0 in a world of size 1.
from __future__ import absolute_import, print_function
from dolfin import *

# Query rank and size of the world communicator via DOLFIN's MPI wrappers
mpi_rank = MPI.rank(mpi_comm_world())
mpi_size = MPI.size(mpi_comm_world())
print("rank / size: {0} / {1}".format(mpi_rank, mpi_size))
Output:
fenics@5d9c83c65909:~/shared/scratch/mpi_test$ python mpi_test.py
rank / size: 0 / 1
fenics@5d9c83c65909:~/shared/scratch/mpi_test$ mpirun -n 4 python mpi_test.py
rank / size: 0 / 1
rank / size: 0 / 1
rank / size: 0 / 1
rank / size: 0 / 1
I believe I saw other code running in parallel correctly, but that was a few weeks ago, before I pulled the latest stable Docker image.
Can anyone reproduce this? Or what am I doing wrong here?