The solution is to launch a second mpirun with -n 1 from inside the parallel run, e.g. via subprocess. This way I can solve an FEM problem with mpirun -n 12 on 12 processes, then compute a SubMesh with the correct topological dimension IN SERIAL, and finally import the SubMesh again in parallel (as at the end of the code snippet below) and interpolate on it or do something else; a small interpolation sketch follows the snippet.
import dolfin as df
import os
import subprocess

comm = df.mpi_comm_world()
rank = df.MPI.rank(comm)

if rank == 0:
    # Only rank 0 launches the serial helper script.
    os.chdir('"change to script directory"')
    subprocess.call('mpirun -n 1 python interp_tools.py', shell=True)
else:
    pass

# Wait until the serial run has written the mesh file.
df.MPI.barrier(comm)
computed_SubMesh = df.Mesh('"mesh directory"/test_line_x.xml')
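For example, interpolating onto the re-imported line mesh then works as usual in parallel. This is only a minimal sketch, assuming the same 2016/2017-era dolfin API as the snippet above; the Expression is just a placeholder for whatever data you actually want on the line:

# Minimal sketch: build a function space on the 1D line mesh loaded above
# and interpolate a placeholder Expression onto it.
V_line = df.FunctionSpace(computed_SubMesh, 'CG', 1)
u_line = df.interpolate(df.Expression('x[0]', degree=1), V_line)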
The script "interp_tools.py" I'm calling looks like this:
import dolfin as df
import os


def calc_x_line_mesh(out_name, start, stop, y=0, z=0, dim=1e4,
                     ne=100, tol=1e-8, force_calc=False):
    # Only rebuild the line mesh if it does not exist yet or if forced.
    if (os.path.isfile('../meshes/' + out_name + '_line_x.xml') is False or
            force_calc is not False):
        # Thin box around the requested x-interval; 'dim' makes it large in y and z.
        slice_mesh = df.BoxMesh(df.Point(start, -dim, -dim),
                                df.Point(stop, dim, dim), ne, 2, 2)
        # First reduction: 3D box -> 2D boundary mesh, keep the face at y = -dim.
        boundary_mesh = df.BoundaryMesh(slice_mesh, 'exterior')
        cc = df.CellFunction('size_t', boundary_mesh, 0)
        x1 = df.AutoSubDomain(lambda x: x[1] < -dim + tol)
        x1.mark(cc, 1)
        x_l = df.SubMesh(boundary_mesh, cc, 1)
        x_l.coordinates()[:, 1] += dim + y
        print('here')
        # Second reduction: 2D face -> 1D boundary mesh, keep the edge at z = -dim.
        boundary_mesh2 = df.BoundaryMesh(x_l, 'exterior')
        cc2 = df.CellFunction('size_t', boundary_mesh2, 0)
        x2 = df.AutoSubDomain(lambda x: x[2] < -dim + tol)
        x2.mark(cc2, 2)
        line_x = df.SubMesh(boundary_mesh2, cc2, 2)
        line_x.coordinates()[:, 2] += dim + z
        print('here2')
        # Write the resulting 1D line mesh to XML so it can be re-read in parallel.
        df.File('../../meshes/lines/' + out_name +
                '_line_x.xml') << line_x
        print('here3')


calc_x_line_mesh('test', -250.0, 250.0)
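As a quick sanity check (just a sketch, assuming the script is run from a directory where the relative output path above resolves), the written mesh can be read back in the same serial script and its topological dimension inspected:

# Read the generated line mesh back and confirm it is topologically 1D.
line = df.Mesh('../../meshes/lines/test_line_x.xml')
print(line.topology().dim(), line.num_cells())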
Maybe the idea is useful for other FEniCS users. Regards!