Hello,
I am having issues with the direction of the facet normals in interior facet integration on 3D meshes when the code is run in parallel.
For 2D meshes, this issue can be alleviated by specifying parameters['ghost_mode'] = 'shared_facet' (or 'shared_vertex'), as answered in the similar question 'Interior facet integration not working properly in parallel'.
However, when I run a 3D mesh in parallel, it seems that the interior facet integration (i.e. dS) does not have the cell index information necessary to define the facet normal directions '+'/'-' across the interface in a globally consistent manner. The global normal direction is defined using a cell marker for the mesh together with a "dummy" volume integration term in the form, i.e. Constant(0)*dx(99, domain=mesh, subdomain_data=cell_f), as specified in the assembly script. This only happens when the mesh is partitioned such that at least part of the interior facet lies on a process boundary, and setting the 'ghost_mode' parameter to either 'shared_facet' or 'shared_vertex' does not seem to fix the issue for 3D meshes.
The issue was first encountered in FEniCS 2016.2.0. I then found that the new version 2017.1.0 had been released, but the issue still remains (although in the new version I now get an error if I try to run the code in parallel with ghost mode set to 'none').
I have created the following minimal working example that reproduces the issue when run with mpirun -np 2 python code_below.py:
from dolfin import *
parameters['ghost_mode'] = 'shared_facet'
# Unit cube mesh, refined once
mesh = BoxMesh(Point(0, 0, 0), Point(1, 1, 1), 12, 4, 3)
mesh = refine(mesh)
# Mark the interior interface x = 0.5
facet_f = FacetFunction('size_t', mesh, 0)
CompiledSubDomain('near(x[0], 0.5)').mark(facet_f, 1)
# Cell marker used to define the '+'/'-' sides across the interface globally
cell_f = CellFunction('size_t', mesh, 0)
CompiledSubDomain('x[0] - 0.5 + 1e-6 >= 0').mark(cell_f, 1)
n = FacetNormal(mesh)
# Interface integral of the x-component of n('-'), plus a dummy volume term
# so that the cell markers are attached to the form
form = n('-')[0]*dS(domain=mesh, subdomain_data=facet_f, subdomain_id=1) + \
       Constant(0)*dx(99, domain=mesh, subdomain_data=cell_f)
value = assemble(form)
print("integral of n('-')_x over the interface (should be 1.0):", value)
plot(cell_f)
interactive()
When the above code is run on a single processor I correctly obtain the value 1.0 (actually 0.9999999999999986, but that is close enough for me).
However, using two processors (mpirun -np 2) I get 0.6666666666666664 instead of 1.0.
The mesh is partitioned right at the interface in this case; see the plot figure:
Using between 3 and 5 processors I again get 1.0 (notably, the mesh is then partitioned such that the complete interface lies on a single processor for most of these cases). However, with 6 processors I again get 0.6666.
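To narrow down whether interface facets are being skipped or the '+'/'-' sides are just assigned inconsistently across process boundaries, one extra check could be assembled alongside the form above (a sketch reusing exactly the objects and legacy API from the example, nothing new):
area_form = avg(Constant(1.0))*dS(domain=mesh, subdomain_data=facet_f, subdomain_id=1) + \
            Constant(0)*dx(99, domain=mesh, subdomain_data=cell_f)
# The marked interface is the plane x = 0.5 of the unit cube, so this should
# print 1.0 on any number of processes. If it does, all interface facets are
# being integrated and the discrepancy above is purely an orientation
# ('+'/'-') issue; if it is also around 2/3, some facets are not visited at all.
print('interface area (expected 1.0):', assemble(area_form))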
I see similar behavior in my more complex code, which in addition to the facet normals also involves surface integration of the solution of a 3D FEM problem: for one processor and some odd numbers of processors I get the correct results, while for other processor counts I get smaller values. The mesh is partitioned in a similar manner in those examples.
This behavior seems erratic and strange to me, and I would be very happy to receive some clarification on whether this is the intended behavior and I have missed something important for parallel computation.
Thanks in advance
Johan