
Interior facet integration with facet normals not working properly in parallel for 3D meshes


Hello,

I am having issues with the direction of the facet normals in interior facet integration on 3D meshes when the code is run in parallel.

For 2D meshes, this issue can be alleviated by specifying parameters['ghost_mode'] = 'shared_facet' (or 'shared_vertex'), as answered in the similar question 'Interior facet integration not working properly in parallel'.

However, when I run a 3D mesh in parallel, the interior facet integration (i.e. dS) does not seem to have the cell index information necessary to define the facet normal directions '+'/'-' consistently across the interface. The global normal direction is defined using a cell marker for the mesh together with a "dummy" volume integration term in the form, i.e. Constant(0)*dx(99, domain=mesh, subdomain_data=cell_f), as specified in the assembly script. The problem only appears when the mesh is partitioned such that at least part of the interior facet lies on a processor boundary, and setting the 'ghost_mode' parameter to either facet or vertex does not seem to fix the issue for 3D meshes.
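For reference, the trick in isolation looks like the sketch below (same names as in the MWE further down). If I understand the convention correctly, attaching the cell markers via the dummy dx term is what orients the restrictions, so that with this marking n('-') is the outward normal of the marker-0 side:

# Sketch of the orientation trick (names as in the MWE below).
# Without the dummy dx term the '+'/'-' sides of dS are arbitrary per facet;
# with it, the sides follow the cell markers, so n('-') here should be the
# outward normal of the cells marked 0 (x < 0.5), i.e. +e_x at x = 0.5.
form = n('-')[0]*dS(domain=mesh, subdomain_data=facet_f, subdomain_id=1) + \
    Constant(0)*dx(99, domain=mesh, subdomain_data=cell_f)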

The issue was first encountered in FEniCS 2016.2.0. I then found that the new version 2017.1.0 had been released. However, the issue still remains (and in the new version I get an error if I have ghost mode set to 'none' when I try to run the code in parallel).

I have created the following minimal working example that reproduces the issue using mpirun -np 2 python code_below.py:

from dolfin import *
parameters['ghost_mode'] = 'shared_facet'

mesh = BoxMesh(Point(0, 0, 0), Point(1, 1, 1), 12, 4, 3)
mesh = refine(mesh)

# Mark the interior interface x = 0.5
facet_f = FacetFunction('size_t', mesh, 0)
CompiledSubDomain('near(x[0], 0.5)').mark(facet_f, 1)

# Mark the cells on the x > 0.5 side; these markers fix the global '+'/'-' orientation
cell_f = CellFunction('size_t', mesh, 0)
CompiledSubDomain('x[0] - 0.5 + 1e-6 >= 0').mark(cell_f, 1)

n = FacetNormal(mesh)

# The dummy dx(99) term attaches the cell markers to the form
form = n('-')[0]*dS(domain=mesh, subdomain_data=facet_f, subdomain_id=1) + \
    Constant(0)*dx(99, domain=mesh, subdomain_data=cell_f)
value = assemble(form)
print('normal -n_x (this should be equal to 1.0):', value)

plot(cell_f)
interactive()

When the above code is run on a single processor I correctly obtain the value 1.0 (actually 0.9999999999999986, but that is close enough for me).
However, using two processors (mpirun -np 2) I get 0.6666666666666664 instead of 1.0.
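A value below 1.0 could mean either that some facets are skipped or that some are assembled with swapped '+'/'-' sides (a swapped facet subtracts its area instead of adding it). A quick orientation-independent check, reusing the objects from the MWE, is to assemble the interface area:

# The interface area should be 1.0 regardless of orientation. If the
# area is correct while the n('-')[0] integral is not, facets are being
# assembled with swapped sides rather than dropped.
area_form = avg(Constant(1.0))*dS(domain=mesh, subdomain_data=facet_f, subdomain_id=1) + \
    Constant(0)*dx(99, domain=mesh, subdomain_data=cell_f)
print('interface area (should be 1.0):', assemble(area_form))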
The mesh is partitioned right at the interface in this case; see the plot:
[Figure: plot(cell_f) using ghost mode 'shared_facet' and two processors]

Using between 3 and 5 processors I again get 1.0 (notably, for most of these cases the mesh is partitioned such that the complete interface lies on a single processor). However, with 6 processors I again get 0.6666.
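To see where the cuts fall without the interactive plot, each cell can be tagged with its owner rank and written to file (a diagnostic sketch using the 2017-era API; the filename is arbitrary):

rank = MPI.rank(mesh.mpi_comm())
owner = CellFunction('size_t', mesh, rank)  # every local cell tagged with its process rank
File('partition.pvd') << owner              # inspect in ParaView to see the partition cuts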

I find similar results in my more complex code, which in addition to the facet normals also involves surface integration of the solution of a 3D FEM code: for one processor, and for some odd processor counts, I get the correct results, while for other processor counts I get smaller values. The mesh is partitioned in a similar manner in those examples.
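For concreteness, the interface integrals in the larger code have roughly the shape below (a hypothetical sketch: u and V are stand-ins for the actual FEM solution, reusing the MWE's mesh and markers):

V = FunctionSpace(mesh, 'CG', 1)
u = interpolate(Expression('x[0]*x[0]', degree=2), V)  # stand-in for the FEM solution
# Same dummy dx term as before so the '+'/'-' sides follow the cell markers
flux_form = dot(avg(grad(u)), n('-'))*dS(domain=mesh, subdomain_data=facet_f, subdomain_id=1) + \
    Constant(0)*dx(99, domain=mesh, subdomain_data=cell_f)
print('interface flux:', assemble(flux_form))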

This behavior seems erratic and strange to me, and I would be very happy to receive some clarification on whether this is the intended behavior and I have missed something important for parallel computation.

Thanks in advance
Johan

asked May 29, 2017 by johwing FEniCS Novice (120 points)

Hi, a similar bug is observed here.

For the 2D case, the bug seems to have been fixed in the new FEniCS release 2017.1.0 by requiring that ghost_mode be set to 'shared_facet'/'shared_vertex' instead of 'none' ('none' appears to be the default otherwise); failure to set this parameter results in an error when the code is run in parallel. However, setting the ghost mode does not seem to have the same effect for the 3D meshes that I work with. Unless I am missing something in my problem description and parameter setup, I really think that the ghost mode should work in a similar manner for 3D meshes as well.

Note that in FEniCS 2016.2.0 I could reproduce this issue in 2D by setting ghost mode to 'none' in the following code, run in parallel on 2 cores:

from dolfin import *
parameters['ghost_mode'] = 'none'  # 'shared_facet' fixes the 2D case

mesh = UnitSquareMesh(12, 3)
mesh = refine(mesh)

# Mark the interior interface x = 0.5 and the cells on the x > 0.5 side
facet_f = FacetFunction('size_t', mesh, 0)
CompiledSubDomain('near(x[0], 0.5)').mark(facet_f, 1)
cell_f = CellFunction('size_t', mesh, 0)
CompiledSubDomain('x[0] - 0.5 > 0').mark(cell_f, 1)

n = FacetNormal(mesh)
form = n('-')[0]*dS(domain=mesh, subdomain_data=facet_f, subdomain_id=1) + \
    Constant(0)*dx(99, domain=mesh, subdomain_data=cell_f)
value = assemble(form)
print('-n_x value (should be 1.0)', value)

plot(cell_f)
interactive()

However, in the new version the same code produces an error unless I change the ghost mode, and with the ghost mode set the correct result is found for any number of processors.

...