I found that Functionals assembled over boundaries segfault when run in parallel, now that I have updated to 1.4 (there was no issue in 1.2).
Modifying demo/undocumented/functional/cpp slightly produces the segfault.
In main() I add:
dolfin::parameters["num_threads"] = 8;
In the form file, I changed the functional to use the FacetNormal:
element = FiniteElement("Lagrange", triangle, 2)
n = FacetNormal(triangle)  # new line
v = Coefficient(element)
M = Dx(v, i)*n[i]*ds  # modified only to use the FacetNormal on the facet
Building and running the demo with
$ ffc -l dolfin EnergyNorm.ufl && make && ./demo_functional
produces the segfault:
FFC finished in 0.12523 seconds.
Scanning dependencies of target demo_functional
[100%] Building CXX object CMakeFiles/demo_functional.dir/main.cpp.o
Linking CXX executable demo_functional
[100%] Built target demo_functional
*** Warning: Form::coloring does not properly consider form type.
Coloring mesh.
[ubuntu:03700] *** Process received signal ***
[ubuntu:03700] Signal: Segmentation fault (11)
[ubuntu:03700] Signal code: Address not mapped (1)
[ubuntu:03700] Failing at address: (nil)
Segmentation fault (core dumped)
Is this functionality no longer supported? I had previously been using it to integrate a temperature gradient along a wall.
I tried to work around it by using SpatialCoordinate(triangle) instead, but the result was the same.
I found an old workaround that lets me proceed just fine (the boundary assembly is cheap):
https://answers.launchpad.net/dolfin/+question/173088
I set the number of threads to zero just before the assembly and restore it to the original value just afterwards.
I believe this is my issue:
https://bitbucket.org/fenics-project/dolfin/issue/326/intermittent-failures-with-multiple
I am now trying to compile PETSc with OpenMP support.
For now I have all assembly occurring on a single thread, which has resolved the segfault. It would be nice to have OpenMP parallelization of assembly; in my case it is by far the most expensive step.