Segfault on assembly of facets with 1.4? (with repro)

+1 vote

I found that functionals assembled over boundaries throw a segfault when run in parallel, now that I have updated to 1.4 (I had no issue in 1.2).

Modifying demo/undocumented/functional/cpp slightly produces the segfault.

In main(), I add:

dolfin::parameters["num_threads"] = 8;

In the form file, I updated the functional to use the FacetNormal:

element = FiniteElement("Lagrange", triangle, 2)
n = FacetNormal(triangle)     # New line
v = Coefficient(element)
M = Dx(v, i)*n[i]*ds    # Modified to use the FacetNormal on the facet

Running the demo simply with

$ ffc -l dolfin EnergyNorm.ufl && make && ./demo_functional

produces the segfault:

FFC finished in 0.12523 seconds.
Scanning dependencies of target demo_functional
[100%] Building CXX object CMakeFiles/demo_functional.dir/main.cpp.o
Linking CXX executable demo_functional
[100%] Built target demo_functional
*** Warning: Form::coloring does not properly consider form type.
Coloring mesh.
[ubuntu:03700] *** Process received signal ***
[ubuntu:03700] Signal: Segmentation fault (11)
[ubuntu:03700] Signal code: Address not mapped (1)
[ubuntu:03700] Failing at address: (nil)
Segmentation fault (core dumped)

Is this functionality no longer supported? I had been using it to integrate a temperature gradient along a wall previously.

I tried to work around it by using SpatialCoordinate(triangle), but the result was the same.


I found an old workaround that's letting me proceed just fine (the boundary assembly is cheap):

https://answers.launchpad.net/dolfin/+question/173088

I set the number of threads to zero just before the assembly and restore it to the original value just after.
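
Roughly, that looks like the following sketch, where M stands for the compiled scalar functional form from the .ufl file above:

  // Drop to serial assembly for the facet functional, then restore the
  // original thread count afterwards.
  const int saved_num_threads = dolfin::parameters["num_threads"];
  dolfin::parameters["num_threads"] = 0;
  const double value = dolfin::assemble(M);
  dolfin::parameters["num_threads"] = saved_num_threads;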


https://bitbucket.org/fenics-project/dolfin/issue/326/intermittent-failures-with-multiple

I believe this is my issue; I am now trying to compile PETSc with OpenMP support.


For now I have all assembly occurring on a single thread, which has resolved the segfault. It would be nice to have OpenMP parallelization of assembly; in my case it is by far the most expensive step.

asked Jul 15, 2014 by Charles FEniCS User (4,220 points)
edited Jul 16, 2014 by Charles

1 Answer

+1 vote
 
Best answer

This was odd, but simply constructing a SystemAssembler and calling its assemble member works well with multiple threads (MPI and OpenMP) with SLEPc.

I would get segfaults on numerous forms whenever calling the dolfin::assemble() functions directly. Replacing those calls removed the segfaults. I didn't find a way to replace the helper for assembling a scalar, so I just forced that assembly down to one thread, and it works well enough (the boundary assembly is quick).
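
For reference, the replacement looks roughly like this (a sketch from memory; a, L, and bc are the usual bilinear form, linear form, and DirichletBC, and the exact constructor signature should be checked against the DOLFIN 1.4 headers):

  // Assemble the full system with SystemAssembler instead of the free
  // dolfin::assemble() helpers.
  dolfin::SystemAssembler assembler(a, L, bc);
  dolfin::Matrix A;
  dolfin::Vector b;
  assembler.assemble(A, b);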

Overall, I am floored by how smooth the move to MPI has been. Thank you for an amazing framework!

answered Jul 20, 2014 by Charles FEniCS User (4,220 points)

Yes, I have the same issue.

I use PETSc:

parameters["linear_algebra_backend"] = "PETSc";

If I run a solver like this:

  LinearVariationalProblem problem(a, L, u, bc);
  LinearVariationalSolver solver(problem);
  solver.parameters["linear_solver"] = "gmres";
  solver.parameters["preconditioner"] = "ilu";
  solver.solve();

or like this:

solve(a == L, u, bc);

I don't get an error.

But when I assemble the matrix and vector myself and then solve, the solver fails:

  Matrix A;
  Vector b;

  assemble(A, a);
  assemble(b, L);

  solve(A, *u.vector(), b, "gmres", "default");

Error message:

location: location = "PETScKrylovSolver.cpp"
task: task = "solve linear system using PETSc Krylov solver"
reason: reason = "Solution failed to converge in 10000 iterations (PETSc reason DIVERGED_ITS, residual norm ||r|| = 2.446559e-003)"
mpi_rank = -1

That's a different issue where the Krylov Solver failed to converge.

But if I solve the same problem like this:

  LinearVariationalProblem problem(a, L, u, bc);
  LinearVariationalSolver solver(problem);
  solver.parameters["linear_solver"] = "gmres";
  solver.parameters["preconditioner"] = "ilu";
  solver.solve();

the solution converges successfully. I think the problem is with assemble/solve.

This is unrelated to the original question, but try a different solver/preconditioner, such as:

solve(A, *u.vector(), b, "bicgstab", "bjacobi");

Yes, you're right. Thank you very much!

I simply forgot to add "bc.apply(A, b);" after assembling, so the problem didn't converge.
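
In full, the corrected sequence from the code above is:

  Matrix A;
  Vector b;

  assemble(A, a);
  assemble(b, L);
  bc.apply(A, b);   // the missing step: apply the Dirichlet BC to the assembled system

  solve(A, *u.vector(), b, "gmres", "default");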

However, I can't solve the incompressible Navier-Stokes demo even with a direct LU solver and the PETSc linear algebra backend with OpenMP threads.

parameters["linear_algebra_backend"] = "PETSc"

link to the code:
http://fenicsproject.org/documentation/dolfin/1.5.0/cpp/demo/documented/navier-stokes/cpp/documentation.html

If I set another linear algebra backend, like uBLAS, the problem solves easily.

parameters["linear_algebra_backend"] = "uBLAS";

But what about PETSc: do you use it with OpenMP threads or not?

...