This is a read-only copy of the old FEniCS QA forum. Please visit the new QA forum to ask questions.

Distributed Expressions with Pointwise evaluations require allow_expression due to ghosts?

0 votes

One of my expressions, used in a boundary condition, performs a pointwise evaluation of a Function from a different FunctionSpace at the current coordinate (i.e., a CG4 density evaluated at the location of a CG3 velocity).

There are no issues until the mesh becomes fairly large (a 150x150 UnitSquare, for example), at which point I start getting extrapolation errors:

*** Error: Unable to evaluate function at point.
*** Reason: The point is not inside the domain. Consider setting "allow_extrapolation" to allow extrapolation.
*** Where: This error was encountered inside Function.cpp.

It appears this is because the evaluation fails to find a local cell containing the point, since that cell is owned by a different rank. The point does, however, lie within a ghost cell - should I simply enable extrapolation to extrapolate into the ghost cell, or is it possible to evaluate within the actual ghost cell?

asked Jul 22, 2014 by Charles FEniCS User (4,220 points)

1 Answer

+1 vote
 
Best answer

In the development version, you can now have distributed meshes with ghost cells. Add the line

parameters["ghost_mode"] = "shared_vertex"

at the top of your program.

It can also be the case that you're evaluating functions on an exterior boundary, where finite-precision arithmetic can mean you need to allow extrapolation.
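Putting both suggestions together, a minimal sketch might look like the following (assuming the legacy DOLFIN Python interface on a dev build with ghost support; the mesh size and the CG4 density space are placeholders taken from the question):

```python
from dolfin import *

# Must be set before the mesh is created so ghost cells are built
parameters["ghost_mode"] = "shared_vertex"

# Tolerate finite-precision round-off when evaluating on the exterior boundary
parameters["allow_extrapolation"] = True

mesh = UnitSquareMesh(150, 150)
Q = FunctionSpace(mesh, "CG", 4)  # e.g. the density space from the question
rho = Function(Q)

# Pointwise evaluations of rho near partition boundaries should now find
# the point either in a locally owned cell or in a ghost cell.
```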

answered Jul 27, 2014 by Garth N. Wells FEniCS Expert (35,930 points)
selected Jul 27, 2014 by Charles

Excellent, thank you! I've been following the commits and the work there is very exciting (truly!).

My code runs well both locally on Ubuntu and on a large cluster (30k+ nodes available), but I've noticed the MPI speed-up on assembly is not great - should I expect a big change from the ghost_mode assembly? Intuitively, I'm guessing assembly is currently being done globally on each node. (In my case, assembly is 70% of my execution time due to a coefficient in two of my bilinear terms.)

The other question (sorry to piggyback on a piggyback question): is dev fairly stable at the moment?

In case it helps diagnose whether the exterior-boundary failure is due to floating-point arithmetic: it mostly occurs when the square mesh is finer than 30x30 with fourth-order elements.

Assembly should scale very well. Send sample code and a description of what performance you see to fenics@fenicsproject.org.

Wow, big difference when I test with dev! :)

Everything works splendidly except for refinement with the ghosts - is there a way to rebuild the ghosts after refinement?

Perhaps bring the mesh to rank 0, do the refinement, and then partition it again? I poked around the source a bit and think I found the steps needed to partition, but I couldn't find a way to bring the mesh back to one process without the ghosts.

Perhaps I could keep a 'local' copy of the entire mesh on rank 0, refine that, and then use a copy of it for partitioning after refinement?

*** -------------------------------------------------------------------------
*** Error:   Unable to build ghost mesh.
*** Reason:  Ghost cell information not available.
*** Where:   This error was encountered inside MeshPartitioning.cpp.
*** Process: unknown
*** 
*** DOLFIN version: 1.4.0+
*** Git changeset:  cae05875f2e3c3b987d10c4a0735235abcb8d71b
*** -------------------------------------------------------------------------

which happens when I call refine:

  void refine(Mesh& refined_mesh, const Mesh& mesh,
              const MeshFunction<bool>& cell_markers, bool redistribute = true);

with redistribute = false; when redistribute = true, I get a segfault during solves.

Local refinement is okay for me, as the way I mark the mesh for refinement tends to keep each local mesh at about the same DOF count as the others.

Note that refinement worked in parallel with MPI before ghosts were enabled (though assembly performed poorly without them).

Register this as an Issue on Bitbucket (with simple but complete code to reproduce the error).

Will do, thank you. I think I am jumping ahead of the development, rather than this being a bug.

I revisited this issue. In case someone else ends up here: the workaround I used was to set ghost_mode to "none" during refinement, and back to "shared_vertex" afterwards.
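A minimal sketch of that workaround (assuming the legacy DOLFIN Python interface on a dev build with ghost support; the mesh size and the cell-marking logic are placeholders):

```python
from dolfin import *

# Ghosted mesh for assembly
parameters["ghost_mode"] = "shared_vertex"
mesh = UnitSquareMesh(30, 30)

# ... assemble and solve on the ghosted mesh ...

# Workaround: disable ghosts while refining ...
parameters["ghost_mode"] = "none"
markers = CellFunction("bool", mesh, False)
# ... mark the cells to refine here (placeholder) ...
mesh = refine(mesh, markers)

# ... then restore ghost mode for subsequent assembly
parameters["ghost_mode"] = "shared_vertex"
```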

...