
evaluate function at point when using MPI parallelisation

0 votes

What is the right way to evaluate the value of a function when using MPI?

Normally (single process) you just call the function. e.g. adapting demo_poisson.py from the standard demos, to evaluate the solution at the centre point (0.5,0.5):

# Compute solution
u = Function(V)
solve(a == L, u, bc)
print "value at centre = " + str(u(0.5,0.5)) + "  [" + str(MPI.rank(mpi_comm_world())) + "]"

On a single process, that returns "value at centre = 0.251894786516 [0]".

But if I run it as "mpirun -np 4 python demo_poisson.py", it gives an error:

value at centre = 0.251894786516  [3]

(processes 0, 1 and 2 print identical interleaved tracebacks; deduplicated here)

Traceback (most recent call last):
  File "demo_poisson_MPI.py", line 63, in <module>
    print "value at centre = " + str(u(0.5,0.5)) + "  [" + str(MPI.rank(mpi_comm_world())) + "]"
  File "/usr/lib/python2.7/dist-packages/dolfin/functions/function.py", line 601, in __call__
    self.eval(values, x)
RuntimeError:

*** -------------------------------------------------------------------------
*** DOLFIN encountered an error. If you are not able to resolve this issue
*** using the information listed below, you can ask for help at
***
***     fenics@fenicsproject.org
***
*** Remember to include the error message listed below and, if possible,
*** include a *minimal* running example to reproduce the error.
***
*** -------------------------------------------------------------------------
*** Error:   Unable to evaluate function at point.
*** Reason:  The point is not inside the domain. Consider calling "Function::set_allow_extrapolation(true)" on this Function to allow extrapolation.
*** Where:   This error was encountered inside Function.cpp.
*** Process: 2
*** 
*** DOLFIN version: 1.6.0
*** Git changeset:  unknown
*** -------------------------------------------------------------------------

In one sense this makes sense: each process handles a subdomain of the mesh, so the centre point lies inside the subdomain of only one process (process 3 here). For the other processes the point falls outside their subdomain.

But suppose I need to inspect the value of the solution at a particular point. I don't necessarily know beforehand which process handles the subdomain that the point lies in. So how can I get the value without triggering the error messages on the wrong processes?

The error message mentions "set_allow_extrapolation(true)", but that doesn't help: each process simply extrapolates from its own subdomain to a point that may in general be far outside it, so the evaluation can be badly wrong. If I do try it:

solve(a == L, u, bc)
u.set_allow_extrapolation(True)
print "value at centre = " + str(u(0.5,0.5)) + "  [" + str(MPI.rank(mpi_comm_world())) + "]"

then I confirm that the extrapolations do not match the exact value (except for process 3, which contains the point) and are different for each process:

Process 1: Building point search tree to accelerate distance queries.
Process 1: Computed bounding box tree with 979 nodes for 490 points.
value at centre = 0.265262040993  [1]
value at centre = 0.251894786516  [3]
Process 0: Building point search tree to accelerate distance queries.
Process 0: Computed bounding box tree with 985 nodes for 493 points.
value at centre = 0.251835898676  [0]
Process 2: Building point search tree to accelerate distance queries.
Process 2: Computed bounding box tree with 1063 nodes for 532 points.
value at centre = 0.261791908549  [2]

I could gather all the values of the solution into a single Vector, e.g.

import numpy

x = Vector()
u.vector().gather(x, numpy.array(range(V.dim()), "intc"))
print "value gathered at centre = " + str(x.array()[V.dim()/2]) + "  [" + str(MPI.rank(mpi_comm_world())) + "]"

But that doesn't seem the best approach. It only returns the discrete dof values, and the gathered array index says nothing about where the dof sits in the mesh, so interpolating to the point of interest is unclear. For instance, the value of x.array()[V.dim()/2] is 0.227221210641 rather than the correct value 0.251894786516.
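
The indexing problem could be patched by pairing each dof value with its coordinates and picking the local dof nearest the point. A sketch, continuing from the demo above (and assuming dofmap().tabulate_all_coordinates(), the DOLFIN 1.6 spelling; later versions rename it to V.tabulate_dof_coordinates()):

import numpy

# local dof coordinates and local dof values on this process
# (tabulate_all_coordinates is the DOLFIN 1.6 name)
gdim = mesh.geometry().dim()
coords = V.dofmap().tabulate_all_coordinates(mesh).reshape(-1, gdim)
vals = u.vector().array()

# find the local dof closest to the point of interest
p = numpy.array((0.5, 0.5))
i = numpy.argmin(numpy.linalg.norm(coords - p, axis=1))
print "nearest local dof = " + str(vals[i]) + " at " + str(coords[i])

But that only gives the nearest dof value rather than an interpolated one, and each process still reports its own local nearest dof, so it doesn't really answer the question either.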

What is the proper way to evaluate functions under MPI?

asked Jul 1, 2016 by dparsons FEniCS Novice (490 points)

2 Answers

+1 vote

You can identify the process(es) whose portion of the mesh contains the point.
Consider the following example.

from dolfin import *
from numpy import ones
set_log_level(ERROR)

rank = mpi_comm_world().rank

mesh = UnitSquareMesh(8, 8)
V = FunctionSpace(mesh, "CG", 1)

# fill u with the rank number, so each process prints a distinct value
u = Function(V)
u.vector()[:] = rank * ones(u.vector().local_size())

x = Point(0.5, 0.5)

# only evaluate on processes whose mesh partition contains the point
bbt = mesh.bounding_box_tree()
if bbt.collides_entity(x):
    print "Process {}: u(x) = {}".format(rank, u(x))
answered Jul 4, 2016 by Magne Nordaas FEniCS Expert (13,820 points)

I see it. Something like that should work for me, testing the point for each process.

Incidentally, you suggest bbt.collides_entity(x). I notice that bbt.collides(x) also works. There is a subtle difference between the two: whether the point collides with the tree (a bounding box) or with an actual entity inside the tree. When should you use collides() instead of collides_entity()?
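
For the record, the per-process test can be combined with a reduction so that every rank receives the value. A sketch (the helper name eval_at_point is mine, and it assumes dolfin's MPI.max wrapper accepts a plain float):

from dolfin import *

def eval_at_point(u, x):
    # evaluate u(x) only where the local mesh partition contains x,
    # then take the max across ranks so every process gets the value
    bbt = u.function_space().mesh().bounding_box_tree()
    local = u(x) if bbt.collides_entity(x) else float("-inf")
    return MPI.max(mpi_comm_world(), local)

A point on a partition boundary may be found by more than one process; the reduction then just picks one of the (identical up to roundoff) values. If no process finds the point, the result stays -inf, which doubles as a "not found" flag.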

0 votes

I ran into the same issue (FEniCS 2016.1), and the workaround given by Nordaas does not work for me.

Is there any better advice for evaluating a function at points when using MPI parallelisation?

answered Apr 9, 2017 by jywu FEniCS Novice (510 points)

I implemented Magne's suggestion in two steps:

from dolfin import *
from mpi4py import MPI as MPI4
import numpy

# evaluate Function f at Point point
# returns [ value, isValid ]
# where value = f(point)
# and isValid marks whether the value is valid or not
# (whether point is located in the local fragment of the mesh)
def evaluateAtPoint( f, point ):
  mesh = f.function_space().mesh()
  box = mesh.bounding_box_tree()
  isValid = box.collides_entity(point)
  if (isValid):
    value = f(point)
  else:
    value = 0
  return [ value, isValid ]

# returns a single value f(point)
# Returns 0 if the point is not contained in any part of the domain of f.
def evaluateAtTestedPoint( f, point ):
  dcomm = mpi_comm_world()
  mcomm = MPI4.COMM_WORLD
  root = 0

  valueTentative, valueValid = evaluateAtPoint(f, point)
  valueTentativeA = numpy.array([valueTentative], numpy.double)
  values = numpy.zeros(MPI.size(dcomm), numpy.double)
  mcomm.Gather(valueTentativeA, values)

  # MPI crashes if we try to gather bool values,
  # so convert validities to int instead
  valueValidA = numpy.array([int(valueValid)], numpy.int)
  valueValidities = numpy.zeros(MPI.size(dcomm), numpy.int)
  mcomm.Gather(valueValidA, valueValidities)

  # on the root process, take the value from the first process
  # that actually found the point
  testedValue = 0
  if MPI.rank(dcomm) == root:
    validIndices = numpy.where(valueValidities == 1)[0]
    if validIndices.size > 0:
      testedValue = values[validIndices[0]]

  # get the root process to share the tested value with the other processes
  testedValueA = numpy.array([testedValue], numpy.double)
  mcomm.Bcast(testedValueA)
  testedValue = testedValueA[0]

  return testedValue

It seems to work well enough, e.g. invoked as
value = evaluateAtTestedPoint(myfunc, point), where myfunc is a Function and point is a dolfin Point object.

Thinking about it, it would probably be better to have it return NaN rather than 0 if the point is not in the domain of the function.
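
A sketch of that variant (only the first helper changes):

import numpy

def evaluateAtPoint( f, point ):
  mesh = f.function_space().mesh()
  box = mesh.bounding_box_tree()
  if box.collides_entity(point):
    return [ f(point), True ]
  # NaN cannot be mistaken for a genuine zero of the solution
  return [ numpy.nan, False ]

The root-process selection by the validity mask is unchanged (testedValue should be initialised to numpy.nan as well), and if no process finds the point the broadcast result is then NaN instead of a misleading 0.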

Thank you, dparsons.

You can also use fenicstools (https://github.com/mikaem/fenicstools) to evaluate the function in parallel. See also the discussion: https://github.com/mikaem/fenicstools/issues/19
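
For example, a sketch based on the fenicstools README (check the API against your installed version), reusing V and u from above:

import numpy
from fenicstools import Probes

pts = numpy.array([[0.5, 0.5]])
probes = Probes(pts.flatten(), V)
probes(u)                # evaluate u at the probe points, in parallel
values = probes.array()  # results are gathered on the root process
if MPI.rank(mpi_comm_world()) == 0:
    print "value at centre = " + str(values[0])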
