
Getting cell neighbors in parallel

+3 votes

Dear all,
for a regularization step I need to compute the average of a DG-0 function over the neighborhood of every cell. Using cells(current_cell) I only get the neighbors that live on the same process, so the following code gives correct results in serial but shows the mesh partition when run on two MPI processes.

from dolfin import *


def neighbor_average(alpha):
    # offset of this process' dof range, to map global dofs to local indices
    start_dof = dm0.ownership_range()[0]
    valpha = alpha.vector().get_local()
    average = Function(V0, name="average")
    vaverage = average.vector().get_local()
    for c in cells(mesh):
        dof_c = dm0.cell_dofs(c.index())[0] - start_dof
        n_count = 0
        n_sum = 0
        # in parallel this only visits the neighbor cells stored on this process
        for n in cells(c):
            dof_n = dm0.cell_dofs(n.index())[0] - start_dof
            neighbor_value = valpha[dof_n]
            n_sum = n_sum + neighbor_value
            n_count = n_count + 1
        vaverage[dof_c] = n_sum / n_count
    average.vector().set_local(vaverage)
    return average


mesh = UnitSquareMesh(10, 10)
V0 = FunctionSpace(mesh, "DG", 0)
dm0 = V0.dofmap()

f = project(Expression("x[0]"), V0)
g = neighbor_average(f)

file_g = File("g.pvd")
file_g << g

My questions are:
Is there a way to find neighboring cells and get read access to off-process neighbors without causing excessive communication?
Is there a better way to access the dof of a cell than mine?
Is there a big benefit in using C++ instead of Python for such tasks?

Thanks in advance for any help!

asked Dec 9, 2014 by hezo FEniCS Novice (230 points)

1 Answer

+3 votes

Recently the dolfin developers introduced ghost cells, which give access to neighbouring cells without extra communication. You enable this through the ghost_mode entry of the global parameters; in your case it should be set to "shared_vertex", and it has to be set before the mesh is created. The iterator cells(c) will then also visit the ghost-cell neighbours.

Secondly, get_local() without arguments only returns the entries owned by the current process. However, you can also read the values of shared (ghost) dofs through this method by passing a value array and an index array as arguments. This interface is a bit awkward, and I believe it is being addressed.
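For illustration, here is a minimal standalone sketch of the two forms (the small mesh is just for the example, and cell 0 is simply an arbitrary cell stored on the current process):

from dolfin import *
import numpy as np

mesh = UnitSquareMesh(4, 4)
V = FunctionSpace(mesh, "DG", 0)
dm = V.dofmap()
f = project(Expression("x[0]"), V)

# (a) no arguments: a numpy copy of the entries owned by this process
owned_values = f.vector().get_local()

# (b) value/index arrays: read entries by dof index; with ghost cells enabled
#     this also works for dofs of off-process neighbour cells
c = Cell(mesh, 0)
idx = np.array([dm.cell_dofs(c.index())[0]], dtype=np.intc)
val = np.array([0.0], dtype=float)
f.vector().get_local(val, idx)
print(val[0])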

Thirdly, pvd output has a bug in parallel and does not handle ghost cells, so I suggest writing XDMF files instead.

For your case, the following code should do the trick with the development version (but not with 1.4):

from dolfin import *
import numpy as np

def neighbor_average(alpha):
    valpha = alpha.vector()
    average = Function(V0, name="average")
    vaverage = average.vector().get_local()
    for c in cells(mesh):
        dof_c = dm0.cell_dofs(c.index())[0]
        n_count = 0
        n_sum = 0
        # with ghost_mode set, this also visits off-process neighbour cells
        for n in cells(c):
            dof_n = dm0.cell_dofs(n.index())[0]
            # read the neighbour value through the value/index form of
            # get_local, which also works for ghosted dofs
            neighbor_value = np.array([0.0], dtype=float)
            idx = np.array([dof_n], dtype=np.intc)
            valpha.get_local(neighbor_value, idx)
            n_sum = n_sum + neighbor_value[0]
            n_count = n_count + 1
        vaverage[dof_c] = n_sum / n_count
    average.vector().set_local(vaverage)
    return average

# ghost cells are built during partitioning, so set this before creating the mesh
parameters["ghost_mode"] = "shared_vertex"

mesh = UnitSquareMesh(10, 10)
V0 = FunctionSpace(mesh, "DG", 0)
dm0 = V0.dofmap()

f = project(Expression("x[0]"), V0)
g = neighbor_average(f)

print(norm(g))

file_g = File("g.xdmf")
file_g << g

This could easily be improved further by gathering the neighbour indices in the inner loop and making a single call to get_local per cell.
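For example, something along these lines (an untested sketch; it reuses V0, dm0 and mesh from above, and the name neighbor_average_batched is just a placeholder):

def neighbor_average_batched(alpha):
    valpha = alpha.vector()
    average = Function(V0, name="average")
    vaverage = average.vector().get_local()
    for c in cells(mesh):
        dof_c = dm0.cell_dofs(c.index())[0]
        # collect the neighbour dofs of this cell ...
        idx = np.array([dm0.cell_dofs(n.index())[0] for n in cells(c)],
                       dtype=np.intc)
        # ... and fetch all their values with a single get_local call
        values = np.zeros(len(idx))
        valpha.get_local(values, idx)
        vaverage[dof_c] = values.sum() / len(values)
    average.vector().set_local(vaverage)
    return average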

answered Dec 9, 2014 by Øyvind Evju FEniCS Expert (17,700 points)
...