# Assigning expansion coefficients to function does not work in parallel

### I have a problem trying to assign material coefficients on the basis of subdomains. The problem can be illustrated by a slightly modified part of the FEniCS manual example from chapter 1.5 (Handling domains with different materials). The code reads:

```python
from dolfin import *
import sys, math, numpy

rank = MPI.process_number()

nx = 4; ny = 6
mesh = UnitSquareMesh(nx, ny)

# Define a MeshFunction over two subdomains
subdomains = MeshFunction('size_t', mesh, 2)

class Omega0(SubDomain):
    def inside(self, x, on_boundary):
        return x[1] <= 0.5

class Omega1(SubDomain):
    def inside(self, x, on_boundary):
        return x[1] >= 0.5

# Mark subdomains with numbers 0 and 1
subdomain0 = Omega0()
subdomain0.mark(subdomains, 0)
subdomain1 = Omega1()
subdomain1.mark(subdomains, 1)

V0 = FunctionSpace(mesh, 'DG', 0)
k = Function(V0)

print 'p{}: mesh:'.format(rank), mesh
print 'p{}: subdomains:'.format(rank), subdomains
print 'p{}: k:'.format(rank), k

# Loop over all cell numbers, find the corresponding
# subdomain number and fill in the cell value in k
k_values = [1.5, 50]  # values of k in the two subdomains
for cell_no in range(len(subdomains.array())):
    subdomain_no = subdomains.array()[cell_no]
    k.vector()[cell_no] = k_values[subdomain_no]

print 'p{}:'.format(rank), k.vector().array()

# Much more efficient vectorized code
help = numpy.asarray(subdomains.array(), dtype=numpy.int32)
k.vector()[:] = numpy.choose(help, k_values)

print 'p{}:'.format(rank), k.vector().array()
```


Run serially, the code prints the expected output:

```
p0: mesh: <Mesh of topological dimension 2 (triangles) with 35 vertices and 48 cells, ordered>
p0: subdomains: <MeshFunction of topological dimension 2 containing 48 values>
p0: k: w_0
p0: [ 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5
1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5
50. 50. 50. 50. 50. 50. 50. 50. 50. 50. 50. 50.
50. 50. 50. 50. 50. 50. 50. 50. 50. 50. 50. 50. ]
p0: [ 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5
1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5
50. 50. 50. 50. 50. 50. 50. 50. 50. 50. 50. 50.
50. 50. 50. 50. 50. 50. 50. 50. 50. 50. 50. 50. ]
```
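As an aside, the vectorized line relies only on `numpy.choose` picking `k_values[marker]` for every entry of the marker array. A minimal stand-alone sketch of that mapping, using a small made-up marker array:

```python
import numpy

k_values = [1.5, 50]                                        # coefficient per subdomain
markers = numpy.array([0, 0, 1, 1, 0], dtype=numpy.int32)   # example cell markers

# one coefficient per cell, selected by its subdomain marker
k_cellwise = numpy.choose(markers, k_values)
print(k_cellwise)
```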


Run with `mpirun -np 2`, the result is (output from the two processes is interleaved):

```
Process 0: Number of global vertices: 35
Process 0: Number of global cells: 48
p1: mesh: <Mesh of topological dimension 2 (triangles) with 20 vertices and 24 cells, ordered>
p1: subdomains: <MeshFunction of topological dimension 2 containing 24 values>
p0: mesh: <Mesh of topological dimension 2 (triangles) with 22 vertices and 24 cells, ordered>
p0: subdomains: <MeshFunction of topological dimension 2 containing 24 values>
p1: k: w_0
p0: k: w_0
p1: [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.]
p0: [ 1.5 1.5 1.5 50. 50. 50. 50. 50. 50. 50. 50. 50.
50. 50. 50. 50. 50. 50. 50. 50. 50. 50. 50. 50. ]
Traceback (most recent call last):
File "err.py", line 51, in <module>
Traceback (most recent call last):
File "err.py", line 51, in <module>
k.vector()[:] = numpy.choose(help, k_values)
File "/usr/lib/python2.7/dist-packages/dolfin/cpp/la.py", line 1183, in __setslice__
k.vector()[:] = numpy.choose(help, k_values)
self.__setitem__(slice(i, j, 1), values)
File "/usr/lib/python2.7/dist-packages/dolfin/cpp/la.py", line 1214, in __setitem__
File "/usr/lib/python2.7/dist-packages/dolfin/cpp/la.py", line 1183, in __setslice__
_set_vector_items_array_of_float(self, indices, values)
RuntimeError: non matching dimensions on input
self.__setitem__(slice(i, j, 1), values)
File "/usr/lib/python2.7/dist-packages/dolfin/cpp/la.py", line 1214, in __setitem__
_set_vector_items_array_of_float(self, indices, values)
RuntimeError: non matching dimensions on input
```


I understand that the vectorized version

```python
k.vector()[:] = numpy.choose(help, k_values)
```

fails because the function `k` expects 48 values, one for each cell of the global mesh, while `subdomains` (and thus `help`) contains only the 24 values of the sub-mesh held by the given process.
I also understand that the naive implementation with a loop over cells works fine in process 0, which has information on the global indices of the mesh entities, and fails in process 1, which does not have this information.

Is that right?
How can I make this example work with MPI (still using the MeshFunction for marking cells with different material properties)?
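For what it's worth, the size mismatch itself can be reproduced in plain NumPy, independent of DOLFIN; the 48 and 24 below stand for the global and per-process cell counts from the run above:

```python
import numpy

k_values = [1.5, 50]
global_vec = numpy.zeros(48)                         # one slot per cell of the global mesh
local_markers = numpy.zeros(24, dtype=numpy.int32)   # markers of the local sub-mesh only

try:
    # same shape mismatch as k.vector()[:] = numpy.choose(help, k_values)
    global_vec[:] = numpy.choose(local_markers, k_values)
    mismatch = None
except ValueError as err:
    mismatch = str(err)

print('mismatch: %s' % mismatch)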

Or, if you want to set from a MeshFunction, try this:

```python
class MyExpr2D(Expression):
    def __init__(self, cell_fun):
        assert(cell_fun.dim() == 2)
        self.cell_fun = cell_fun
    def eval_cell(self, values, x, cell):
        values[0] = self.cell_fun[cell.index]

M = UnitSquareMesh(50, 50)
MF = CellFunction("double", M)
E = MyExpr2D(MF)
Q = FunctionSpace(M, "DG", 0)

K = Function(Q)
K.interpolate(E)
```

answered Jun 7, 2013 by FEniCS Expert (31,740 points)
selected Jun 11, 2013
Thanks! This seems to be a good solution for me. I do not feel comfortable with it, but it should work ;-)
Please reply if it works for you. Strange things can happen in parallel.
You might also be interested in looking at the HDF5 and XDMF file formats which allow you to read and write MeshFunctions in parallel.
VTK supports MeshFunctions in parallel also.
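Stripped of the DOLFIN machinery, the per-cell lookup that `eval_cell` performs is just a marker-indexed table lookup using only local cell indices. A plain-Python sketch with hypothetical data:

```python
# Hypothetical stand-in for a CellFunction: local cell index -> subdomain marker.
cell_markers = [0, 0, 1, 1, 0, 1]
k_values = [1.5, 50]   # coefficient per subdomain, as in the example

def eval_cell(cell_index):
    # what Mi.eval_cell does for one cell: marker lookup, then value lookup
    return k_values[cell_markers[cell_index]]

k_local = [eval_cell(c) for c in range(len(cell_markers))]
print(k_local)
```

Because every lookup goes through the cell's *local* index, each process fills in exactly the cells it owns, which is why the interpolation approach works in parallel.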

The test code:

```python
from dolfin import *
import sys, math, numpy

rank = MPI.process_number()

nx = 4; ny = 6
mesh = UnitSquareMesh(nx, ny)

subdomains = CellFunction('size_t', mesh)

class Omega0(SubDomain):
    def inside(self, x, on_boundary):
        return x[1] <= 0.5

class Omega1(SubDomain):
    def inside(self, x, on_boundary):
        return x[1] >= 0.5

# Mark subdomains with numbers 0 and 1
subdomain0 = Omega0()
subdomain0.mark(subdomains, 0)
subdomain1 = Omega1()
subdomain1.mark(subdomains, 1)

V0 = FunctionSpace(mesh, 'DG', 0)
k = Function(V0)

print 'p{}: mesh:'.format(rank), mesh
print 'p{}: subdomains:'.format(rank), subdomains
print 'p{}: k:'.format(rank), k

class Mi(Expression):
    def __init__(self, cell_fun):
        self.cell_fun = cell_fun
        self.values = [1.5, 50]
    def eval_cell(self, values, x, cell):
        values[0] = self.values[self.cell_fun[cell.index]]

mi = Mi(subdomains)
k.interpolate(mi)

print 'p{}:'.format(rank), k.vector().array()
```


works as expected:

```
oer:~/pliki/fenics_jstar/simulators/msp> mpirun -np 2 python sol.py
Process 0: Number of global vertices: 35
Process 0: Number of global cells: 48
p1: mesh:
p0: mesh:
p0: subdomains:
p0: k: w_0
p1: subdomains:
p1: k: w_0
p0: [ 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5
1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 50. 50. 50. ]
p1: [ 1.5 1.5 1.5 50. 50. 50. 50. 50. 50. 50. 50. 50.
50. 50. 50. 50. 50. 50. 50. 50. 50. 50. 50. 50. ]
```

It will take some minutes to test it in my simulator, but I don't expect problems ;-)

VTK only supports visualisation output, not input.
You're right. I didn't know that XDMF supports input of Mesh and MeshFunction.