
How to do array slicing when running in parallel?

0 votes

Hi there!

I have some problems when running my FEniCS implementation in parallel with mpirun -np and so on. The problems occur because I apply list slicing in several places. E.g. I have a PETSc vector x and want to assign new values to it as

x[:] = some_array

However, when running in parallel with MPI this causes problems; the error message is

File "/usr/lib/python2.7/dist-packages/dolfin/cpp/la.py", line 1226, in setslice
self.setitem(slice(i, j, 1), values)
File "/usr/lib/python2.7/dist-packages/dolfin/cpp/la.py", line 1257, in setitem
_set_vector_items_array_of_float(self, indices, values)
RuntimeError: non matching dimensions on input

I am not experienced with MPI at all; I have tried reading here and there, but I can't find an answer to my problem: how do I easily apply list slicing in parallel?
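
For reference, here is a minimal script that reproduces the error for me when run with e.g. mpirun -np 2 (a sketch only: the array has the global length, while each process only owns a block of the vector):

from dolfin import *
import numpy as np

mesh = UnitIntervalMesh(1000)
V = FunctionSpace(mesh, 'CG', 1)
x = Function(V).vector()

# Global length of x vs the part owned by this process
print("global size %d, local size %d" % (x.size(), x.local_size()))

some_array = np.zeros(x.size())
x[:] = some_array  # works in serial, fails in parallel with the error above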

Thanks!

asked Oct 21, 2014 by joakibo FEniCS User (1,140 points)

1 Answer

+2 votes
 
Best answer

Hi Joakim, try one of the approaches below:

from dolfin import *
import numpy as np

comm = mpi_comm_world()
rank = MPI.rank(comm)
rank_shift = rank + 1

mesh = UnitIntervalMesh(1000)
V = FunctionSpace(mesh, 'CG', 1)
# Get dofs owned by process
first_dof, last_dof = V.dofmap().ownership_range()

u = Function(V)
U = u.vector()

w = interpolate(Expression('sin(3*x[0])'), V)
W = w.vector()

# Assign with slicing
U[first_dof:last_dof] = W[first_dof:last_dof]
plot(u, interactive=True)

# Change W just to see the difference
W[:] += rank_shift

# Assign with set_local
U.set_local(W.get_local())
U.apply('insert')
plot(u, interactive=True)

# Now assign numpy values
values = rank_shift*np.ones(last_dof - first_dof)
U[first_dof:last_dof] = values  # or U.set_local(values)
plot(u, interactive=True) 
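
To actually see the parallel behaviour, save the snippet as e.g. demo.py (name assumed) and run it with something like

mpirun -np 4 python demo.py

Each process then owns a different [first_dof, last_dof) range, which is what the slicing and set_local calls above rely on.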
answered Oct 21, 2014 by MiroK FEniCS Expert (80,920 points)
selected Oct 22, 2014 by joakibo

Thanks, Miro! I'll give it a try and report back.

Thank you for an instructive answer, excellent! However, I have discovered a new problem:

When running with mpirun, so that the mesh is split across processes, I expected

DG = FunctionSpace(mesh, "DG", 0)
DG_dofmap = DG.dofmap()

to return the dofmap of the local mesh partition only. However, it returns the dofmap of the whole mesh. Why? Is there any way to fix this? If I print DG_dofmap I get

<DofMap of global dimension 40000>

which is the global dimension, not the dimension of the local mesh partition.

Maybe this would be more suited for a new question :-)

EDIT: I think I found something; let me check out the ownership_range() function. I'm sure it'll fix my problem.
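
Something along these lines seems to work (a sketch only; the UnitSquareMesh and the prints are just for illustration):

from dolfin import *

comm = mpi_comm_world()
rank = MPI.rank(comm)

mesh = UnitSquareMesh(100, 100)  # placeholder mesh, use your own
DG = FunctionSpace(mesh, "DG", 0)

# Global dimension of the space, as reported by the DofMap repr
print("rank %d: global dimension %d" % (rank, DG.dim()))

# Half-open range [first, last) of global dofs owned by this process
first, last = DG.dofmap().ownership_range()
print("rank %d: owns dofs [%d, %d), local size %d" % (rank, first, last, last - first))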

...