
Setting Function values in parallel [repeated question]

+1 vote

Dear all,
I'm trying to set function values in parallel using the following code, proposed as a solution to this question (1):

from dolfin import *
import numpy
from mpi4py import MPI as nMPI

comm = nMPI.COMM_WORLD

mesh = UnitSquareMesh(10, 10)
V = FunctionSpace(mesh, "CG", 1)
u = Function(V)                                                                 
v = u.vector()                                                                  
dofmap = V.dofmap()                                                             

rank = comm.Get_rank()  

ranks = []                                                                         
visited = []                                                                    

node_min, node_max = v.local_range()                                            

for cell in cells(mesh):                                                        
  nodes = dofmap.cell_dofs(cell.index())                                        
  for node in nodes:                                                            
    if (node not in visited) and node_min <= node <= node_max:                  
      visited.append(node)                                                      
      ranks.append(float(rank))                                                    

ranks = numpy.array(ranks)                                                            
v.set_local(ranks)

The above code works fine on a single process but, in contrast to the discussion in (1), when I run it in parallel the following error message appears:

*** Error:   Unable to set local values of PETSc vector.
*** Reason:  Size of values array is not equal to local vector size.
*** Where:   This error was encountered inside PETScVector.cpp.
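For reference, here is a minimal sketch (assuming the 1.6 API) that compares the number of values my loop collects on each process with the size that set_local() actually expects:

from dolfin import *

mesh = UnitSquareMesh(10, 10)
V = FunctionSpace(mesh, "CG", 1)
v = Function(V).vector()
dofmap = V.dofmap()

node_min, node_max = v.local_range()   # global ownership range
collected = set()
for cell in cells(mesh):
    for node in dofmap.cell_dofs(cell.index()):
        if node_min <= node <= node_max:
            collected.add(node)

# set_local() expects exactly v.local_size() values; in parallel the two
# counts disagree, which triggers the error above
print MPI.rank(mpi_comm_world()), v.local_size(), len(collected)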

I'm using FEniCS 1.6. Any ideas about this?

Thanks in advance!

asked May 4, 2016 by hernan_mella FEniCS Expert (19,460 points)
edited May 5, 2016 by hernan_mella

1 Answer

+3 votes
 
Best answer

Hi, here is an updated and slightly modified example working with the 1.7.0dev version built from a recent master. It should work with 1.6.0 too:

from dolfin import *
mesh = UnitSquareMesh(10, 10)                                                    
V = FunctionSpace(mesh, 'CG', 1)  
u = Function(V)                                                                 

vec = u.vector()
values = vec.get_local()                                            

dofmap = V.dofmap()                                                             
my_first, my_last = dofmap.ownership_range()                # global

# Let's build manually function x**2 + y**2
visited = []

# 'Handle' API change of tabulate coordinates
if dolfin_version().split('.')[1] == '7':
    x = V.tabulate_dof_coordinates()
else:
    x = V.dofmap().tabulate_all_coordinates(mesh)
x = x.reshape((-1, mesh.geometry().dim()))

for cell in cells(mesh):                                                        
    dofs = dofmap.cell_dofs(cell.index())                   # local                                    
    for dof in dofs:                                                            
        if not dof in visited:
            global_dof = dofmap.local_to_global_index(dof)  # global
            if my_first <= global_dof < my_last:                  
                visited.append(dof)                                                      
                values[dof] = sum(x[dof]**2)

vec.set_local(values)
vec.apply('insert')

# Check
u0 = interpolate(Expression('x[0]*x[0]+x[1]*x[1]', degree=2), V)
u0.vector().axpy(-1, vec)

error = u0.vector().norm('linf')
if MPI.rank(mpi_comm_world()) == 0: print error
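For what it's worth, the script is meant to be launched with MPI; assuming it is saved as, say, set_values.py (the filename is just an example), it can be run with

mpirun -np 3 python set_values.py

and rank 0 should report an error of 0.0 (up to roundoff), since each process only writes the entries it owns and apply('insert') finalizes the vector afterwards.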

Btw, the old example was for 1.3.0 or something like that.

answered May 5, 2016 by MiroK FEniCS Expert (80,920 points)
selected May 10, 2016 by hernan_mella

Many thanks!
This is just what I was looking for. However, if I use a finer mesh with a CG function space of order 1 (almost 500,000 dofs), the code runs very slowly (even when running on multiple processes). Is this expected behaviour of the above loops?

Again, many thanks for your detailed answer.

Hi Hernan, sorry for the late reply. Yes, the code is slow: most of this is due to looping in Python. Also, you are setting each dof value multiple times (once for each cell that contains the dof). I am getting good speed with this code:

from dolfin import *
import sys

n_cells = int(sys.argv[1])

mesh = UnitSquareMesh(n_cells, n_cells)                                                   
V = FunctionSpace(mesh, 'CG', 1)  
u = Function(V)                                                                 

vec = u.vector()
values = vec.get_local()                                            

dofmap = V.dofmap()                                                             
my_first, my_last = dofmap.ownership_range()                # global

# 'Handle' API change of tabulate coordinates
if dolfin_version().split('.')[1] == '7':
    x = V.tabulate_dof_coordinates()
else:
    x = V.dofmap().tabulate_all_coordinates(mesh)
x = x.reshape((-1, mesh.geometry().dim()))

unowned = dofmap.local_to_global_unowned()
dofs = filter(lambda dof: dofmap.local_to_global_index(dof) not in unowned, 
              xrange(my_last-my_first))
x = x[dofs]

values[:] = x[:, 0]**2 + x[:, 1]**3

vec.set_local(values)
vec.apply('insert')

# Check
u0 = interpolate(Expression('x[0]*x[0]+x[1]*x[1]*x[1]', degree=2), V)
u0.vector().axpy(-1, vec)

error = u0.vector().norm('linf')
if MPI.rank(mpi_comm_world()) == 0: print V.dim(), error 

Here are some performance numbers for the new code:

fenics-qa|(master)$ time mpirun -np 3 python test_126_fast.py 700
Process 0: Number of global vertices: 491401
Process 0: Number of global cells: 980000
491401 0.0

real    0m9.246s
user    0m26.656s
sys     0m0.664s

and for the old one (I did not have the patience to let it finish):

fenics-qa|(master)$ time mpirun -np 3 python test_126.py 700
Process 0: Number of global vertices: 491401
Process 0: Number of global cells: 980000
^C[mpiexec@zeroSum] Sending Ctrl-C to processes as requested
[mpiexec@zeroSum] Press Ctrl-C again to force abort
Traceback (most recent call last):
  File "test_126.py", line 28, in <module>
    if not dof in visited:
KeyboardInterrupt
Traceback (most recent call last):
  File "test_126.py", line 28, in <module>
    if not dof in visited:
KeyboardInterrupt
Traceback (most recent call last):
  File "test_126.py", line 28, in <module>
    if not dof in visited:
KeyboardInterrupt

real    11m47.514s
user    35m8.408s
sys     0m2.580s

Great! This was very helpful for me. I really appreciate your help, thanks.

...