I know this question may have been asked and answered in several different ways. I have gone through all of them but still can't figure it out; I guess my problem is slightly different from theirs. I have been struggling with this issue for quite a few days, so hopefully someone can shed some light.
Simply put, I import an XML-format 2D mesh with 72 vertices. From a text file I also load a NumPy vector of size 144, i.e. 2 DOFs per vertex. The values in the file are ordered as follows (a short indexing example follows the list):
DOF1@Vertex1
DOF2@Vertex1
DOF1@Vertex2
DOF2@Vertex2
...
...
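Just to make that ordering explicit: the value of component c (0 for x, 1 for y) at vertex i sits at index 2*i + c of the vector. A tiny sketch of reading it back that way (same file name as below; the vertex and component picked here are arbitrary):
import numpy as np
g_np = np.loadtxt("measuredDisp_1col.txt")  # 144 values = 72 vertices * 2 components
def value_at(i, c):
    # component c (0 = x, 1 = y) of the measured displacement at vertex i
    return g_np[2 * i + c]
print(value_at(10, 1))  # e.g. the y-component at vertex 10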
My code works fine in serial, but in parallel (more than one processor) I can't get it right. Below is a minimal code snippet that reproduces the issue. When visualizing the .pvd file, the output g looks different in serial and in parallel. The correct g looks like this.
from dolfin import *
import numpy as np

mesh = Mesh("xmlmesh.xml")
V = VectorFunctionSpace(mesh, "CG", 1)

g_np = np.loadtxt("measuredDisp_1col.txt")  # order: 1st node: x, y; 2nd node: x, y; ...
g = Function(V)

dofs_local = dof_to_vertex_map(V)                 # map local dofs to vertex-ordered local indices
my_first, my_last = V.dofmap().ownership_range()  # global dof range owned by this process
dofs = filter(lambda dof: dof in range(my_last - my_first), dofs_local)  # filter ghost dofs
g_np = g_np[dofs]                                 # reorder the file values into local dof order
g.vector().set_local(g_np)
File("output/g.pvd") << g
If you want to run the code snippet, you may want to download the two files I'm using: the mesh and the NumPy vector.
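Otherwise, a minimal stand-in sketch (run once in serial) swaps my XML mesh for a built-in mesh and writes a made-up displacement file with the same length and per-vertex ordering I described; only the numbers differ from my actual data, and you would replace Mesh("xmlmesh.xml") with UnitSquareMesh(8, 8) in the snippet above accordingly:
from dolfin import *
import numpy as np
mesh = UnitSquareMesh(8, 8)                              # stand-in for xmlmesh.xml
fake = np.arange(2 * mesh.num_vertices(), dtype=float)   # DOF1@V1, DOF2@V1, DOF1@V2, ...
np.savetxt("measuredDisp_1col.txt", fake)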
Thanks in advance!
By the way, I'm using FEniCS 2016.1.0.