
calculate average of one component of a vector in parallel

0 votes

Hi,

I am solving for velocity and pressure using Navier-Stokes in fenics. Then I use the velocity vector for ensemble averaging in the following integral:

u, p = w.split()
avgVelx1 = assemble(u[0]*dx)

It works fine in serial, but when I run the same command in parallel it prints a bunch of numbers for the above integral, and they are different from the serial result. How can I fix it?

asked Dec 14, 2016 by Abi FEniCS Novice (270 points)

1 Answer

0 votes

Maybe you need to sum the values of avgVelx1 distributed across the processes. Try something like:

comm = mpi_comm_world()
avgVelx1 = MPI.sum(comm, assemble(u[0]*dx))
print "sum: ", avgVelx1

UPDATE: In versions 1.6, 2016.1 and 2016.2 the assemble function automatically sums the result over all processes.
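If assemble already performs this reduction in those versions, calling MPI.sum on top of it would add the (already global) value once per process. A minimal sketch for those versions (assuming dolfin 1.6 or later), with no manual reduction:

# Sketch assuming dolfin >= 1.6: assemble already sums over all
# processes, so the global value is available on every rank.
avgVelx1 = assemble(u[0]*dx)
print "sum: ", avgVelx1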

answered Dec 14, 2016 by hernan_mella FEniCS Expert (19,460 points)
edited Dec 15, 2016 by hernan_mella

Thanks, but it does not give me the correct answer; the result is off. The integral should be 0.2, but in parallel I get 14. I think this is because some nodes are shared between processors and get counted more than once. If I reduce the number of processors, the result of the integral also drops, e.g. to 7!

Maybe your mesh has not been distributed over the processes? The following example works fine for me in versions 1.6, 2016.1 and 2016.2:

from dolfin import *

mesh = UnitSquareMesh(10, 10)
V = VectorFunctionSpace(mesh, "CG", 1)
u = interpolate(Expression(("x[0]", "x[1]"), degree=1), V)

avgVelx1 = assemble(u[1]*dx)
print "avg: ", avgVelx1

I read the mesh from an h5 file and I think it's fine. Have you tried it in both serial and parallel and gotten similar results?

Yes, the above example was executed using:

mpirun -n x python example.py

with x ranging from 1 to 10. In each case the result was the same (avgVelx1 = 0.5 on each process).
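Note that every rank executes the print statement, so one value appears per process. A small sketch (assuming dolfin 1.6 or later, which provides MPI.rank) to print it only once:

comm = mpi_comm_world()
if MPI.rank(comm) == 0:  # only the first process reports
    print "avg: ", avgVelx1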

Here are some parts of my code:

from dolfin import *
import numpy as np

mesh_filename = 'light_mesh_final.h5'
mesh = Mesh()
mesh_in = HDF5File(mpi_comm_world(), mesh_filename, 'r')
mesh_in.read(mesh, 'mesh', False)
mesh_in.close()

set_log_level(PROGRESS)

# Define function spaces

V = VectorFunctionSpace(mesh, "CG", 2)
P = FunctionSpace(mesh, "CG", 1)
W = MixedFunctionSpace([V, P]) #Taylor-Hood

.
.
.

# Define variational problem

(u, p) = TrialFunctions(W)
(v, q) = TestFunctions(W)
a = (inner(grad(u), grad(v)) - div(v)*p + q*div(u))*dx
L = inner(f, v)*dx

# Compute solution

A, b = assemble_system(a, L)
bc_4.apply(A,b)

w = Function(W)
solve(A, w.vector(), b,'gmres', 'hypre_amg')
u, p = w.split()

out_velocity = HDF5File(mpi_comm_world(),'velocity'+ '.h5','w')
out_velocity.write(u,'velocity')
out_velocity.close()

comm = mpi_comm_world()
avgVelx1 = MPI.sum(comm, assemble(u[0]*dx))
print "sum: ", avgVelx1

Your code looks fine. Is there any difference if you use

mesh = Mesh(mpi_comm_world())
mesh_in = HDF5File(mesh.mpi_comm(), mesh_filename, 'r')
avgVelx1 = assemble(u[0]*dx)
print "avg: ", avgVelx1

?

Can you verify whether the mesh has actually been distributed over the processes?
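For instance, a minimal check (a sketch assuming dolfin 1.6+, where mesh.num_cells() reports only the locally owned cells) is to print the cell count on each rank; if every rank reports the full mesh, it was not distributed:

comm = mesh.mpi_comm()
print "rank", MPI.rank(comm), "has", mesh.num_cells(), "local cells"

With a properly partitioned mesh, each rank should report roughly the total number of cells divided by the number of processes.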

...