
solve independent equations on different cores

+7 votes

Hello,
I would like to solve two independent equations in parallel on different cores. Is it possible to do something like

....
if process == 1:
    solve(B == G, m, bcs=bcs)
if process == 2:
    solve(A == F, u, bcs=bcs)
....

in FEniCS? I have already read that it is possible to use mpirun, but how can I specify what should be done on each core (i.e. which equation should be solved on core one, etc.)?
Thanks in advance.

asked Feb 10, 2014 by bobby53 FEniCS User (1,260 points)
edited Feb 10, 2014 by bobby53

1 Answer

+3 votes
 
Best answer

Hi,

If you have a newer version of dolfin (> 1.3.0), then it is trivial; for older versions it is not possible. The following bit of code solves the Poisson equation on two processes, using a positive source on rank 0 and a negative one on rank 1. The mesh is not distributed: on rank 0 it is 10x10 and on rank 1 it is 20x20.

script.py:

from dolfin import *

# World communicator and the rank of this process (0 or 1 when run on two processes)
mpi_comm = mpi_comm_world()
my_rank = MPI.rank(mpi_comm)

# Create the mesh on mpi_comm_self so it stays local to each process (not distributed):
# 10x10 on rank 0, 20x20 on rank 1
mesh = UnitSquareMesh(mpi_comm_self(), 10 + 10*my_rank, 10 + 10*my_rank)
V = FunctionSpace(mesh, "CG", 1)
u = TrialFunction(V)
v = TestFunction(V)
u_ = Function(V)

# Positive source on rank 0, negative on rank 1
source = Constant(1) if my_rank == 0 else Constant(-1)
solve(inner(grad(u), grad(v))*dx == source*v*dx, u_,
      bcs=DirichletBC(V, 0, "on_boundary"))

plot(u_, title="Proc {}".format(my_rank))
interactive()

Run with:

mpirun -np 2 python script.py

If you need to solve completely different equations, then split the work with if my_rank == 0: etc., but use mpi_comm_self when creating each mesh.
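For example, a minimal sketch of such a split, assuming the same dolfin version as above; the two Poisson problems are only placeholders for two unrelated forms such as B == G and A == F:

from dolfin import *

my_rank = MPI.rank(mpi_comm_world())

if my_rank == 0:
    # First problem (placeholder form) lives only on rank 0, on its own local mesh
    mesh = UnitSquareMesh(mpi_comm_self(), 10, 10)
    V = FunctionSpace(mesh, "CG", 1)
    u, v = TrialFunction(V), TestFunction(V)
    m = Function(V)
    solve(inner(grad(u), grad(v))*dx == Constant(1)*v*dx, m,
          bcs=DirichletBC(V, 0, "on_boundary"))
else:
    # Second, independent problem (placeholder form) lives only on rank 1
    mesh = UnitSquareMesh(mpi_comm_self(), 20, 20)
    V = FunctionSpace(mesh, "CG", 1)
    u, v = TrialFunction(V), TestFunction(V)
    u_ = Function(V)
    solve(inner(grad(u), grad(v))*dx == Constant(-1)*v*dx, u_,
          bcs=DirichletBC(V, 0, "on_boundary"))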

answered Feb 14, 2014 by mikael-mortensen FEniCS Expert (29,340 points)
selected Feb 16, 2014 by bobby53

Hey Mikael,
thank you for your answer. I came across mpirun while searching for a way to solve my problem, but I couldn't really find an easy example.
So, from what I understand, MPI.rank(mpi_comm) gives back the number of cores with which I started the program. If I want the processes to wait for each other, I have to use MPI.barrier().
Earlier I experimented a little with the multiprocessing module, which works fine for me as well.

MPI.rank(mpi_comm) is the rank of the individual process (here 0 and 1 for the two different CPUs, thus giving two different mesh sizes). MPI.size(mpi_comm) gives the total number of processes (here 2). MPI.barrier() makes the processes wait for each other.
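For reference, a minimal sketch using all three calls (assuming dolfin > 1.3.0 as in the answer, run with mpirun -np 2 python script.py):

from dolfin import *

comm = mpi_comm_world()
rank = MPI.rank(comm)   # id of this process: 0, 1, ..., size-1
size = MPI.size(comm)   # total number of processes started by mpirun

print("Hello from process {} of {}".format(rank, size))

# Everyone blocks here until all processes have reached this line
MPI.barrier(comm)

if rank == 0:
    print("All processes have passed the barrier")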

When I try to run your code I get the following error:

  NameError: name 'mpi_comm_world' is not defined

:/ I don't have, as you can probably tell by now, any experience with MPI. What would I have to do if I want to, for example, compare the two solutions or compute the error with errornorm(u1, u2, 'l2', 3), where u1 has been computed on the first core and u2 on the second? Could you please give me some keywords as to what I need to look up?

Me too. Did you ever get this working?

Not really, and I'm not sure whether it is possible to compare the two solutions afterwards using MPI.
My "workaround" is saving the matrices and vectors as numpy arrays and solving the linear systems with np.linalg.solve(). Numpy arrays are picklable, so I can use multiprocessing to define a pool of workers and let them solve the systems in parallel (sketched below).
So far, though, it takes longer to solve the equations in parallel because of the overhead of transferring the data. Please let me know if you come up with another solution.
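A rough sketch of that workaround; the assemble_dense_system helper below is hypothetical and just builds a small Poisson problem, and converting to dense arrays for np.linalg.solve only makes sense for small systems:

import numpy as np
from multiprocessing import Pool
from dolfin import *

def assemble_dense_system(n, source_value):
    # Hypothetical helper: assemble a small Poisson problem and return
    # dense numpy copies of the matrix and right-hand side
    mesh = UnitSquareMesh(n, n)
    V = FunctionSpace(mesh, "CG", 1)
    u, v = TrialFunction(V), TestFunction(V)
    bc = DirichletBC(V, 0, "on_boundary")
    A, b = assemble_system(inner(grad(u), grad(v))*dx,
                           Constant(source_value)*v*dx, bc)
    return A.array(), b.array()

def solve_dense(system):
    # Numpy arrays are picklable, so each (A, b) pair can be shipped to a worker
    A, b = system
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    systems = [assemble_dense_system(10, 1.0), assemble_dense_system(20, -1.0)]
    pool = Pool(2)
    solutions = pool.map(solve_dense, systems)
    pool.close()
    pool.join()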

Spawning independent dolfin processes with mpi4py
...