This is a read-only copy of the old FEniCS QA forum. Please visit the new QA forum to ask questions.

DG solve in parallel

+7 votes

Dear all,

I have been trying to run my DG code in parallel and it generates the following error:

Error: Unable to perform operation in parallel.
Reason: Assembly over interior facets is not yet working in parallel.

I saw that this is simply because it has not been implemented yet (see https://answers.launchpad.net/dolfin/+question/219901 ).

But I was still wondering: would there be a way to assemble the forms in serial and perform the linear solve in parallel?

Thanks a lot!
Vincent

asked Jul 8, 2014 by V_L FEniCS User (4,440 points)

3 Answers

+7 votes
 
Best answer

This will be supported in the main branch soon. Experimental support is in https://bitbucket.org/fenics-project/dolfin/branch/garth/ghost-mesh-dof-map

answered Jul 10, 2014 by Garth N. Wells FEniCS Expert (35,930 points)
selected Jul 10, 2014 by V_L
+1 vote

But I was still wondering: would there be a way to assemble the forms
in serial and perform the linear solve in parallel?

You can serialize it if the parallelization is with OpenMP:

int num_threads = dolfin::parameters["num_threads"];
dolfin::parameters["num_threads"] = 0;

// do assembly

dolfin::parameters["num_threads"] = num_threads;

I did this for assembling along the facets (a boundary integral to get the heat flux).

answered Jul 20, 2014 by Charles FEniCS User (4,220 points)
edited Jul 21, 2014 by Charles

I tried to do as you suggested:

num_threads = dolfin.parameters["num_threads"]
dolfin.parameters["num_threads"] = 0

# do assembly

dolfin.parameters["num_threads"] = num_threads

but it still gives me the same error ("Unable to perform operation in parallel.") along with:

mpirun detected that one or more processes exited with non-zero
status, thus causing the job to be terminated.

So I changed it a little bit as follows:

mpi_comm = mpi_comm_world()
my_rank = MPI.rank(mpi_comm)
num_threads = dolfin.parameters["num_threads"]
dolfin.parameters["num_threads"] = 0

if my_rank == 0:    
    # do assembly

dolfin.parameters["num_threads"] = num_threads

This time it didn't return any error, but the process with rank 0 takes forever to do the assembly (I had to stop it, whereas this step completes in a reasonable time when I run the code in serial).

Thanks for the suggestion though! :)

I'm a novice with MPI, but I think having just rank 0 call the assembly will cause it to hang: assembly is a collective operation, so rank 0 waits for the other ranks to also reach the assemble call before it can try to perform it in parallel.

My understanding is that num_threads applies only to the current process (rank), so setting it to zero won't help with MPI, only with OpenMP. I actually get an error whenever num_threads is nonzero when using MPI; it simply states that combining MPI and OpenMP is currently not supported.

This brings up a nice point though - how can I within an MPI run solve a problem locally on each worker?

+4 votes

Parallel DG support is now in the DOLFIN development version (master branch). There are a number of DG demos that now run in parallel.

answered Jul 28, 2014 by Garth N. Wells FEniCS Expert (35,930 points)
...