This is a read only copy of the old FEniCS QA forum. Please visit the new QA forum to ask questions

SubMesh workaround for parallel computation

+1 vote

I'm solving a Maxwell problem on a larger domain $\Omega$ and then using the result as a right-hand side $f$ for a Navier-Stokes time integration on a subdomain $\Omega_s\subset \Omega$. Right now, I'm using SubMesh(...) to extract the submesh corresponding to $\Omega_s$. This function, however, does not work in parallel (which I believe may be related to the fact that organizing a consistent partitioning of $\Omega$ vs. $\Omega_s$ is nontrivial).

Since each of the two steps (solving Maxwell, time-stepping Navier-Stokes) runs in parallel, and the transition has to be done only once (as a preprocessing step for Navier-Stokes), I thought there might be a parallel-friendly way of creating the submesh corresponding to $\Omega_s$ and carrying $f$ over -- maybe even without writing the data out to files and rereading it.

Any ideas?

asked Dec 18, 2013 by nschloe FEniCS User (7,120 points)
edited Dec 18, 2013 by nschloe

1 Answer

+3 votes

I do not think you can avoid storing the relevant data to file and then rereading it in a parallel run, as SubMesh is not yet supported in parallel. It is an issue that is ripe for fixing, so feel free to contribute :) If you do, you might take a look at BoundaryMesh, which was recently fixed to work in parallel.

answered Dec 18, 2013 by johanhake FEniCS Expert (22,480 points)

I imagine that a problem you would quickly run into here is that dolfin doesn't support a mesh which doesn't live on all processes.

How did you handle that when you fixed BoundaryMesh in parallel?

I didn't. It will break if you have mesh partitions that are not connected to the external boundary. As I see it, this is a general dolfin issue.

I have used a very simple workaround for this in another application (a slice of a mesh): during mesh creation, just move a single cell from the largest mesh partition to each process that has no cells.

Support for meshes using different MPI communicators is in progress - see https://bitbucket.org/fenics-project/dolfin/branch/garth/use-mpi-communicator

For me to write out that data, I would need to be able to run a computation in parallel (check) and write out the resulting data for only a submesh (i.e., one subdomain index). Is that possible without using SubMesh(mesh, subdomains, index)?

...