
Is there a way to save a submesh on each process for a distributed mesh?

0 votes

I am looking for a way to assemble/solve local subdomain problems on the pieces of a distributed mesh. The simplest way I can see would be to create submeshes, but after compiling a small example in the C++ interface I get a RuntimeError: "SubMesh::init not working in parallel". Is there any way to accomplish this, or am I dead in the water here?

Here is the small example:

#include <dolfin.h>

using namespace dolfin;

int main()
{
  // Create mesh
  UnitSquareMesh mesh(2, 2);

  // Initialize cell function (values are filled in below)
  CellFunction<std::size_t> cell_fn(mesh);

  // Fill cell function with the process id
  for (CellIterator c(mesh); !c.end(); ++c)
    cell_fn[*c] = MPI::rank(MPI_COMM_WORLD);

  // Extract the submesh from the process-id cell function
  SubMesh submesh(mesh, cell_fn, MPI::rank(MPI_COMM_WORLD));

  return 0;
}
asked Sep 9, 2015 by brk888 FEniCS Novice (990 points)
edited Sep 9, 2015 by chris_richardson

1 Answer

0 votes

SubMesh is not ideal.

I know some people are working on FETI methods which are similar to what you are thinking of.

If you just want to solve on the local mesh, you could make a copy using the MeshEditor and MPI_COMM_SELF. That would create an entirely new mesh on each process, though, and you would need some way of mapping back to the original mesh if you want to copy data between them. Not impossible, but not totally easy.
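A minimal sketch of that MeshEditor route, for illustration only (the 2D "triangle" cell type, the mesh size and the old-to-new index maps are assumptions made here, not anything from this thread):

#include <dolfin.h>
#include <map>
#include <vector>

using namespace dolfin;

int main()
{
  UnitSquareMesh mesh(8, 8);

  // Serial mesh that will hold a copy of this process's partition
  Mesh local_mesh(MPI_COMM_SELF);
  MeshEditor editor;
  editor.open(local_mesh, "triangle", 2, 2);
  editor.init_vertices(mesh.num_vertices());
  editor.init_cells(mesh.num_cells());

  // Map: local vertex index in the distributed mesh -> vertex index in the copy
  std::map<std::size_t, std::size_t> vertex_map;
  std::size_t next_vertex = 0;
  for (VertexIterator v(mesh); !v.end(); ++v)
  {
    vertex_map[v->index()] = next_vertex;
    editor.add_vertex(next_vertex, v->point());
    ++next_vertex;
  }

  // Map: local cell index in the distributed mesh -> cell index in the copy
  std::map<std::size_t, std::size_t> cell_map;
  std::size_t next_cell = 0;
  for (CellIterator c(mesh); !c.end(); ++c)
  {
    std::vector<std::size_t> vertices;
    for (VertexIterator v(*c); !v.end(); ++v)
      vertices.push_back(vertex_map[v->index()]);
    cell_map[c->index()] = next_cell;
    editor.add_cell(next_cell, vertices);
    ++next_cell;
  }
  editor.close();

  return 0;
}

The vertex_map and cell_map are exactly the old-to-new bookkeeping you would need later to move data between the two meshes.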

answered Sep 9, 2015 by chris_richardson FEniCS Expert (31,740 points)

Ok, let's say I go that route. I could use MeshEditor to step through all the cells in each partition, duplicate them, and assign a new local number to each. If, every time I duplicate a cell, I store something like a key-value pair mapping the old numbering to the new, then in principle I should have everything I need. Is there anything obviously wrong with this strategy?

Sounds OK - just create the new meshes with Mesh(MPI_COMM_SELF).
It also depends on what FunctionSpace you are using. For example, for CG1 the dofs live on the vertices, so you need to be able to map the dofs of one mesh to those of the other. You may need to use the dof_to_vertex_map() function etc.
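A sketch of what such a dof transfer could look like for CG1, reusing the vertex_map from the copy step above; copy_cg1_values is a name made up here, and shared vertices whose dofs are owned by a neighbouring process are simply skipped rather than handled properly:

#include <dolfin.h>
#include <map>
#include <vector>

using namespace dolfin;

// Copy CG1 nodal values from a Function on the distributed mesh into a
// Function on the per-process mesh copy.
void copy_cg1_values(const Function& u_global, Function& u_local,
                     const std::map<std::size_t, std::size_t>& vertex_map)
{
  // For CG1 each dof sits on a vertex, so vertex_to_dof_map on both
  // spaces gives vertex -> dof on either mesh
  const std::vector<dolfin::la_index> v2d_global
    = vertex_to_dof_map(*u_global.function_space());
  const std::vector<dolfin::la_index> v2d_local
    = vertex_to_dof_map(*u_local.function_space());

  // Values owned by this process in the distributed vector
  std::vector<double> global_values;
  u_global.vector()->get_local(global_values);

  std::vector<double> local_values(v2d_local.size(), 0.0);
  for (const auto& vm : vertex_map)
  {
    // vm.first: vertex index in the distributed mesh,
    // vm.second: vertex index in the per-process copy
    const dolfin::la_index gdof = v2d_global[vm.first];
    if (gdof < (dolfin::la_index) global_values.size())
      local_values[v2d_local[vm.second]] = global_values[gdof];
    // else: the dof at this shared vertex is owned by another process,
    // so fetching its value would need communication (not shown)
  }
  u_local.vector()->set_local(local_values);
  u_local.vector()->apply("insert");
}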

...