Recently I have been running a program in parallel which keeps crashing because of a memory error. It is a 3D problem where the solution is a vector function, so I expect it to require a lot of memory. However, the memory error always occurs at a particular point in the program: when I call the project() function. Here are the relevant parts of my program (portions like the mesh definition, boundary conditions, variational form, etc. are excluded):
# function spaces
V2 = VectorFunctionSpace(mesh, "Lagrange", 2) # vector space
T2 = TensorFunctionSpace(mesh, "Lagrange", 2) # tensor space
...
# symmetric gradient operator
def D(u):
    return sym(nabla_grad(u))
...
u = Function(V2)
# the "u" function gets populated with values later on when solving a mechanics problem
...
# compute strain
strain = project(D(u),T2)
This is the point where the program crashes due to a memory error: when I use the project() function to compute strain. I am running my program on several nodes, each with 12 GB of memory available. Increasing the amount of available memory does not seem to help; the program still crashes when I call project().
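One workaround I have been experimenting with is sketched below. It assumes the solver_type and preconditioner_type keyword arguments of project() in legacy DOLFIN, and my (unconfirmed) suspicion that the default LU factorization used by project() is what exhausts memory in parallel:

# workaround sketch: the symmetric gradient of a degree-2 Lagrange field is
# piecewise linear and discontinuous, so a degree-1 DG tensor space can
# represent it without loss
T1 = TensorFunctionSpace(mesh, "DG", 1)

# use a Krylov solver instead of the default direct (LU) solver, which I
# suspect is responsible for the memory blow-up
strain = project(D(u), T1, solver_type="cg", preconditioner_type="jacobi")

I am not sure this is the right approach, though.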
Does anyone know a better way to compute the strain tensor?