
form compilation error -> not enough memory

+1 vote

I am trying to solve a problem related to hyperelasticity; see the simplified code below. It works in 2D, but unfortunately in 3D the form compilation fails. If I set dolfin.parameters["form_compiler"]["optimize"] = True, then Python runs out of memory while generating the code; if I set dolfin.parameters["form_compiler"]["optimize"] = False, then gcc runs out of memory while compiling the (1 GB) generated code, even with dolfin.parameters["form_compiler"]["cpp_optimize"] = False and dolfin.parameters["form_compiler"]["cpp_optimize_flags"] = '-O0'. I am running Kubuntu on a MacBook Pro with 16 GB of RAM, FEniCS 2016.0.2 from the PPA, and gcc 6.2.0. Any idea how to get the code to compile? Thanks!

import dolfin
##############################################################
#dolfin.parameters["form_compiler"]["optimize"] = True
#dolfin.parameters["form_compiler"]["cpp_optimize"] = False
#dolfin.parameters["form_compiler"]["cpp_optimize_flags"] = '-O0'
#
dim = 2 # 2 or 3
#
degree = 1
quadrature = None
##############################################################
if (dim == 2):
    mesh = dolfin.UnitSquareMesh(1, 1)
elif (dim == 3):
    mesh = dolfin.UnitCubeMesh(1, 1, 1)
#
dV = dolfin.Measure("dx",
    domain=mesh)
dF = dolfin.Measure("dS",
    domain=mesh)
#
function_space = dolfin.VectorFunctionSpace(
    mesh=mesh,
    family="Lagrange",
    degree=degree)
#
U = dolfin.Function(function_space)
#
I = dolfin.Identity(dim)
F = I + dolfin.grad(U)
J = dolfin.det(F)
C = F.T * F
Ic = dolfin.tr(C)
Ic0 = dolfin.tr(I)
Cinv = dolfin.inv(C)
#
E = dolfin.Constant(1.0)
nu = dolfin.Constant(0.1)
kappa = E/3/(1-2 * nu)
lmbda = E * nu/(1+nu)/(1-(dim-1) * nu)
mu = E/2/(1+nu)
C1 = mu/2
D1 = kappa/2
#
psi = D1 * (J**2 - 1 - 2 * dolfin.ln(J)) + C1 * (Ic - Ic0 - 2 * dolfin.ln(J))
S = 2 * D1 * (J**2 - 1) * Cinv + 2 * C1 * (I - Cinv)
P = F * S
#
Div_P = dolfin.div(P)
J_V = dolfin.dot(Div_P,Div_P)
#
N = dolfin.FacetNormal(mesh)
Jump_P_N = dolfin.jump(P,N)
J_F = dolfin.dot(Jump_P_N,Jump_P_N)
#
dU_test = dolfin.TestFunction(function_space)
dU_trial = dolfin.TrialFunction(function_space)
#
DJ_V = dolfin.derivative(J_V, U, dU_test)
DDJ_V = dolfin.derivative(DJ_V, U, dU_trial)
DJ_F = dolfin.derivative(J_F, U, dU_test)
DDJ_F = dolfin.derivative(DJ_F, U, dU_trial)
#
A = None
if (quadrature is None):
    A = dolfin.assemble(DDJ_V * dV + DDJ_F * dF, tensor=A)
else:
    A = dolfin.assemble(DDJ_V * dV + DDJ_F * dF, tensor=A,
        form_compiler_parameters={'quadrature_degree': quadrature})

asked Mar 29, 2017 by Martin Genet FEniCS User (1,460 points)
edited Mar 30, 2017 by Martin Genet

This compiles with little fuss on my Linux desktop using gcc-5.4. Have you tried using uflacs at all?

dolfin.parameters["form_compiler"]["representation"] = 'uflacs'

Thanks Nate. Did you try putting dim=3? Does it compile as well?

A bit slower but compiles just fine.

On my computer it is MUCH slower, but in the end it works like a charm… thanks so much!

(If you create an actual answer, I will accept it with pleasure.)

Thanks. I like points.

1 Answer

+1 vote
 
Best answer

I believe there have been issues in the past with the quadrature representation and the code generated by FFC. This was especially true of FEM formulations like those occurring in elasto-plasticity, as you're seeing: the generated code was large and computationally expensive to compile. This has been investigated, and one approach is to use uflacs.

Try using

dolfin.parameters["form_compiler"]["representation"] = 'uflacs'

which should help reduce both compilation time and memory use.

The developers of uflacs may be able to shed more light on how it achieves this.
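For reference, the form-compiler settings discussed in this thread can be collected into one place before any assembly. This is a minimal sketch: the quadrature_degree value of 4 is an illustrative choice only (the original code leaves it unset, in which case UFL estimates it, and the estimate can be very high for forms like dot(div(P), div(P))).

```python
import dolfin

# Switch FFC to the uflacs representation, which generates much more
# compact code for complex nonlinear forms than the default
# quadrature representation.
dolfin.parameters["form_compiler"]["representation"] = 'uflacs'

# Optionally cap the quadrature degree globally instead of passing
# form_compiler_parameters to each assemble() call.
# The value 4 here is an illustrative choice, not from the question.
dolfin.parameters["form_compiler"]["quadrature_degree"] = 4
```

Setting these globally means every subsequent dolfin.assemble() call picks them up, so the quadrature branch in the original script becomes unnecessary.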

answered Mar 30, 2017 by nate FEniCS Expert (17,050 points)
selected Mar 30, 2017 by Martin Genet