
Computing stresses by projection consumes too much time

+1 vote

I am solving a very simple linear elasticity problem and would like to output the stresses as well as the displacement. To this end, I project the stress onto a TensorFunctionSpace using the project function, as advised in previous answers on this forum:

from __future__ import print_function
from dolfin import *
import time
import numpy

# Form compiler options
parameters["form_compiler"]["cpp_optimize"] = True
parameters["form_compiler"]["optimize"] = True

# Sub domain for clamp at left end
def left(x, on_boundary):
    return x[0] < 0.001 #and on_boundary

# Sub domain at right end (not used below)
def right(x, on_boundary):
    return x[0] > 0.99 #and on_boundary

# Load mesh and define function space
mesh = UnitCubeMesh(15, 15, 15)

# Define function space
V = VectorFunctionSpace(mesh, "CG", 1)

# Test and trial functions
u = TrialFunction(V)
delta_u = TestFunction(V)

E  = 10.0
nu = 0.0
mu    = E / (2.0*(1.0 + nu))
lmbda = E*nu / ((1.0 + nu)*(1.0 - 2.0*nu))


# External body force
f  = Constant((1.0, 0.0, 0.0))

# Set up boundary condition at left end
zero = Constant((0.0, 0.0, 0.0))
bc = DirichletBC(V, zero, left)    

def eps(u):
    return sym(grad(u))

# Stress tensor
def sigma(u):
    return 2.0*mu*eps(u) + lmbda*tr(eps(u))*Identity(len(u))

# Forms
a = inner(sigma(u), eps(delta_u))*dx
L = inner(f, delta_u)*dx

# Displacement solution
u = Function(V)

xdmf_file = XDMFFile(mesh.mpi_comm(), "out.xdmf")

start = time.time()
print("Started solution")
solve(a == L, u,
      bcs=bc,
      solver_parameters={"linear_solver": "cg"},)
print("Finished solution in %f seconds"%(time.time()-start))

u.rename("u", "displacement")
xdmf_file.write(u, 0.0)

# Start projecting stress
start = time.time()
print("Started stress calculation")
sigma_fs = TensorFunctionSpace(mesh, "CG", 1)

# Following line is very inefficient
sigma_out = project(sigma(u), sigma_fs)
sigma_out.rename("sigma", "stress")
xdmf_file.write(sigma_out, 0.0)
print("Finished stress calculation in %f seconds"%(time.time()-start))

However, as you can see from the output below, the projection takes a lot of time, and the time required for the stress computation appears to grow much faster than linearly with the number of elements:

Started solution
Solving linear variational problem.
Finished solution in 0.189142 seconds
Started stress calculation
Finished stress calculation in 17.490856 seconds

I understand that the solver is a black box, but I often need to output not just the stress but other fields of comparable complexity as well. If I am to see data other than the solution vector in the output, I need a faster way of computing, or storing, these quantities. I doubt I am the only one who has run into this problem, so I am hoping someone has better advice.

Thanks in advance.

asked Oct 11, 2016 by hosolmaz FEniCS User (1,510 points)

Are you sure it is not the xdmf_file.write that takes time?

The ILU used to be the default solver for projection and it is pretty fast.
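
For reference, a minimal sketch of requesting an iterative solver with an ILU preconditioner explicitly, reusing sigma, u and sigma_fs from the question (the keyword arguments follow the 2016.x project() signature):

# Sketch: CG solve with ILU preconditioning for the mass-matrix projection
sigma_out = project(sigma(u), sigma_fs,
                    solver_type='cg', preconditioner_type='ilu')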

Yes, xdmf_file.write is pretty quick:

Started stress calculation
Finished stress calculation in 16.603034 seconds
Started outputting stresses
Finished outputting stresses in 0.010333 seconds

Which version of FEniCS are you using? There used to be a problem with TensorFunctionSpace that caused it to be very slow.

Is it faster if you split the projection into six scalar projections?
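
For example, a quick sketch of projecting a single component, reusing mesh, sigma and u from the question:

# Sketch: project one stress component onto a scalar CG1 space
sigma_xx = project(sigma(u)[0, 0], FunctionSpace(mesh, "CG", 1),
                   solver_type='cg')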

I am using version 2016.1. I did a few trials, and here are the results.

+------------------------------------------------+----------------+-----------------+
|        Stress calculation time (seconds)       | 20250 elements | 162000 elements |
+------------------------------------------------+----------------+-----------------+
| Using CG solver, whole stress tensor           |       1.684316 |       20.613609 |
| Using CG solver, projecting 6 terms separately |       0.238721 |         6.04954 |
| Using LU solver, whole stress tensor           |      17.284297 | Depletes memory |
| Using LU solver, projecting 6 terms separately |       0.648268 |       23.457449 |
+------------------------------------------------+----------------+-----------------+

Here is the code for future reference; I commented and uncommented the corresponding lines for the different trials:

from __future__ import print_function
from dolfin import *
import time
import numpy

# Form compiler options
parameters["form_compiler"]["cpp_optimize"] = True
parameters["form_compiler"]["optimize"] = True

# Sub domain for clamp at left end
def left(x, on_boundary):
    return x[0] < 0.001 #and on_boundary

# Sub domain at right end (not used below)
def right(x, on_boundary):
    return x[0] > 0.99 #and on_boundary

# Load mesh and define function space
mesh = UnitCubeMesh(15, 15, 15)
# mesh = UnitCubeMesh(30, 30, 30)

# Define function space
V = VectorFunctionSpace(mesh, "CG", 1)

# Test and trial functions
u = TrialFunction(V)
delta_u = TestFunction(V)

E  = 10.0
nu = 0.0
mu    = E / (2.0*(1.0 + nu))
lmbda = E*nu / ((1.0 + nu)*(1.0 - 2.0*nu))

# External body force
f  = Constant((1.0, 0.0, 0.0))

# Set up boundary condition at left end
zero = Constant((0.0, 0.0, 0.0))
bc = DirichletBC(V, zero, left)

def eps(u):
    return sym(grad(u))

# Stress tensor
def sigma(u):
    return 2.0*mu*eps(u) + lmbda*tr(eps(u))*Identity(len(u))

# Forms
a = inner(sigma(u), eps(delta_u))*dx
L = inner(f, delta_u)*dx

# Displacement solution
u = Function(V)

xdmf_file = XDMFFile(mesh.mpi_comm(), "out.xdmf")

start = time.time()
print("Started solution")
solve(a == L, u,
      bcs=bc,
      solver_parameters={"linear_solver": "cg"},)
print("Finished solution in %f seconds"%(time.time()-start))

u.rename("u", "displacement")
xdmf_file.write(u, 0.0)

start = time.time()
print("Started normal stress calculation")

sigma_fs = TensorFunctionSpace(mesh, "CG", 1)
sigma_out = project(sigma(u), sigma_fs, solver_type='cg', preconditioner_type='hypre_amg')
# sigma_out = project(sigma(u), sigma_fs)
sigma_out.rename("sigma", "stress")

print("Finished normal stress calculation in %f seconds"%(time.time()-start))
xdmf_file.write(sigma_out, 0.0)


start = time.time()
print("Started projected stress calculation")
scalar_space = FunctionSpace(mesh,'CG',1)
tensor_space = TensorFunctionSpace(mesh, "CG", 1)
sigma_out_separate = Function(tensor_space)
directions = [
    Constant(numpy.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]])) ,
    Constant(numpy.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])) ,
    Constant(numpy.array([[0, 0, 0], [0, 0, 0], [0, 0, 1]])) ,
    Constant(numpy.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]])) ,
    Constant(numpy.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]])) ,
    Constant(numpy.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]])) ,
]

# Each direction is assigned to the matching row-major component of the
# 3x3 tensor space: (0,0)->0, (1,1)->4, (2,2)->8, (0,1)->1, (1,2)->5, (0,2)->2
for direction, comp in zip(directions, [0, 4, 8, 1, 5, 2]):
    sigma_comp = project(inner(sigma(u), direction), scalar_space,
                         solver_type='cg', preconditioner_type='hypre_amg')
    # sigma_comp = project(inner(sigma(u), direction), scalar_space)
    FunctionAssigner(tensor_space.sub(comp), scalar_space).assign(
        sigma_out_separate.sub(comp), sigma_comp)

print("Finished projected stress calculation in %f seconds"%(time.time()-start))
xdmf_file.write(sigma_out_separate, 0.0)

Calculating the stress tensor components separately seems to do the trick, although I am open to more elegant and simpler solutions.
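
One possible alternative, sketched below under the assumption that it is appended to the script above (it reuses tensor_space, sigma and u), is to set up the projection by hand so that the tensor-space mass matrix and the Krylov solver are built only once and can be reused every time the stress is written out:

w = TrialFunction(tensor_space)
v = TestFunction(tensor_space)

# Mass matrix of the tensor space; assembled once and reused for every output
A = assemble(inner(w, v)*dx)
solver = KrylovSolver("cg", "hypre_amg")

# Per output: assemble the right-hand side and solve for the projected stress
b = assemble(inner(sigma(u), v)*dx)
sigma_proj = Function(tensor_space)
solver.solve(A, sigma_proj.vector(), b)

Whether this beats the component-wise projection is not timed in the thread; it simply avoids reassembling the mass matrix and re-creating the solver on every output.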

2 Answers

+1 vote

Check the documentation for project() here. I believe you're using a direct LU solver by default.

Try

sigma_out = project(sigma(u), sigma_fs, solver_type='cg', preconditioner_type='hypre_amg')
answered Oct 11, 2016 by nate FEniCS Expert (17,050 points)
+2 votes

So I guess the problems with TensorFunctionSpace remain unsolved. Maybe you could open an issue on Bitbucket if this has not already been done.

answered Oct 14, 2016 by KristianE FEniCS Expert (12,900 points)

I found this issue, but I am not sure whether it addresses the same problem. It does not appear to have reached a conclusion.

...