This is a read only copy of the old FEniCS QA forum. Please visit the new QA forum to ask questions

Computation of local singular values of some tensor function

+1 vote

Hi!

I would like to find the singular value fields of e.g. a gradient tensor of some vector. If u is some vector field, the gradient is given as

Gij = grad(u).

Finding the singular values is the same as finding the square roots of the three eigenvalues of the matrix

GTG = grad(u).T*grad(u)
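For reference, this relation is easy to verify with a small standalone numpy sketch (a random 3x3 matrix standing in for the gradient, not actual FEniCS data):

```python
import numpy as np

# Hypothetical 3x3 gradient matrix; any random matrix works for this check
G = np.random.rand(3, 3)

# Singular values directly from the SVD of G (returned in descending order)
sv = np.linalg.svd(G, compute_uv=False)

# Square roots of the eigenvalues of G^T G (eigvalsh returns ascending order,
# so reverse to match the SVD ordering)
eig = np.linalg.eigvalsh(G.T @ G)
sv_from_eig = np.sqrt(eig[::-1])

print(np.allclose(sv, sv_from_eig))  # True
```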

As a solution, GTG is first computed via a fast inverse-diagonal DG matrix. I then loop over each cell in the mesh and build a cell-local matrix GTG_local containing all the values for that specific cell. Finally, numpy.linalg is applied to solve for the eigenvalues of GTG_local. Code example:

for cell in cells(mesh):

    # Extract cell dofs
    dofs_TFS = dofmap_TFS.cell_dofs(cell.index())    # TensorFunctionSpace, DG
    dof_DG = dofmap_DG.cell_dofs(cell.index())  # FunctionSpace, DG

    # Compute local matrix
    for k in xrange(len(dofs_TFS)):
        i, j = ij_pairs[k]  # Extract (i, j) pairs, e.g. (0, 0), (0, 1) and so on
        GTG_local[i][j] = GTG[dofs_TFS[k]]

    # Compute eigenvalues of matrix
    w, a = np.linalg.eig(GTG_local)

    # Sort eigenvalues by size and add to arrays
    w.sort()
    s3[dof_DG], s2[dof_DG], s1[dof_DG] = np.sqrt(w)

This solution seems to work fine, but it is problematic in several ways. The cell loop is slow, and if this process has to be repeated at each step of a time loop it becomes too time consuming. Even worse, one needs to solve N eigenvalue problems with numpy, where N is the number of cells in the mesh. Testing shows that the latter is the most time-consuming operation.

Does a more elegant and faster solution exist? Or is the solution here to wrap some C++ code implementing this approach?

Thanks!

asked Sep 15, 2015 by joakibo FEniCS User (1,140 points)

Just a comment:

Alternatively, one could just solve for Gij = grad(u) and then apply numpy.linalg.svd directly, instead of forming Gij.T*Gij. This seems to work better, but I am not sure which solution is faster.

Hi, the product of G.T and G is symmetric, so you should see some speedup (about 2x) by using an eigenvalue solver for Hermitian matrices, e.g. numpy.linalg.eigvalsh.
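For instance, a minimal numpy sketch illustrating the suggestion (any symmetric positive semi-definite matrix, such as G.T*G, qualifies):

```python
import numpy as np

A = np.random.rand(3, 3)
S = A.T @ A  # symmetric positive semi-definite, like G^T G

# eigvalsh exploits the symmetry and returns real eigenvalues in ascending
# order; eigvals treats the matrix as general and returns complex values
w_sym = np.linalg.eigvalsh(S)
w_gen = np.sort(np.linalg.eigvals(S).real)

print(np.allclose(w_sym, w_gen))  # True
```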

Thanks a lot Miro, that's certainly a nice tip! Will check it out.

1 Answer

+1 vote

With NumPy version >= 1.8.0, you can vectorise the solution of many small eigenvalue problems, i.e. the linear algebra routines accept stacked input arrays of shape NxDxD and return NxD eigenvalue arrays. If you play around with it, I am sure it is possible to completely avoid any for loops.
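A sketch of what this could look like for the problem above (random data standing in for the per-cell gradient values; N is a hypothetical cell count):

```python
import numpy as np

N = 1000  # e.g. number of cells in the mesh
A = np.random.rand(N, 3, 3)  # stand-in for the N local gradient matrices G

# Batch of G^T G matrices, shape (N, 3, 3):
# GTG[n, j, k] = sum_i A[n, i, j] * A[n, i, k]
GTG = np.einsum('nij,nik->njk', A, A)

# One vectorised call (NumPy >= 1.8) replaces N separate eig calls;
# each row of w holds the eigenvalues of one cell, in ascending order
w = np.linalg.eigvalsh(GTG)          # shape (N, 3)
singular_values = np.sqrt(w)         # singular values of each local G
```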

answered Sep 21, 2015 by KristianE FEniCS Expert (12,900 points)

Thanks for the tip! I've investigated a little and found a fast way of constructing all the local gradient matrices while avoiding the for loop. Assume that arrays of all the gradient components are available as

ux, uy, uz, vx, vy, vz, wx, wy, wz = grad(u)

Then all the local gradient matrices can be created extremely fast as

G = np.concatenate((ux,uy,uz,vx,vy,vz,wx,wy,wz)).reshape(3,3,-1).transpose(2,0,1)

Then np.linalg.svd (or np.linalg.eigvalsh) can be called on G directly, so that the singular values of all the matrices are returned at once.
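A self-contained sanity check of this trick (random component arrays standing in for the actual gradient dofs; N is a hypothetical dof count):

```python
import numpy as np

N = 100  # hypothetical number of DG dofs per component
ux, uy, uz, vx, vy, vz, wx, wy, wz = (np.random.rand(N) for _ in range(9))

# The reshape/transpose trick: a stack of N 3x3 gradient matrices,
# where G[n] is the gradient matrix at dof n
G = np.concatenate((ux, uy, uz, vx, vy, vz, wx, wy, wz)) \
      .reshape(3, 3, -1).transpose(2, 0, 1)

# Singular values of all N matrices in one vectorised call
s = np.linalg.svd(G, compute_uv=False)  # shape (N, 3), descending per row

# Compare the first stacked matrix against one built by hand
G0 = np.array([[ux[0], uy[0], uz[0]],
               [vx[0], vy[0], vz[0]],
               [wx[0], wy[0], wz[0]]])
print(np.allclose(G[0], G0))  # True
```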

So, again, thanks :-)
