Hi all,
I'm trying to implement a somewhat complicated expression to calculate a tensor field from a vector \( u \) (which is the output of a standard `a == L` solver). The tensor values are assigned using a conditional expression, where the condition is essentially \( \|\nabla u\| > y \) for some constant scalar \( y \). If the condition fails, the tensor elements are all zero; otherwise they are filled using another algebraic function of \( \nabla u \) and the previous iterate of the tensor I am trying to calculate. The result is then fed back into the solver as part of the linear form \( L \).
I have a method that works, using the conditional() function from UFL followed by a project() stage. However, it isn't particularly fast, and I have concerns about whether it is giving a reasonable result. Can anyone either reassure me that this is in fact the best way to do it, or suggest an alternative?
I did consider project()ing the \( \nabla u \) field and then doing all of the operations element-wise by looping over the nodes, but keeping track of the DOF / vertex relationships across the scalar, vector, and tensor spaces seems overly complex for uncertain performance gains (especially since this requires a project() stage anyway!).
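For what it's worth, the element-wise logic I'm after can be sketched in plain NumPy, operating on arrays of per-point values rather than Functions. This is only an illustration of the conditional update, not the FEniCS implementation: `update_tensor`, `fill`, and the particular algebraic fill used below are hypothetical stand-ins for my actual expression.

```python
import numpy as np

def update_tensor(grad_u, tau_prev, y, fill):
    """Per-point conditional update: where ||grad_u|| > y, compute the
    tensor via fill(grad_u, tau_prev); elsewhere leave it at zero.

    grad_u   : (n, d, d) array of gradient values at n points
    tau_prev : (n, d, d) previous iterate of the tensor
    y        : scalar threshold
    fill     : callable giving the algebraic fill (hypothetical)
    """
    # Frobenius norm of the gradient at each point
    norm = np.linalg.norm(grad_u, axis=(1, 2))
    mask = norm > y
    tau = np.zeros_like(tau_prev)
    tau[mask] = fill(grad_u[mask], tau_prev[mask])
    return tau

# Toy example: a made-up fill (symmetrised gradient plus old tensor).
fill = lambda g, t: 0.5 * (g + np.transpose(g, (0, 2, 1))) + t
g = np.array([[[2.0, 0.0], [0.0, 0.0]],    # ||.|| = 2   > 1 -> filled
              [[0.1, 0.0], [0.0, 0.0]]])   # ||.|| = 0.1 <= 1 -> zero
t = np.zeros((2, 2, 2))
tau = update_tensor(g, t, 1.0, fill)
```

The conditional()/project() route effectively does the same thing in one pass through UFL, which is why duplicating it by hand only seems worthwhile if the bookkeeping cost can be beaten.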