We have a 2d vector function w on a 2d mesh and would like to compute the Cartesian norm of the vector at every node (= vertex for CG1).
We don't want to project (it is not required and likely to be slow). We think the ideal solution would be to use dolfin's generic vector, so that the code runs in parallel and with all backends. Below is an annotated example; we are looking for better code to replace the function compute_norm_at_vertices.
(The example below does not work in parallel and is likely to be slow.)
Many thanks,
Hans
import dolfin as df
print("Using dolfin version {}".format(df.__version__))
import numpy as np
# create a 2d mesh
mesh = df.RectangleMesh(0, 0, 1, 1, 10, 10)
# Vector function space with given data for vector function
W = df.VectorFunctionSpace(mesh, "CG", 1)
w = df.interpolate(df.Expression(["1", "x[1]"]), W)
# scalar function space to keep the norm when computed
V = df.FunctionSpace(mesh, "CG", 1)
# this is dolfin-function holding the target dolfin-vector into which
# we like to write the results - as we need to do this repeatedly, we
# want to allocate memory once only
wnorm_function = df.Function(V)
# now we have a 2d vector function in w. What we like to get is
# the norm (i.e. sqrt(w_x^2 + w_y^2)) computed at each mesh vertex
# and stored in the generic vector of wnorm_function.
#
# Here is an attempt using dolfin's generic vector
def compute_norm_at_vertices(w, target):
    wx, wy = w.split(deepcopy=True)  # copy? Can we avoid it?
    wnorm2 = wx.vector() * wx.vector() \
        + wy.vector() * wy.vector()  # doesn't work in parallel
    wnorm = np.sqrt(wnorm2)  # sqrt from numpy - avoid?
    target.vector().set_local(wnorm)
# this implementation of compute_norm_at_vertices above is
# not particularly fast, requires temporary memory allocation,
# and doesn't work in parallel. Can we do better? (One possible
# direction is sketched at the very end of this post.)
# call the function
compute_norm_at_vertices(w, wnorm_function)
# and print some data - we expect the smallest values to be 1.0
# and the largest to approach sqrt(2) = 1.41..
print(wnorm_function.vector().array())
When run serially, the code above produces the following output. This does what we want, although execution is likely to be slow:
fangohr@xubu1404$ python demo-compute-norm-question.py
Using dolfin version 1.4.0
[ 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1.00498756 1.00498756
1.00498756 1.00498756 1.00498756 1.00498756 1.00498756 1.00498756
1.00498756 1.00498756 1.00498756 1.0198039 1.0198039 1.0198039
1.0198039 1.0198039 1.0198039 1.0198039 1.0198039 1.0198039
1.0198039 1.0198039 1.04403065 1.04403065 1.04403065 1.04403065
1.04403065 1.04403065 1.04403065 1.04403065 1.04403065 1.04403065
1.04403065 1.07703296 1.07703296 1.07703296 1.07703296 1.07703296
1.07703296 1.07703296 1.07703296 1.07703296 1.07703296 1.07703296
1.11803399 1.11803399 1.11803399 1.11803399 1.11803399 1.11803399
1.11803399 1.11803399 1.11803399 1.11803399 1.11803399 1.16619038
1.16619038 1.16619038 1.16619038 1.16619038 1.16619038 1.16619038
1.16619038 1.16619038 1.16619038 1.16619038 1.22065556 1.22065556
1.22065556 1.22065556 1.22065556 1.22065556 1.22065556 1.22065556
1.22065556 1.22065556 1.22065556 1.28062485 1.28062485 1.28062485
1.28062485 1.28062485 1.28062485 1.28062485 1.28062485 1.28062485
1.28062485 1.28062485 1.3453624 1.3453624 1.3453624 1.3453624
1.3453624 1.3453624 1.3453624 1.3453624 1.3453624 1.3453624
1.3453624 1.41421356 1.41421356 1.41421356 1.41421356 1.41421356
1.41421356 1.41421356 1.41421356 1.41421356 1.41421356 1.41421356]
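As an aside on the sanity check described in the code comments above (smallest value 1.0, largest approaching sqrt(2) = 1.41..): under MPI the call vector().array() presumably returns only the process-local values, so it may be easier to look at the global extrema of the GenericVector instead. A minimal sketch, assuming that min() and max() reduce over all processes:

# sanity check via the GenericVector itself; min() and max() are
# assumed to return the global extrema when running with MPI
print("min = {}, max = {}".format(wnorm_function.vector().min(),
                                  wnorm_function.vector().max()))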
Executing the code in parallel results in the following error:
fangohr@xubu1404$ mpirun -n 2 python demo-compute-norm-question.py
Using dolfin version 1.4.0
Using dolfin version 1.4.0
Number of global vertices: 121
Number of global cells: 200
Traceback (most recent call last):
  File "demo-compute-norm-question.py", line 39, in <module>
    compute_norm_at_vertices(w, wnorm_function)
  File "demo-compute-norm-question.py", line 31, in compute_norm_at_vertices
    wnorm = np.sqrt(wnorm2) # sqrt from numpy - avoid?
  File "/usr/lib/python2.7/dist-packages/dolfin/cpp/la.py", line 1237, in __getitem__
    return _get_vector_single_item(self, indices)
RuntimeError: index must belong to this process, cannot index off-process entries in a GenericVector
Traceback (most recent call last):
  File "demo-compute-norm-question.py", line 39, in <module>
    compute_norm_at_vertices(w, wnorm_function)
  File "demo-compute-norm-question.py", line 31, in compute_norm_at_vertices
    wnorm = np.sqrt(wnorm2) # sqrt from numpy - avoid?
  File "/usr/lib/python2.7/dist-packages/dolfin/cpp/la.py", line 1237, in __getitem__
    return _get_vector_single_item(self, indices)
RuntimeError: index must belong to this process, cannot index off-process entries in a GenericVector
--------------------------------------------------------------------------
mpirun noticed that the job aborted, but has no info as to the process
that caused that situation.
--------------------------------------------------------------------------
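For reference, the kind of replacement we have in mind might look like the sketch below. It is untested in parallel and rests on two assumptions: that get_local()/set_local() touch only the process-local entries of the GenericVector (with apply("insert") finalising the assignment afterwards), and that the local dof ordering of the deep-copied components from w.split(deepcopy=True) matches that of V on every process - the same assumption the version above already makes.

def compute_norm_at_vertices(w, target):
    # split into components; deepcopy still allocates temporaries
    wx, wy = w.split(deepcopy=True)
    # work only on the process-local parts of the vectors
    wx_local = wx.vector().get_local()
    wy_local = wy.vector().get_local()
    # pointwise norm with numpy on the local arrays
    target.vector().set_local(np.sqrt(wx_local**2 + wy_local**2))
    # finalise the assignment across processes
    target.vector().apply("insert")

This still leaves open whether the deepcopy (and hence the temporary memory allocation) can be avoided.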