A FEniCS Tutorial - FEniCS Project
e_Ve.vector()[:] = u_e_Ve.vector().array() - \
                   u_Ve.vector().array()
error = e_Ve**2*dx
return sqrt(assemble(error))
The errornorm procedure turns out to be identical to computing the expression (u_e - u)**2*dx directly in the present test case.
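Stripped of the FEniCS machinery, sqrt(assemble(error)) is just the square root of the integral of the squared pointwise error. A minimal pure-Python sketch of that quantity on the unit interval, using the midpoint rule (the functions u_exact and u_h below are made-up stand-ins, not part of the tutorial program):

```python
from math import sin, pi, sqrt

def l2_error(u_exact, u_h, n=1000):
    """Approximate sqrt(integral of (u_exact - u_h)^2 dx) on [0, 1]
    by the midpoint rule with n subintervals."""
    dx = 1.0 / n
    s = 0.0
    for i in range(n):
        x = (i + 0.5) * dx          # midpoint of subinterval i
        e = u_exact(x) - u_h(x)     # pointwise error
        s += e**2 * dx
    return sqrt(s)

# Stand-in functions: an "exact" solution and a perturbed approximation
u_exact = lambda x: sin(pi * x)
u_h = lambda x: sin(pi * x) + 0.01 * x * (1 - x)

print(l2_error(u_exact, u_h))  # ≈ 0.01/sqrt(30) ≈ 1.83e-3
```

In FEniCS, assemble does the corresponding integration over the mesh, so no such hand-written quadrature is needed.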
Sometimes it is of interest to compute the error of the gradient field, ||∇(u − u_e)|| (often referred to as the H¹ seminorm of the error). Given the error field e_Ve above, we simply write

H1seminorm = sqrt(assemble(inner(grad(e_Ve), grad(e_Ve))*dx))
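The same quantity can be illustrated without FEniCS: in 1D the H¹ seminorm is the square root of the integral of (e')², which a finite-difference sketch can approximate. For e(x) = sin(πx) on [0, 1] the exact seminorm is π/√2 (the function and quadrature below are illustrative choices, not part of the tutorial program):

```python
from math import sin, pi, sqrt

def h1_seminorm(e, n=2000):
    """Approximate sqrt(integral of e'(x)^2 dx) on [0, 1], with e'
    replaced by a difference quotient across each subinterval."""
    dx = 1.0 / n
    s = 0.0
    for i in range(n):
        x = (i + 0.5) * dx                       # subinterval midpoint
        de = (e(x + dx/2) - e(x - dx/2)) / dx    # central difference
        s += de**2 * dx
    return sqrt(s)

# e(x) = sin(pi x): exact H1 seminorm is pi/sqrt(2) ≈ 2.2214
print(h1_seminorm(lambda x: sin(pi * x)))
```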
Finally, we remove all plot calls and printouts of u values in the original program, and collect the computations in a function:

def compute(nx, ny, degree):
    mesh = UnitSquare(nx, ny)
    V = FunctionSpace(mesh, 'Lagrange', degree=degree)
    ...
    Ve = FunctionSpace(mesh, 'Lagrange', degree=5)
    E = errornorm(u_e, u, Ve)
    return E
Calling compute for finer and finer meshes enables us to study the convergence rate. Define the element size h = 1/n, where n is the number of divisions in the x and y directions (nx=ny in the code). We perform experiments with h_0 > h_1 > h_2 > ... and compute the corresponding errors E_0, E_1, E_2, and so forth. Assuming E_i = C h_i^r for unknown constants C and r, we can compare two consecutive experiments, E_i = C h_i^r and E_{i-1} = C h_{i-1}^r, and solve for r:

    r = ln(E_i/E_{i-1}) / ln(h_i/h_{i-1}).

The r values should approach the expected convergence rate degree+1 as i increases.
The procedure above can easily be turned into Python code:

import sys
degree = int(sys.argv[1])  # read degree as 1st command-line arg
h = []  # element sizes
E = []  # errors
for nx in [4, 8, 16, 32, 64, 128, 264]:
    h.append(1.0/nx)
    E.append(compute(nx, nx, degree))

# Convergence rates
from math import log as ln  # (log is a dolfin name too - and logg :-)
for i in range(1, len(E)):
    r = ln(E[i]/E[i-1])/ln(h[i]/h[i-1])
    print 'h=%10.2E r=%.2f' % (h[i], r)
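The rate formula itself can be sanity-checked without running any finite element computation: feed it synthetic errors generated exactly as E_i = C h_i^r and confirm it recovers the chosen rate (C = 0.5 and r = 2 below are arbitrary test values):

```python
from math import log as ln

C, r_true = 0.5, 2.0                   # arbitrary constants for the check
h = [1.0/nx for nx in [4, 8, 16, 32]]  # element sizes
E = [C * hi**r_true for hi in h]       # synthetic errors E_i = C h_i^r

for i in range(1, len(E)):
    r = ln(E[i]/E[i-1]) / ln(h[i]/h[i-1])
    print('h=%10.2E r=%.2f' % (h[i], r))   # every r equals 2.00
```

With real finite element errors the computed r values only approach degree+1 as the mesh is refined, rather than matching it exactly as in this synthetic check.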