Faux tricubic interpolation
In the two hemispheres, octants of the computational grid look like this:
North  *  *  *  *  *  *  *
        *  *  *  *  *  *  *
         *  *  *  *  *  *  *
          *  *  *  *  *  *  *
           *  *  *  *  *  *  *
            *  *  *  *  *  *  *
             *  *  *  *  *  *  *
              *  *  *  *  *  *  *  South
Each face has upright and inverted equilateral triangles, and after the patches
have been stitched, every triangle is surrounded by others on which known data
is specified at the vertices.
The naming of nodes is:
        upright                     inverted

   v'          'u              'w           w'
         'w'                  u'  'v' 'u'  'v
  'v  'u' 'v'  u'                   'w'
   w'          'w              'u           v'
Perhaps I should apologize for such arcane notation, but I don't think so.
It's a matter of trying to make the notation sufficiently redundant to supply
the cues that help us to write bug-free code. Here we have the vertices, 'u',
'v' and 'w', and extended edges, like
'v'u'v'u' = 'v-to-'u'-to-'v'-to-u' and permutations of that.
A newer version of all this is under construction.
By design:
- The homogeneous coordinates are u, v & w such that u+v+w=1.
- u=1 at 'u', v=1 at 'v' and w=1 at 'w'.
- The triad [u v w] is always right-handed, relative to the outward normal.
- 'u', 'v' & 'w' refer to vertices of the central triangle.
- u', v' & w' refer to neighbors from the surrounding triangles.
- 'u, 'v & 'w refer to neighbors from the surrounding triangles.
- 'vu' refers to the extended edge 'v'u'v'u'
- 'wv' refers to the extended edge 'w'v'w'v'
- 'uw' refers to the extended edge 'u'w'u'w'
- Data at o is not used.
Again, define (for ultralinear interpolation, discussed elsewhere)

              | L  if  L ∈ [C,R]
  U(L,C,R) =  | C  if  C ∈ [R,L]
              | R  if  R ∈ [L,C]

that is, U selects whichever argument lies between the other two: the median
of the three.
A one-parameter family of superlinear interpolations is
where
The first and third arguments of U(L,C,R) are the linear extrapolations and the
second arguments are linear interpolations on the edges. When a=0
the (correct) linear interpolation (five flops) is

  F(u,v,w) = u F('u') + v F('v') + w F('w')

and it is a=1 that gets the two slopes right (central differences) at the vertices.
The computational effort that goes into this faux-cubic interpolation is: 6 linear extrapolations,
3 linear interpolations, 3 sorts of 3 items, and a few incidental adds and
multiplies (36 flops when a=1), and the systematic error has been fixed.
In a more traditional notation the naming of nodes is:
        upright                     inverted

    9           6               5           7
          3                    8  2  1  4
    4   1   2   8                   3
    7           5               6           9
with u(1)=v(2)=w(3)=1 and u(2)=u(3)=v(3)=v(1)=w(1)=w(2)=0. Then with F(k) (one argument)
for the value of F(u,v,w) at the kth node, the linear interpolation
of F is
  F(u,v,w) = u F(1) + v F(2) + w F(3)
The superlinear interpolation (same a as above) is then

  F(u,v,w) = u F(1) + v F(2) + w F(3)
           + (a/2) { M(u[F(3)-F(6)], w[F(1)-F(7)])
                   + M(v[F(3)-F(9)], w[F(2)-F(5)])
                   + M(u[F(2)-F(8)], v[F(1)-F(4)]) }
where M(L,R) = U(L,0,R). This version of superlinear interpolation is
somewhat more efficient than the previous one (19 flops when a=1).
Since inflection points are relatively rare in smooth curves, the arguments
of M(L,R) usually have the same sign. For this application
the number of logical operations done by
M can be reduced from 3 to about 2.5 on average with algorithms like
#define M(L,R) ((L)>0 ? ((R)>(L) ? (L)                \
                                 : (R)>0 ? (R) : 0)   \
                      : ((L)>(R) ? (L)                \
                                 : 0>(R) ? (R) : 0))
or
#define M(L,R) ((L)>(R) ? ((R)>0 ? (R)                \
                                 : 0>(L) ? (L) : 0)   \
                        : (0>(R) ? (R)                \
                                 : (L)>0 ? (L) : 0))
My current plan is to use a conservation-law predictor / semi-Lagrangian
corrector algorithm to simulate the shallow layer equations. That means
there will be many interpolations required in the updates, and every improvement
of the efficiency of the interpolator is worth the effort.