Faux tricubic interpolation


In the two hemispheres, octants of the computational grid look like this:
                  North               * * * * * * *
                    *                  * * * * * *
                   * *                  * * * * *
                  * * *                  * * * *
                 * * * *                  * * *
                * * * * *                  * *
               * * * * * *                  *
              * * * * * * *               South
Each face has upright and inverted equilateral triangles, and after the patches have been stitched (here), every triangle is surrounded by others on which known data is specified at vertices.

The naming of nodes is:
                 upright                  inverted

                  v' 'u                  'w       w'

                   'w'                  u' 'v' 'u' 'v 

             'v  'u' 'v'  u'                 'w'   

                w'     'w                  'u   v'

Perhaps I should apologize for such arcane notation, but I don't think so. It's a matter of trying to make the notation sufficiently redundant to supply the cues that help us to write bug-free code. Here we have the vertices 'u', 'v' and 'w', and extended edges like 'v 'u' 'v' u' (read: 'v-to-'u'-to-'v'-to-u'), and permutations of that.

A newer version of all this is under construction.

By design:
Again, define (for ultralinear interpolation, discussed here)

               / L  if L ∈ [C,R]
  U(L,C,R) =  <  C  if C ∈ [R,L]
               \ R  if R ∈ [L,C]
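
Whichever way round the arguments come, U simply returns the middle (median) of the three. A minimal C sketch of it (the name median3 is my own):

static double median3(double L, double C, double R)
{
    /* return the middle value of L, C and R */
    if (L >= C)
        return (C >= R) ? C : ((L >= R) ? R : L);
    else
        return (C <= R) ? C : ((L >= R) ? L : R);
}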

A one-parameter family of superlinear interpolations is



Let  F(u,v,w) = (1/2)( F'vu'(u,v) + F'wv'(v,w) + F'uw'(w,u) )

where

  F'vu' = (1-a)( uF'u' + vF'v' )
          + aU( (1+v)F'u' - vF'v ,  uF'u' + vF'v' ,  (1+u)F'v' - uFu' )

  F'wv' = (1-a)( vF'v' + wF'w' )
          + aU( (1+w)F'v' - wF'w ,  vF'v' + wF'w' ,  (1+v)F'w' - vFv' )

  F'uw' = (1-a)( wF'w' + uF'u' )
          + aU( (1+u)F'w' - uF'u ,  wF'w' + uF'u' ,  (1+w)F'u' - wFw' )

The first and third arguments of U(L,C,R) are the linear extrapolations and the second arguments are linear interpolations on the edges. When a=0 the (correct) linear interpolation (five flops) is


F(u,v,w)= uF'u' + vF'v' + wF'w',

and it is a=1 that gets the two slopes right (central differences) at the vertices.

The computational effort that goes into this faux-cubic interpolation is 6 linear extrapolations, 3 linear interpolations, 3 sorts of 3 items, and a few incidental adds and multiplies (36 flops when a=1), and the systematic error has been fixed.
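
For concreteness, here is the edge-based form written out as a C function. This is only a sketch under naming of my own invention: the function and argument names are mine, u+v+w = 1 is assumed inside the triangle, and median3 is the U(L,C,R) sketched above.

double faux_cubic_edges(double a, double u, double v, double w,
                        double Fu,  double Fv,  double Fw,   /* vertices 'u', 'v', 'w'         */
                        double Fpv, double Fup,              /* 'v beyond 'u',  u' beyond 'v'  */
                        double Fpw, double Fvp,              /* 'w beyond 'v',  v' beyond 'w'  */
                        double Fpu, double Fwp)              /* 'u beyond 'w',  w' beyond 'u'  */
{
    /* on each edge: blend the linear interpolation with the median of the two
       one-sided extrapolations and that same interpolation                    */
    double Fvu = (1-a)*(u*Fu + v*Fv)
               + a*median3((1+v)*Fu - v*Fpv, u*Fu + v*Fv, (1+u)*Fv - u*Fup);
    double Fwv = (1-a)*(v*Fv + w*Fw)
               + a*median3((1+w)*Fv - w*Fpw, v*Fv + w*Fw, (1+v)*Fw - v*Fvp);
    double Fuw = (1-a)*(w*Fw + u*Fu)
               + a*median3((1+u)*Fw - u*Fpu, w*Fw + u*Fu, (1+w)*Fu - w*Fwp);
    return 0.5*(Fvu + Fwv + Fuw);
}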

In a more traditional notation the naming of nodes is:
                 upright                  inverted

                  9   6                   5       7 

                    3                   8   2   1   4 

              4   1   2   8                   3    

                7       5                   6   9 
with u(1)=v(2)=w(3)=1 and u(2)=u(3)=v(3)=v(1)=w(1)=w(2)=0. Then with F(k) (one argument) for the value of F(u,v,w) at the kth node, the linear interpolation of F is



F(u,v,w)= uF(1) + vF(2) + wF(3)

The superlinear interpolation (same a as above) is then


F(u,v,w) = uF(1) + vF(2) + wF(3)
           + (a/2){  M( u[F(3)-F(6)] , w[F(1)-F(7)] )
                   + M( v[F(3)-F(9)] , w[F(2)-F(5)] )
                   + M( u[F(2)-F(8)] , v[F(1)-F(4)] )  }

where M(L,R) = U(L,0,R). This version of the superlinear interpolation is somewhat more efficient than the previous one (19 flops when a=1).
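
As a C sketch (again only a sketch, with names of my own choosing): assume the nine stencil values arrive in an array F[] indexed by the node numbers in the figure above, with F[0] unused and u+v+w = 1.

static double med0(double L, double R)   /* median of L, 0 and R:  M(L,R) = U(L,0,R) */
{
    if (L > 0) return (R > L) ? L : ((R > 0) ? R : 0);
    else       return (L > R) ? L : ((0 > R) ? R : 0);
}

double faux_cubic(double a, double u, double v, double w, const double F[10])
{
    double lin = u*F[1] + v*F[2] + w*F[3];                 /* the 5-flop linear part */
    double cor = med0(u*(F[3]-F[6]), w*(F[1]-F[7]))
               + med0(v*(F[3]-F[9]), w*(F[2]-F[5]))
               + med0(u*(F[2]-F[8]), v*(F[1]-F[4]));
    return lin + 0.5*a*cor;
}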

Since inflection points are relatively rare in smooth curves, the arguments of M(L,R) usually have the same sign. For this application the number of logical operations done by M can be reduced from 3 to near 2.5 on average with algorithms like

/* middle value of L, 0 and R, in 2 or 3 comparisons */
#define M(L,R)  ( (L) > 0   ? (R) > (L) ? (L)            \
                            : (R) > 0   ? (R) : 0        \
                            : (L) > (R) ? (L)            \
                            : 0 > (R)   ? (R) : 0 )

or
/* same result, with the comparisons taken in a different order */
#define M(L,R)  ( (L) > (R) ? (R) > 0   ? (R)            \
                            : 0 > (L)   ? (L) : 0        \
                            : 0 > (R)   ? (R)            \
                            : (L) > 0   ? (L) : 0 )
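
Either definition can be spot-checked against the median property (the values below are arbitrary illustrations):

#include <assert.h>

int main(void)
{
    assert(M( 2.0,  5.0) ==  2.0);   /*  0 < 2 < 5   -> middle is  2 */
    assert(M( 5.0,  2.0) ==  2.0);   /*  0 < 2 < 5   -> middle is  2 */
    assert(M(-3.0,  4.0) ==  0.0);   /* signs differ -> middle is  0 */
    assert(M(-3.0, -1.0) == -1.0);   /* -3 < -1 < 0  -> middle is -1 */
    return 0;
}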


My current plan is to use a conservation-law predictor / semi-Lagrangian corrector algorithm to simulate the shallow layer equations. That means there will be many interpolations required in the updates, and every improvement in the efficiency of the interpolator is worth the effort.