Scattered manifold-valued data approximation

We consider the problem of approximating a function f from a Euclidean domain to a manifold M by scattered samples (f(ξ_i))_{i∈I}, where the data sites (ξ_i)_{i∈I} are assumed to be locally close but can otherwise be far apart points scattered throughout the domain. We introduce a natural approximant based on combining the moving least squares method and the Karcher mean. We prove that the proposed approximant inherits the accuracy order and the smoothness of its linear counterpart. The analysis also tells us that the use of the Karcher mean (dependent on a Riemannian metric and the associated exponential map) is inessential, and one can replace it by a more general notion of 'center of mass' based on a general retraction on the manifold. Consequently, we can substitute the Karcher mean by a more computationally efficient mean. We illustrate our work with numerical results which confirm our theoretical findings.


Introduction
Let f : Ω ⊂ R^s → M, with M a Riemannian manifold, be an unknown function whose values we know only at a set of distinct points (ξ_i)_{i∈I} ⊂ Ω̄. We are concerned with finding an approximant to f. Such an approximation problem for manifold-valued data arises in numerical geometric integration [23], diffusion tensor interpolation [7] and, more recently, in fast online methods for reduced-order models [5]. In a Stanford press release associated with the aeronautical engineering publications [3-5], it is emphasized that being able to accurately and efficiently interpolate on manifolds is key to fast online prediction of aerodynamic flutter, which, in turn, may help save the lives of pilots and passengers.
In the aforementioned references it was implicitly assumed that the data points (f(ξ_i))_{i∈I} ⊂ M are close enough so that they can all be mapped to a single tangent space T_p M by the inverse exponential map log. In these previous works the base point p ∈ M is typically one of the (f(ξ_i))_{i∈I}, and the choice can be quite arbitrary. In this setting, the problem simply reduces to a linear approximation problem on the tangent space T_p M. To approximate the value f(x) ∈ M, use any standard linear method (polynomial, spline, radial basis function etc.) to interpolate the values (log(p, f(ξ_i)))_{i∈I} ⊂ T_p M at the abscissae (ξ_i)_{i∈I}, then evaluate the interpolant Q at the desired value x, and finally apply the exponential map to get the approximation f(x) ≈ exp(p, Q(x)).
This 'push-interpolate-pull' technique only works when all the available data points (f(ξ_i))_{i∈I} fall within the injectivity radius of the point p, and the method only provides an approximation for f(x) if x is near p. In this case the problem is local, and the topology of the manifold plays no role. One may then ask what difference it would make to use the push-interpolate-pull approach with the exponential map replaced by an arbitrary chart. With the exponential map, the push-interpolate-pull method respects the symmetry, if any, of the manifold. However, the practical advantage of the latter is unclear, and if respecting symmetry is not the main concern, one is free to replace the exponential map by a retraction (see, e.g., [1,11]) on the manifold.
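To make the push-interpolate-pull technique concrete, the following is a minimal numpy sketch on the unit sphere S^2. The function names `sphere_exp`, `sphere_log` and `push_interpolate_pull` are our own, and piecewise-linear interpolation via `np.interp` stands in for "any standard linear method"; the paper does not prescribe an implementation. The data are lifted to the tangent space T_p M at a fixed base point p, interpolated linearly there, and mapped back:

```python
import numpy as np

def sphere_exp(p, v):
    # Exponential map on the unit sphere: move geodesic distance |v|
    # along the great circle through p in direction v (v tangent at p).
    nv = np.linalg.norm(v)
    if nv < 1e-15:
        return p.copy()
    return np.cos(nv) * p + np.sin(nv) * v / nv

def sphere_log(p, q):
    # Inverse exponential map: the tangent vector at p pointing to q.
    w = q - np.dot(p, q) * p          # project q onto the tangent space at p
    nw = np.linalg.norm(w)
    if nw < 1e-15:
        return np.zeros_like(p)
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)) * w / nw

def push_interpolate_pull(p, xi, values, x):
    # Push the data to T_p M, interpolate linearly there, pull back to M.
    tangent = np.array([sphere_log(p, q) for q in values])
    interp = np.array([np.interp(x, xi, tangent[:, d])
                       for d in range(tangent.shape[1])])
    return sphere_exp(p, interp)
```

As noted above, this only works when all samples lie within the injectivity radius of p; on the sphere, no sample may be antipodal to p.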
The problem is more challenging when the available data points (f(ξ_i))_{i∈I} ⊂ M are scattered over different parts of the manifold and the manifold has a nontrivial topology. In this setting, we desire an approximation method with the following properties: (i) The method is well-defined as long as the scattered data sites are reasonably close locally, but can otherwise be far apart globally. (ii) The approximant should provide a decent approximation order when f is sufficiently smooth. Since standard approximation theory (see for example Chapter 3 of [9]) tells us that for linear data there are methods which provide an accuracy of O(h^m) when f is C^m smooth, where h = min_{i≠j} |ξ_i − ξ_j| is the global meshwidth, it is natural to ask for a method for manifold-valued data with the same accuracy order. (iii) The approximant itself should be continuous.
(iv) The approximant should be efficiently computable for any given x.
Such a method ought to 'act locally but think globally', meaning that the approximating value should only depend on the data f (ξ i ) for ξ i near x, but yet the approximant should be continuous and accurate for all x in the domain. This forces the method to be genuinely nonlinear.
In the shift-invariant setting, i.e. when Ω = R^s and (ξ_i)_{i∈I} = hZ^s, the above problem was first solved successfully by a subdivision technique [13,32] and more recently by a method based on combining a general quasi-interpolant with the Karcher mean [14]. Even more recently, the work [17] used a projection-based approach to generalize B-spline quasi-interpolation for regularly spaced data. In either case, it was shown that an approximation method for linear data can be suitably adapted to manifold-valued data without jeopardizing the accuracy or smoothness properties of the original linear method. This kind of (smoothness or approximation order) equivalence property is analyzed by a method known as proximity analysis. It has also been shown that in certain setups, the smoothness or approximation order equivalence can break down in unexpected ways [10,11,17,30-32].
Note that all the previous work assumes 'structured data' in the sense that samples are taken on a regular grid or the nodes of a triangulation of a domain. However, there exist several applications (for instance in model reduction applications [3][4][5]) where such structured samples are not available but only scattered samples are provided.
The case of scattered data interpolation has not been treated so far and in this paper we provide a solution to the above problem in the multivariate scattered data setting. More precisely we combine the ideas of [14,16] with the classical linear theory of scattered data approximation [29] to arrive at approximants which satisfy (i)-(iv) above.
We give a brief outline of this work. The following Sect. 2 presents a brief overview of approximation results for scattered data approximation of Euclidean data. Then in Sect. 3 we present our generalization of the linear theory to the manifold-valued setting. This section also contains our main result regarding the approximation power of our nonlinear construction. It turns out that our scheme retains the optimal approximation rate expected from the linear case. This is formalized in Theorem 3.5. In this theorem the dependence of the approximation rate on the geometry of M is made explicit and appears in the form of norms of iterated covariant derivatives of the log function of M. To measure the smoothness of an M-valued function we utilize a smoothness descriptor introduced in [16], which forms a natural generalization of Hölder norms to the manifold-valued setting. Our results also hold true for arbitrary choices of retractions. We discuss this extension in Sect. 3.3. Finally in Sect. 4 we present numerical experiments for the approximation of functions with values in the sphere and in the manifold of symmetric positive definite (SPD) matrices. In all cases the approximation results derived in Sect. 3 are confirmed. We also examine an application to the interpolation of reduced order models and compare our method to the method introduced in [6], where it turns out that our method delivers superior approximation power.

Scattered data approximation in linear spaces
In this section we present classical results concerning the approximation of scattered data in Euclidean space. Our exposition mainly follows the monograph [29].

General setup
We start by describing a general setup for scattered data approximation. Let Ω ⊂ R^s be an open domain and Ω̄ its closure. Let Ξ = (ξ_i)_{i∈I} ⊂ Ω̄ ⊂ R^s be a set of sites and Φ := (φ_i)_{i∈I} ⊂ C_c(Ω̄, R) a set of basis functions, where C_c(Ω̄, R) denotes the set of compactly supported real-valued continuous functions on Ω̄. The linear quasi-interpolation procedure, applied to a continuous function f : Ω̄ → R, is defined as

Q f(x) := Σ_{i∈I} φ_i(x) f(ξ_i).    (1)

Essential for the approximation power of the operator Q is the property that polynomials up to a certain degree are reproduced:

Σ_{i∈I} φ_i(x) π(ξ_i) = π(x) for all x ∈ Ω̄ and all π ∈ P_k(R^s),    (2)

where P_k(R^s) denotes the space of polynomials of (total) degree ≤ k on R^s.
If (Ξ, Φ) reproduces polynomials of degree k ≥ 0 then necessarily

Σ_{i∈I} φ_i(x) = 1 for all x ∈ Ω̄.    (3)

Definition 2.2 For x ∈ Ω̄ let I(x) := {i ∈ I : φ_i(x) ≠ 0}. We define the local meshwidth h : Ω̄ → R_{≥0} by

h(x) := max_{i∈I(x)} |x − ξ_i|,

where |·| denotes the Euclidean norm.
The following example gives a simple procedure to interpolate univariate data with polynomial reproduction degree 1.

Example 2.3 Let Ω = (0, 1), 0 = ξ_1 < ξ_2 < ··· < ξ_n = 1 and

φ_i(x) := (x − ξ_{i−1})/(ξ_i − ξ_{i−1}) for x ∈ [ξ_{i−1}, ξ_i], φ_i(x) := (ξ_{i+1} − x)/(ξ_{i+1} − ξ_i) for x ∈ [ξ_i, ξ_{i+1}], and φ_i(x) := 0 otherwise

(with the obvious modifications for i = 1 and i = n). These functions, also known as hat functions, reproduce polynomials up to degree 1. The local meshwidth for x ∈ [ξ_i, ξ_{i+1}] is h(x) = max(x − ξ_i, ξ_{i+1} − x).

We define the C^k seminorm of f : Ω → R on U ⊂ Ω by

|f|_{C^k(U,R)} := max_{|l|=k} sup_{x∈U} |∂^l f(x)|,

where for l = (l_1, ..., l_s) ∈ N^s we define |l| := Σ_{i=1}^s l_i and ∂^l := ∂_1^{l_1} ··· ∂_s^{l_s}. Additionally we define ∂^l f = f if l = (0, ..., 0). To estimate the approximation error at x ∈ Ω we use the set

Ω_x := conv({x} ∪ {ξ_i : i ∈ I(x)}),

the convex hull of x and the sites whose basis functions do not vanish at x. Note that if Ω is convex we have Ω_x ⊂ Ω for all x ∈ Ω. The following result gives a bound for the approximation error at x ∈ Ω in terms of the local meshwidth, the polynomial reproduction degree and the C^k seminorm of f on Ω_x.
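The hat functions of Example 2.3 can be sketched in a few lines of numpy (the identifiers `hat_basis` and `quasi_interpolant` are ours, not the paper's): they form a partition of unity and reproduce polynomials of degree 1, so the quasi-interpolant (1) is exact on affine functions.

```python
import numpy as np

def hat_basis(xi, x):
    # Hat functions on the nodes xi: piecewise linear, phi_i(xi_j) = delta_ij.
    phis = []
    for i in range(len(xi)):
        phi = np.zeros_like(x, dtype=float)
        if i > 0:                                  # rising flank on [xi_{i-1}, xi_i]
            m = (x >= xi[i - 1]) & (x <= xi[i])
            phi[m] = (x[m] - xi[i - 1]) / (xi[i] - xi[i - 1])
        if i < len(xi) - 1:                        # falling flank on [xi_i, xi_{i+1}]
            m = (x >= xi[i]) & (x <= xi[i + 1])
            phi[m] = (xi[i + 1] - x[m]) / (xi[i + 1] - xi[i])
        phis.append(phi)
    return np.array(phis)

def quasi_interpolant(xi, fvals, x):
    # Q f(x) = sum_i phi_i(x) f(xi_i), Eq. (1).
    return hat_basis(xi, x).T @ fvals
```

On any affine function the output coincides with the input, illustrating reproduction degree 1.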
Theorem 2.4 Let Ω ⊂ R^s be an open domain, k > 0 a positive integer, Ξ = (ξ_i)_{i∈I} ⊂ Ω̄ a set of sites and Φ = (φ_i)_{i∈I} ⊂ C_c(Ω̄, R). Assume that (Ξ, Φ) reproduces polynomials of degree smaller than k. Then there exists a constant C > 0, depending only on k and s, such that for all f ∈ C^k(Ω, R) and x ∈ Ω with Ω_x ⊂ Ω we have

|f(x) − Q f(x)| ≤ C h(x)^k |f|_{C^k(Ω_x,R)} sup_{i∈I(x)} |φ_i(x)| |I(x)|.    (4)

We omit a proof here, as a more general theorem will be proven in the next section. If Ω_x ⊄ Ω, one can consider an extension operator of f; then the estimate (4) holds true with a constant C also depending on the domain (see Remark 3.6). In contrast to other results, such as those in [29], Theorem 2.4 poses no restrictions on the sites Ξ = (ξ_i)_{i∈I}. In the following subsection we will show how the approximation results in [29] follow as a corollary of Theorem 2.4.

Moving least squares
In this subsection, we will follow [29] and show how to construct a set of compactly supported basis functions Φ = (φ_i)_{i∈I} with polynomial reproduction degree k for a set of sites Ξ = (ξ_i)_{i∈I} ⊂ Ω̄ ⊂ R^s which is (k, δ)-unisolvent.
For every i ∈ I and x ∈ Ω̄ we define

φ_i(x) := α((x − ξ_i)/δ) p(ξ_i),    (5)

where α is a compactly supported weight function and p ∈ P_k(R^s) is chosen such that the basis functions Φ = (φ_i)_{i∈I} satisfy the polynomial reproduction property (2) of degree k, i.e.

Σ_{i∈I} α((x − ξ_i)/δ) p(ξ_i) q(ξ_i) = q(x) for all q ∈ P_k(R^s).    (6)
As Ξ is (k, δ)-unisolvent, the left-hand side of (6) can be regarded as an inner product of p and q. Furthermore, q → q(x) is a linear functional on P_k(R^s). Hence, by the Riesz representation theorem, there exists a unique polynomial p ∈ P_k(R^s) satisfying (6). Note that, after choosing a basis of P_k(R^s), one can compute p by solving a linear system of equations. Hence we can compute φ_i(x) in reasonable time, and Q satisfies (iv) from the introduction. In [29] it is shown that the function φ_i has the same smoothness as α. This shows that Q satisfies (iii) from the introduction.
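The construction above can be sketched in numpy for the univariate case (a sketch with our own identifiers; the Wendland-type weight `alpha` is one common choice of compactly supported weight, not mandated by the text). Choosing the monomial basis of P_k(R), the representer p is obtained by solving the (k+1)-dimensional Gram system corresponding to (6):

```python
import numpy as np

def mls_basis(xi, x, delta, k=1):
    # Moving least squares basis values phi_i(x) with polynomial
    # reproduction degree k (univariate sketch, monomial basis).
    alpha = lambda r: np.maximum(0.0, 1.0 - r) ** 4 * (4.0 * r + 1.0)  # Wendland-type weight
    w = alpha(np.abs(x - xi) / delta)             # compact support of radius delta
    V = np.vander(xi, k + 1, increasing=True)     # q_j(xi_i) = xi_i ** j
    G = V.T @ (w[:, None] * V)                    # Gram matrix of the weighted inner product
    c = np.linalg.solve(G, x ** np.arange(k + 1)) # Riesz representer coefficients
    return w * (V @ c)                            # phi_i(x) = w_i * p(xi_i)
```

By construction, sum rules Σ_i φ_i(x) q(ξ_i) = q(x) hold for every monomial q of degree ≤ k, which is exactly the reproduction property (2).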
The theory in [29] considers a quasi-uniform set of sites as defined below.
A pair (Ξ, δ) is called quasi-uniform with constant c > 0 if δ ≤ c q_Ξ, where q_Ξ is the separation distance defined by

q_Ξ := (1/2) min_{i≠j} |ξ_i − ξ_j|.

For a shorter notation we introduce the symbol [n] := {1, ..., n}.
The following theorem is the main result of [29] regarding the approximation with moving least squares.
Theorem 2.9 Let Φ = (φ_i)_{i∈I} be the basis functions as defined in (5). Let δ > 0 and assume that (Ξ, δ) is quasi-uniform with constant c > 0. Then there exists a constant C > 0, depending only on c, k and s, such that for all f ∈ C^k(Ω, R) and x ∈ Ω with Ω_x ⊂ Ω,

|f(x) − Q f(x)| ≤ C h(x)^k |f|_{C^k(Ω_x,R)}.

We show that Theorem 2.9 is a consequence of Theorem 2.4. By [29, Theorem 4.7(2)], sup_{i∈I(x)} |φ_i(x)| can be bounded by a constant depending only on k, c and s. We now show that |I(x)| can be bounded by a constant depending only on c and s. Note that |x − ξ_i| ≤ δ for all i ∈ I(x). Hence the pairwise disjoint balls with centers (ξ_i)_{i∈I(x)} and radii q_Ξ lie in the ball with center x and radius δ + q_Ξ. As the fraction (δ + q_Ξ)/q_Ξ is bounded from above by 1 + c, we conclude that sup_{x∈Ω} |I(x)| is bounded by the number of balls of radius 1 that fit into a ball of radius 1 + c. In the following we use the letter C as a symbol for a constant whose value may change from equation to equation. By Theorem 2.4 we obtain

|f(x) − Q f(x)| ≤ C h(x)^k |f|_{C^k(Ω_x,R)} sup_{i∈I(x)} |φ_i(x)| |I(x)| ≤ C h(x)^k |f|_{C^k(Ω_x,R)}.

Scattered data approximation in manifolds
The present section extends Theorems 2.4 and 2.9 to the case of manifold-valued functions f : Ω → M with a Riemannian manifold M. First, in Sect. 3.1 we present a geometric generalization of the operator Q to the manifold-valued case. The construction is based on the idea of replacing affine averages by Riemannian means, as studied e.g. in [21]. Subsequently, in Sect. 3.2 we study the resulting approximation error. Finally, in Sect. 3.3 we extend our results to more general notions of geometric average, induced by arbitrary retractions as studied e.g. in [15].

Definition of Riemannian moving least squares
In this section we consider a Riemannian manifold M with distance metric d : M × M → R ≥0 and metric tensor g, inducing a norm | · | g( p) on the tangent space T p M at p ∈ M.
Extending the classical theory which we briefly described in Sect. 2, we now aim to construct approximation operators for functions f : Ω → M. We follow the ideas of [14,16,26-28], where the sum in (1) is interpreted as a weighted mean of the data points (f(ξ_i))_{i∈I(x)}. Due to (3) this interpretation is justified.
For weights Γ = (γ_i)_{i∈I} ⊂ R with Σ_{i∈I} γ_i = 1 and data points P = (p_i)_{i∈I} ⊂ M we can define the Riemannian average

av_M(Γ, P) := argmin_{p∈M} Σ_{i∈I} γ_i d(p, p_i)^2.    (7)

One can show [21,28] that av_M(Γ, P) is a well-defined operation if the diameter of the set P is small enough:

Theorem 3.1 Let p_0 ∈ M and denote for ρ > 0 by B_ρ the geodesic ball of radius ρ around p_0. Then there exist 0 < ρ_1 ≤ ρ_2 < ∞, depending only on Σ_{i∈I} |γ_i| and the geometry of M, such that for all points P = (p_i)_{i∈I} ⊂ B_{ρ_1} the functional Σ_{i∈I} γ_i d(p, p_i)^2 assumes a unique minimum in B_{ρ_2}.
Whenever the assumptions of Theorem 3.1 hold true, the Riemannian average is uniquely determined by the first order condition

Σ_{i∈I} γ_i log(av_M(Γ, P), p_i) = 0,    (8)

which could alternatively be taken as a definition of the weighted average in M, see also [28].
We can now define an M-valued analogue of Eq. (1).

Definition 3.2 For x ∈ Ω̄ we define

Q_M f(x) := av_M((φ_i(x))_{i∈I}, (f(ξ_i))_{i∈I}).
It is clear that in the linear case this definition coincides with (1). Furthermore, it is easy to see that the smoothness of the basis functions Φ = (φ_i)_{i∈I} is inherited by the approximation procedure Q_M, see e.g. [27]. Hence our approximation operator Q_M satisfies (iii) from the introduction.

Remark 3.3
We wish to emphasize that the approximation procedure as defined in Definition 3.2 is completely geometric in nature. In particular, it is invariant under isometries of M. In mechanics this leads to the desirable property of objectivity. The push-interpolate-pull method described in the introduction does not, in general, have this property.

Approximation error
We now wish to assess the approximation error of the nonlinear operator Q M and generalize Theorem 2.4 to the M-valued case.

The smoothness descriptor of manifold-valued functions
Two basic things need to be considered to that end. First we need to decide how to measure the error between the original function f and its approximation Q_M f; this will be done pointwise, using the geodesic distance d(f(x), Q_M f(x)). Second we need a notion of smoothness for M-valued functions based on covariant derivatives. In local coordinates, the covariant partial derivative D_i along f of a vector field v along f reads

(D_i v)^r = ∂_i v^r + Γ^r_{jk} (∂_i f^j) v^k,

where we sum over repeated indices and denote by Γ^r_{jk} the Christoffel symbols associated to the metric of M [8]. For iterated covariant derivatives we introduce the symbol D^l f, which means covariant partial differentiation along f with respect to the multi-index l in the sense that

D^l f := (D_1)^{l_1} ··· (D_s)^{l_s} f.    (9)

Note that (9) differs from the usual multi-index notation, which cannot be used here because covariant partial derivatives do not commute. The smoothness descriptor of an M-valued function is defined as follows.

Definition 3.4 [Smoothness descriptor]
For a function f : Ω → M, k ≥ 1 and U ⊂ Ω we define the k-th order smoothness descriptor Θ_{k,U}(f). The smoothness descriptor as defined above represents a geometric analogue of the classical notions of Hölder norms and seminorms. Note that, even in the Euclidean case (for instance M = R), the expression Θ_{k,U}(f) is not equal to the Hölder seminorm |f|_{C^k(U,R)}, as additional terms are present in the definition of Θ_{k,U}(f). But each of the two quantities can be bounded in terms of the other. In the proof of Theorem 3.5 it will become clear why the additional terms in Θ_{k,U}(f) are needed in the case of general M where, in contrast to M = R, higher order covariant derivatives of the logarithm mapping need not vanish; compare also Remark 3.7 below.

Further geometric quantities
Coming back to the anticipated generalization of Theorem 2.4, we also aim to quantify exactly to what extent the approximation error depends on the geometry of M. To this end let log(p, ·) : M → T_p M be the inverse of the exponential map at p. Denote by ∇_1, ∇_2 the covariant derivatives of a bivariate function with respect to the first and second argument, respectively. In particular, for l ∈ N we will require bounds on the iterated derivatives ∇_2^l log of the logarithm mapping with respect to its second argument.

Main approximation result
We now state and prove our main result.
Theorem 3.5 Let Ω ⊂ R^s be an open domain, k > 0 a positive integer, Ξ = (ξ_i)_{i∈I} ⊂ Ω̄ ⊂ R^s a set of sites and Φ = (φ_i)_{i∈I} ⊂ C_c(Ω̄, R). Assume that (Ξ, Φ) reproduces polynomials of degree smaller than k. Then there exists a constant C > 0, depending only on k and s, such that for all f ∈ C^k(Ω, M) and x ∈ Ω with Ω_x ⊂ Ω and Θ_{k,Ω_x}(f) < ∞ we have

d(Q_M f(x), f(x)) ≤ C h(x)^k Θ_{k,Ω_x}(f) sup_{i∈I(x)} |φ_i(x)| |I(x)| C_M,    (10)

where the constant C_M depends only on the norms of the iterated covariant derivatives ∇_2^l log, 1 ≤ l ≤ k, of the logarithm mapping on f(Ω_x).

Remark 3.6 If Ω_x ⊄ Ω we can consider an extension operator (see e.g. [19]) E_x : C^k(Ω_x ∩ Ω, M) → C^k(Ω_x, M). Then the estimate (10) with f(y) replaced by E_x f(y) and a constant C also depending on the domain holds true.
Proof We shall use the balance law (8), which implies that we can write the error ε(x) in terms of the tangent vectors log(Q_M f(x), f(ξ_i)). Now we consider the function G : Ω × Ω → TM defined by G(x, y) := log(Q_M f(x), f(y)). Since, for fixed x ∈ Ω, the function G maps into a linear space, we can perform a Taylor expansion of G in y around the point (x, x) ∈ Ω × Ω and obtain (13), where for any l = (l_1, ..., l_s) ∈ N^s and z = (z_1, ..., z_s) ∈ R^s we define z^l := z_1^{l_1} ··· z_s^{l_s}. We insert (13) into (11) and get the expression (15) for ε(x). Exchanging the order of summation in (15) yields the decomposition into two terms (I) and (II).

We will show that (I) = 0 and (II) = O(h(x)^k), which implies our claim. Let us start by showing that (I) = 0. As a first observation we note that, due to (3), we can write (I) as a sum of terms (I_l). We claim that (I_l) = 0 for all l ∈ N^s with |l| < k. Indeed, pick x* ∈ Ω arbitrary. Then, by the polynomial reproduction property (2), we get (16). Setting x* = x in (16) proves the desired claim. We now move on to prove our second claim, namely that (II) = O(h(x)^k). To this end we need to estimate, for any l ∈ N^s with |l| = k, the quantity R_l defined in (14); we consider, for fixed l and i ∈ I(x), the quantity R_l(x, ξ_i). In the following we use the letter C as a symbol for a constant whose value may change from equation to equation. Inserting Definition (12) and using the chain rule we obtain an expression for R_l(x, ξ_i), which can be estimated by (18). Inserting estimate (18) into the definition (14) of R_l we get (19). Finally, putting (19) into (17) and summing over all l ∈ N^s with |l| = k, we get the desired estimate.
Two remarks are in order regarding Theorem 3.5.
Remark 3.7 Clearly, in the linear case higher order (i.e. higher than order 1) derivatives of the logarithm mapping log( p, q) = q − p vanish. Using this fact it is easy to see that our proof of Theorem 3.5 reduces to Theorem 2.4 in the linear case.

Remark 3.8
The estimate in Theorem 3.5 completely separates the error contributions of f and of the geometry of M. We thus see that the only geometric quantity which influences the approximation consists of iterated covariant derivatives of the logarithm mapping.
Using Theorem 3.5 we can now state and prove a geometric generalization of Wendland's main theorem on moving least squares approximation, i.e., Theorem 2.9.

Theorem 3.9 Let Φ = (φ_i)_{i∈I} be the basis functions as defined in (5), let δ > 0 and assume that (Ξ, δ) is quasi-uniform with constant c > 0. Then there exists a constant C > 0, depending only on c, k, s and the geometry of M, such that for all f ∈ C^k(Ω, M) and x ∈ Ω with Ω_x ⊂ Ω and Θ_{k,Ω_x}(f) < ∞ we have

d(Q_M f(x), f(x)) ≤ C h(x)^k Θ_{k,Ω_x}(f).
Proof The proof proceeds exactly as the proof of Theorem 2.9, using Theorem 3.5 instead of Theorem 2.4.
Our approximation operator Q M satisfies (i) and (ii) from the introduction.

Generalization to retraction pairs
The computation of the quasi-interpolant Q_M f(x) requires the efficient computation of the exponential and logarithm mapping of M. For many practical examples of M this is not an issue; however, in certain cases (for instance the Stiefel manifold [15]) it is computationally expensive to compute the exponential or logarithm function of a given manifold. Then, alternative functions can sometimes be used. This idea is formalized by the concept of retraction pairs.

Definition 3.10 ([15], see also [1,12]) A pair (P, Q) of smooth functions P : TM → M and Q : M × M → TM, with Q(x, ·) mapping into T_x M, is called a retraction pair if

P(x, Q(x, y)) = y for all x, y ∈ M, and Q(x, x) = 0 for all x ∈ M.

In general P may only be defined locally around the zero section of TM, and Q around the diagonal of M × M.

Example 3.11 Certainly, the pair (exp, log) satisfies the above assumptions [8], and therefore forms a retraction pair. Let S^m = {x ∈ R^{m+1} : |x| = 1} be the m-dimensional sphere. Here we can define a retraction pair (P, Q) by

P(x, v) := (x + v)/|x + v| and Q(x, y) := y/⟨x, y⟩ − x,

where ⟨·, ·⟩ is the standard inner product. We refer to [1] for further examples of retraction pairs for several manifolds of practical interest.
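The sphere retraction pair of Example 3.11 can be checked numerically with a few lines of numpy (a sketch; the names `P` and `Q` follow the notation of Definition 3.10):

```python
import numpy as np

def P(x, v):
    # Projection-based retraction on the sphere: normalize x + v.
    y = x + v
    return y / np.linalg.norm(y)

def Q(x, v):
    # Corresponding inverse retraction: Q(x, y) = y / <x, y> - x is the
    # tangent vector at x with P(x, Q(x, y)) = y (for <x, y> > 0).
    return v / np.dot(x, v) - x
```

Note that Q(x, y) is indeed tangent at x, since ⟨x, y/⟨x, y⟩ − x⟩ = 1 − 1 = 0, and Q(x, x) = 0, matching the definition.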
Given a retraction pair (P, Q), we can construct generalized quasi-interpolants Q_{(P,Q)} f(x) based on the first order condition (8), which defines a geometric average based on (P, Q) via

Σ_{i∈I} γ_i Q(av_{P,Q}(Γ, P), p_i) = 0.

The results in [15] show that this expression is locally well-defined. The construction above allows us to define a geometric quasi-interpolant based on an arbitrary retraction pair as follows.

Definition 3.12 Given a retraction pair (P, Q), we define

Q_{(P,Q)} f(x) := av_{P,Q}((φ_i(x))_{i∈I}, (f(ξ_i))_{i∈I}).
The following generalization of Theorem 3.5 holds true: under the assumptions of Theorem 3.5, for all f ∈ C^k(Ω, M) and x ∈ Ω with Ω_x ⊂ Ω and Θ_{k,Ω_x}(f) < ∞, the estimate of Theorem 3.5 holds with Q_M replaced by Q_{(P,Q)} and log replaced by Q. Proof The proof is completely analogous to the proof of Theorem 3.5 with log replaced by Q.

Numerical examples
In this section we describe the implementation of our approximation operator Q_M and applications. We first explain how Riemannian averages are computed, then present interpolation experiments for sphere-valued and SPD-valued functions, and finally an application to the approximation of reduced order models.

Computation of Riemannian averages
In this section we briefly explain how the Riemannian average can be computed. For data points P = (p_i)_{i∈I} ⊂ M with weights (γ_i)_{i∈I} ⊂ R we use the iteration φ : M → M defined by

φ(p) := exp(p, Σ_{i∈I} γ_i log(p, p_i)).    (24)

By (1.5.1)-(1.5.3) of [21], for data points close to each other this iteration converges linearly to the Riemannian average, with convergence rate bounded by sup_{i,j∈I} d^2(p_i, p_j) times a factor that depends only on the sectional curvature of M.
Hence, the convergence rate is bounded by |f|^2_{C^1(Ω,M)} h(x)^2 times a constant depending only on the sectional curvature of M. Therefore, our approximation operator Q_M satisfies (iv) from the introduction. To perform the iteration (24) we only need to know the exponential and logarithm map of the manifold. For the manifolds of practical interest, explicit expressions for log and exp are available. For the sphere we have, for example,

exp(p, v) = cos(|v|) p + (sin(|v|)/|v|) v and log(p, q) = arccos(⟨p, q⟩) (q − ⟨p, q⟩ p)/|q − ⟨p, q⟩ p|.

With the usual metric on the space of SPD matrices (see e.g. [24]) the exponential and logarithm map are

exp(P, Q) = P^{1/2} Exp(P^{−1/2} Q P^{−1/2}) P^{1/2} and log(P, Q) = P^{1/2} Log(P^{−1/2} Q P^{−1/2}) P^{1/2},
where Exp denotes the matrix exponential and Log the matrix logarithm. See [20,25] for a survey of different methods for the computation of the Karcher mean of SPD matrices.
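For the sphere, the fixed-point iteration (24) can be sketched in numpy as follows (identifiers are ours; `karcher_mean` simply iterates p ← exp(p, Σ_i γ_i log(p, p_i)) a fixed number of times rather than testing a stopping criterion):

```python
import numpy as np

def sphere_exp(p, v):
    # exp(p, v) = cos(|v|) p + sin(|v|) v / |v| on the unit sphere.
    nv = np.linalg.norm(v)
    return p if nv < 1e-15 else np.cos(nv) * p + np.sin(nv) * v / nv

def sphere_log(p, q):
    # log(p, q) = arccos(<p, q>) (q - <p, q> p) / |q - <p, q> p|.
    w = q - np.dot(p, q) * p
    nw = np.linalg.norm(w)
    if nw < 1e-15:
        return np.zeros_like(p)
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)) * w / nw

def karcher_mean(points, weights, iters=50):
    # Fixed-point iteration (24): p <- exp(p, sum_i gamma_i log(p, p_i)).
    p = points[0]
    for _ in range(iters):
        v = sum(g * sphere_log(p, q) for g, q in zip(weights, points))
        p = sphere_exp(p, v)
    return p
```

For data clustered in a small geodesic ball, the residual Σ_i γ_i log(p, p_i) of the first order condition (8) drops to machine precision after a handful of iterations, in line with the linear convergence rate quoted above.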
The exponential and logarithm map on the space of invertible matrices are

exp(A, V) = A Exp(A^{−1} V) and log(A, B) = A Log(A^{−1} B).    (25)

An alternative way to compute the Karcher mean is to use Newton-like methods. See [22] for the computation of the Karcher mean on the sphere and the space of orthogonal matrices using Newton-like methods.

Interpolation of a sphere-valued and a SPD-valued function
In this section we present two examples for the interpolation of manifold-valued functions. For both interpolations we use the hat functions defined in Example 2.3; then Q_M f is a piecewise geodesic function. Figures 1 and 2 illustrate that the approximation error can be bounded by a constant times the squared local meshwidth, as stated in Theorem 3.9.
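With hat functions only two basis functions are active at any x, so Q_M f(x) reduces to a two-point Riemannian average, i.e. a point on the geodesic connecting the two neighboring samples. On the sphere this admits the closed form below (a sketch with our own names; `slerp` is the standard great-circle interpolation formula):

```python
import numpy as np

def slerp(p, q, t):
    # Great-circle interpolation on the unit sphere; for hat weights the
    # Riemannian average of two points reduces to this closed form.
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    if theta < 1e-15:
        return p.copy()
    return (np.sin((1 - t) * theta) * p + np.sin(t * theta) * q) / np.sin(theta)

def piecewise_geodesic(xi, values, x):
    # Q_M f with hat functions: geodesic between the two neighboring samples.
    j = np.searchsorted(xi, x)
    j = min(max(j, 1), len(xi) - 1)
    t = (x - xi[j - 1]) / (xi[j] - xi[j - 1])
    return slerp(values[j - 1], values[j], t)
```

This makes explicit why Q_M f is piecewise geodesic in this setting: between consecutive sites the approximant traces the connecting great-circle arc.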

Approximation of reduced order models (ROMs)
In [6], Amsallem and Farhat present a fast method for solving parameter dependent linear time-invariant (LTI) systems. They assume that the solution is known for a few sets of parameters and interpolate the matrices of the corresponding reduced order models to get the matrices of a reduced order model for a new set of parameters. Some of these matrices live in a nonlinear space. They choose the 'push-interpolate-pull' technique to interpolate the data. We present an adaptation of this method, based upon the theory in this paper, for approximating reduced order models (ROMs). As we will see, in some cases our adaptation yields reliable results where the original ROM approximation method fails. We start by introducing linear time-invariant systems. Then we present the ROM approximation method and our adaptation.

Linear time-invariant systems
In a parameter dependent LTI system as in [6] we assume that for each x ∈ Ω there exists a unique solution z_x of

w_x'(t) = A(x) w_x(t) + B(x) u(t),    (26)
z_x(t) = C(x) w_x(t) + D(x) u(t),

where A : Ω → GL(n), with GL(n) the set of invertible matrices of size n × n, B : Ω → R^{n×p}, C : Ω → R^{q×n}, D : Ω → R^{q×p} and u : [0, T] → R^p. Typically z_x is an output functional of a dynamical system with control function u and n ≫ p, q. Furthermore we assume that A, B, C and D are continuous. For k < n we define the compact Stiefel manifold by

St(n, k) := {V ∈ R^{n×k} : V^T V = I_k},

where I_k denotes the k × k identity matrix. Let U, V : Ω → St(n, k) define test and trial bases, respectively. The state vector w_x(t) will be approximated as a linear combination of the column vectors of V(x), i.e. w_x(t) ≈ V(x) w̃_x(t), where w̃_x is defined by substituting w_x by V(x) w̃_x and multiplying Eq. (26) from the left by U(x)^T. Hence we get the system of equations

U^T V w̃_x'(t) = U^T A V w̃_x(t) + U^T B u(t),    (28)

where all operations on matrix-valued functions are defined pointwise. Multiplying Eq. (28) from the left by (U^T V(x))^{−1} yields a new LTI system with matrices Ã : Ω → R^{k×k}, B̃ : Ω → R^{k×p}, C̃ : Ω → R^{q×k} and D̃ : Ω → R^{q×p} given by

Ã = (U^T V)^{−1} U^T A V, B̃ = (U^T V)^{−1} U^T B, C̃ = C V, D̃ = D.

As we are aiming for a fast method, the running time should be independent of n. In addition to the matrices (A(ξ_i), B(ξ_i), C(ξ_i)) we can also use precomputed matrices of size n.
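The Petrov-Galerkin reduction described above amounts to a few matrix products; a minimal numpy sketch (the helper name `reduce_lti` is ours; it returns the reduced matrices (U^T V)^{-1} U^T A V, (U^T V)^{-1} U^T B, C V and D):

```python
import numpy as np

def reduce_lti(A, B, C, D, U, V):
    # Petrov-Galerkin reduction: substitute w ~ V w_r and multiply the
    # state equation from the left by (U^T V)^{-1} U^T.
    E = np.linalg.inv(U.T @ V)
    return E @ U.T @ A @ V, E @ U.T @ B, C @ V, D
```

When U = V has orthonormal columns this reduces to the familiar Galerkin projection Ã = V^T A V.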

The ROM approximation method
We sketch the method proposed by Amsallem and Farhat in [6]. The algorithm is divided into two steps. In the first step we construct a continuous function V_c : Ω → St(n, k) such that U^T(x) V_c(x) is invertible for all x ∈ Ω. In [6] the matrix V_0 is chosen as V_0 := V(ξ_i) for some i ∈ I. Then we define f by

f(V)(x) := V(x) π_{O(k)}(V(x)^T V_0),

where π_{O(k)} denotes the closest point projection onto the orthogonal group O(k). To compute the closest point projection onto O(k) in a stable way, one can use the iteration (see e.g. [18])

Y_{j+1} = (Y_j + Y_j^{−T})/2.

The next lemma shows that f can be regarded as a map into St(n, k)/O(k).
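The stabilized projection iteration quoted above (from [18]) is the Newton iteration for the orthogonal polar factor; a small numpy sketch (function name ours):

```python
import numpy as np

def closest_orthogonal(Y, iters=30):
    # Newton iteration Y <- (Y + Y^{-T}) / 2: for invertible Y this
    # converges quadratically to the orthogonal polar factor, i.e. the
    # closest orthogonal matrix to Y in the Frobenius norm.
    for _ in range(iters):
        Y = 0.5 * (Y + np.linalg.inv(Y).T)
    return Y
```

The limit agrees with the factor U V^T of the singular value decomposition Y = U Σ V^T, which is the standard closed-form expression for the closest orthogonal matrix.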
Proof By the definition of the equivalence relation (36) there exists Q : Ω → O(k) such that Ṽ = V Q. Note that it is enough to prove that f(Ṽ)(x) = f(V)(x) for all x ∈ Ω. By left invariance of the closest point projection with respect to orthogonal matrices, this holds. Next we note that f(V) has the same smoothness as V. Replacing V by V_c = f(V) in (32) we can define continuous matrix-valued functions A_c, B_c, C_c for the reduced order models for all x ∈ Ω. In step 2 the data A_c(x) is approximated with respect to the space GL(k) [see (25) for the logarithm and exponential map on GL(k)].
The ROM approximation algorithm interpolates the data A_c(ξ_i) using the 'push-interpolate-pull' technique, i.e. it chooses an i_0 ∈ I and maps the data A_c(ξ_i) by the logarithm map log with base point A_c(ξ_{i_0}) to the tangent space at A_c(ξ_{i_0}). Then the new data is approximated with a method for linear spaces and finally mapped back to the manifold by the exponential map exp.
The data B c (x) and C c (x) is approximated with a method for linear spaces.
We present an adaptation of the ROM approximation algorithm by using the approximation operator Q_M defined in Definition 3.2 to interpolate the data A_c(x) in GL(k).

Algorithm 1:
Step 2 of the ROM approximation algorithm using the 'push-interpolate-pull' technique

Numerical experiments
In Section 5.1 of [6] a simple academic example is shown where the ROM approximation method yields good results. In the next example we can only get a reasonable approximation if we use our adaptation. The errors in the Frobenius norm for the ROM approximation and its adapted algorithm are illustrated in Fig. 3. An illustration of why the ROM approximation method has a large error is shown in Fig. 4. As the matrices A(x) are two-dimensional rotation matrices, we can view them as points on a circle. The Riemannian average of points A_1 and A_2 with weights λ_1 = λ_2 = 0.5 is M. However, if we transform the points by the logarithm to the tangent space at A_0, interpolate on this tangent space and transform back by the exponential map, we get a different point M̃ ≠ M. We set U(x) and V(x) equal to the first 2 columns of the orthogonal matrices of the singular value decomposition of A(x). We choose the data sites from Example 2.8 with k = 2. The L^∞-norm of the error made in Step 2 of Algorithm 2 is shown in Fig. 5. As we can see, the convergence rate is as predicted by Theorem 3.9.