Restarting iterative projection methods for Hermitian nonlinear eigenvalue problems with minmax property

In this work we present a new restart technique for iterative projection methods for nonlinear eigenvalue problems admitting a minmax characterization of their eigenvalues. Our technique makes use of the minmax induced local enumeration of the eigenvalues in the inner iteration. In contrast to global numbering, which requires including all the previously computed eigenvectors in the search subspace, the proposed local numbering only requires the presence of one eigenvector in the search subspace. This effectively eliminates the growth of the search subspace and thereby the superlinear increase of the computational cost when a large number of eigenvalues, or eigenvalues in the interior of the spectrum, are to be computed. The new restart technique is integrated into nonlinear iterative projection methods such as the Nonlinear Arnoldi and Jacobi-Davidson methods. The efficiency of our new restart framework is demonstrated on a range of nonlinear eigenvalue problems: quadratic, rational and exponential, including an industrial real-life conservative gyroscopic eigenvalue problem modeling the free vibrations of a rolling tire. We also present an extension of the method to problems without minmax property but with eigenvalues whose real or imaginary parts are dominant, and test it on two quadratic eigenvalue problems.


Introduction
In this work we consider the problem of computing a large number of eigenvalues in an open real interval J ⊂ R, and the corresponding eigenvectors, of the nonlinear eigenvalue problem (NEP)

T(λ)x = 0, (1)

where T(λ) ∈ C^{n×n} is a family of large and sparse Hermitian matrices for every λ ∈ J. We furthermore assume that the eigenvalues of (1) in J can be characterized as minmax values of a Rayleigh functional [35]. Such problems routinely arise in the simulation of acoustic properties of, e.g., vehicles or their parts, in order to minimize the noise exposure to the passengers as well as to the environment. The problem of computing a moderate number of eigenpairs of a nonlinear eigenvalue problem at the lower end of the spectrum has been extensively studied. For problems admitting a minmax characterization a Nonlinear Arnoldi method was suggested in, e.g., [29] and a Jacobi-Davidson method in [4]. For more general nonlinear eigenproblems iterative projection methods were considered in [1,7,11,14-17,19-21,25,31,32,34]. However, the approach in [4,29] hits its limitations if a large number of eigenvalues (in particular in the interior of the spectrum) of (1) is needed. To algorithmically exploit the minmax property, one has to project the problem under consideration onto a sequence of search spaces whose dimension grows with the number of the targeted eigenvalue. For a large number of eigenvalues (or eigenvalues in the interior of the spectrum) this naturally requires an excessive amount of storage and computing time.
In this work we propose a new restart technique which allows projecting the NEP (1) only onto search spaces of a fixed, relatively small dimension throughout the iteration. The new restart technique can be integrated with iterative projection methods such as the Nonlinear Arnoldi or Jacobi-Davidson method, making them capable of computing a large number of eigenpairs, possibly in the interior of the spectrum. A preliminary version of the local restart technique was published in [18].
The paper is organized as follows. In Sects. 2 and 3 we recapitulate the variational characterization for nonoverdamped nonlinear eigenproblems and the iterative projection methods for their solution. The new restart technique is presented in Sect. 4 along with a strategy for dealing with spurious eigensolutions which are an intrinsic part of the interior eigenvalue computation. The resulting framework for restarting of nonlinear iterative projection methods for interior eigenvalue computation is summarized in Sect. 5. The performance of the restarted methods is demonstrated in Sect. 6 on a range of nonlinear eigenvalue problems with and without minmax property including a real-life industrial gyroscopic eigenvalue problem arising in modeling of the noise radiation from rotating tires. Section 7 concludes the paper with a summary and outlook for future research.

Variational characterization of eigenvalues
Definition 1 (Hermitian Nonlinear Eigenvalue Problem) Let T(λ) ∈ C^{n×n} be a family of Hermitian matrices for every λ in an open real interval J. As in the linear case, T(λ) = λI − A, we call the parameter λ ∈ J an eigenvalue of T(·) whenever Eq. (1), T(λ)x = 0, has a nontrivial solution x ≠ 0, which we call an eigenvector corresponding to λ.
It is well known that all eigenvalues of a linear Hermitian problem Ax = λx are real, and if they are ordered, λ_1 ≤ λ_2 ≤ · · · ≤ λ_n, they can be characterized by the minmax principle of Poincaré.

Theorem 1 (Minmax principle of Poincaré) Let λ_1 ≤ λ_2 ≤ · · · ≤ λ_n be the ordered eigenvalues of Ax = λx. Then

λ_k = min_{W ∈ S_k} max_{w ∈ W^1} w^*Aw,

where S_k denotes the set of all k-dimensional subspaces of C^n and W^1 := {w ∈ W : ‖w‖_2 = 1} is the unit sphere in W.
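For a small symmetric matrix the principle can be verified directly: the minimum over all k-dimensional subspaces is attained at the span of the first k eigenvectors, and any other subspace can only increase the maximal Rayleigh quotient. A minimal numpy sketch with random illustrative data (the matrix and k are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                       # random symmetric test matrix
lam, Q = np.linalg.eigh(A)              # ascending eigenvalues, orthonormal eigenvectors

# On W = span of the first k eigenvectors the maximal Rayleigh quotient is the
# largest eigenvalue of the projected matrix, which equals lambda_k.
W = Q[:, :k]
lam_k_via_minmax = np.linalg.eigh(W.T @ A @ W)[0][-1]
assert np.isclose(lam_k_via_minmax, lam[k - 1])

# Any other k-dimensional subspace can only give a larger (or equal) maximum:
V = np.linalg.qr(rng.standard_normal((n, k)))[0]
assert np.linalg.eigh(V.T @ A @ V)[0][-1] >= lam[k - 1] - 1e-12
```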
It turns out that a similar result also holds for a certain class of nonlinear eigenvalue problems.
Definition 2 (Rayleigh functional) Let f(λ; x) := x^*T(λ)x be a real function, continuous in J for every fixed x ≠ 0. Assume that the real equation

f(λ; x) = 0 (3)

has at most one solution p(x) ∈ J; then (3) implicitly defines a functional p on some subset D of C^n\{0}. We refer to p as a Rayleigh functional, since it generalizes the notion of the Rayleigh quotient in the variational characterization of the eigenvalues of the linear problem.
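Numerically, p(x) amounts to scalar root-finding on f(·; x). The sketch below uses a small hypothetical quadratic family T(λ) = λ²M + λC + K (the diagonal matrices and the interval J are illustrative choices, not data from the paper), chosen so that f(·; x) has exactly one root in J for the vectors considered:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical quadratic family T(l) = l^2*M + l*C + K; the second roots of the
# per-coordinate quadratics lie outside J, so the root in J is unique.
M = np.eye(2)
C = np.diag([5.0, 4.5])
K = np.diag([4.0, 2.0])
J = (-2.0, 0.0)

def f(lam, x):
    """f(lam; x) = x^* T(lam) x."""
    return x @ (lam**2 * M + lam * C + K) @ x

def p(x):
    """Rayleigh functional: the root of f(.; x) in J (brentq assumes a sign change)."""
    return brentq(lambda lam: f(lam, x), *J)

# For the coordinate vectors, p returns the corresponding eigenvalue branch in J:
assert np.isclose(p(np.array([1.0, 0.0])), -1.0)   # root of l^2 + 5l + 4 in J
assert np.isclose(p(np.array([0.0, 1.0])), -0.5)   # root of l^2 + 4.5l + 2 in J
```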
We furthermore require that

f(λ; x)(λ − p(x)) > 0 for every λ ∈ J and x ∈ D with λ ≠ p(x), (4)

which is a natural generalization of the requirement that B is positive definite for a linear pencil (A, B).
Under these assumptions a variational characterization in terms of the Rayleigh functional has been considered by various authors. To mention a few, Duffin [9,10] and Rogers [24] proved the variational principle for the finite dimensional overdamped problems, i.e. problems for which the Rayleigh functional p is defined on the entire space C n \{0}. Nonoverdamped problems were considered by Werner and the second author [33,35].
The key to the variational principle is an adequate enumeration of the eigenvalues. In general, the natural enumeration (i.e. the first eigenvalue is the smallest one, followed by the second smallest one, etc.) is not appropriate (see [33,35]). Instead, the number of an eigenvalue λ of the nonlinear problem (1) is inherited from the number of the eigenvalue 0 of the matrix T(λ), based on the following consideration:

Definition 3 (Minmax induced numbering of eigenvalues) Let λ ∈ J be an eigenvalue of the nonlinear problem (1); then μ = 0 is an eigenvalue of the linear problem T(λ)x = μx. Therefore there exists k ∈ N such that

0 = max_{W ∈ S_k} min_{w ∈ W^1} w^*T(λ)w,

or, equivalently, 0 is the kth largest eigenvalue of the matrix T(λ). In this case we call λ a kth eigenvalue of (1).

Remark 1
For T(λ) := λB − A, B > 0, it follows from the minmax characterization for linear eigenvalue problems that λ is a kth eigenvalue of T(·) if and only if λ is a kth smallest eigenvalue of the linear problem Ax = λBx. Moreover, replacing the linear problem T(λ)x = μx in Definition 3 by T(λ)x = μBx with a fixed positive definite matrix B yields the same enumeration. This will be used later in Theorem 2.
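For a linear pencil this equivalence can be checked with dense solvers: if λ_k is the kth smallest eigenvalue of Ax = λBx with B positive definite, then 0 must be the kth largest eigenvalue of T(λ_k) = λ_kB − A. A sketch with random illustrative data (sizes and seed arbitrary):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n, k = 6, 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)   # B > 0

lam = eigh(A, B, eigvals_only=True)     # generalized eigenvalues, ascending
lam_k = lam[k - 1]                      # k-th smallest eigenvalue of Ax = lam*Bx

# T(lam_k) = lam_k*B - A has 0 as its k-th largest eigenvalue:
mu = np.sort(np.linalg.eigvalsh(lam_k * B - A))[::-1]   # descending
assert np.isclose(mu[k - 1], 0.0, atol=1e-8)
```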
With this enumeration the following minmax characterization of the eigenvalues of the nonlinear eigenproblem (1) was proved in [33,35]:

Theorem 2 (Minmax characterization for eigenvalues of T(·)) For every x ∈ D ⊂ C^n, x ≠ 0, assume that the real Eq. (3) has at most one solution p(x) ∈ J, and let the definiteness condition (4) be satisfied.
Then the following assertions hold:
(i) For every k ∈ N there is at most one kth eigenvalue of problem (1), which can be characterized by

λ_k = min_{W ∈ S_k, W ∩ D ≠ ∅} sup_{w ∈ W ∩ D} p(w). (5)

Hence, there are at most n eigenvalues of (1) in J.
(ii) If

λ_k := min_{W ∈ S_k, W ∩ D ≠ ∅} sup_{w ∈ W ∩ D} p(w) ∈ J

for some k ∈ N, then λ_k is a kth eigenvalue of T(·) and (5) holds.
(iii) Assume that for k < m the interval J contains the kth and the mth eigenvalues λ_k and λ_m. Then J contains all the eigenvalues λ_j ∈ J, j = k, . . . , m, and moreover λ_k ≤ λ_{k+1} ≤ · · · ≤ λ_m.
(iv) If λ ∈ J and k ∈ N are such that problem (1) has a kth eigenvalue λ_k ∈ J, then

λ > λ_k ⟺ μ_k(λ) > 0, λ = λ_k ⟺ μ_k(λ) = 0, λ < λ_k ⟺ μ_k(λ) < 0,

where μ_k(λ) denotes the kth largest eigenvalue of the matrix T(λ).

Iterative projection methods for nonlinear eigenproblems
For sparse linear eigenvalue problems, Ax = λx, iterative projection methods are well established and recognized as a very efficient tool. The key idea is to reduce the dimension of the eigenproblem by projecting it onto a subspace of much smaller dimension. The reduced problem is then handled by a fast technique for dense problems. Of course, this idea can only be successful if the subspace used for projection has good approximating properties w.r.t. some of the wanted eigenpairs, which translates to eigenvalues of the projected matrix being good approximations to the wanted eigenvalues of the large sparse matrix. In iterative projection methods the search subspace is expanded iteratively in a way promoting the approximation of the wanted eigenpairs. Generalizations of iterative projection methods to nonlinear eigenvalue problems were discussed in [1,4,7,11,15,17,19-21,25,29,31,32,34]. Two representative examples are the Nonlinear Arnoldi and Jacobi-Davidson methods. Both methods extend the search subspace targeting a particular eigenvalue. In fact, there are no Krylov subspace methods (i.e. methods which, as in the linear case, would admit a polynomial representation of the search subspace) working directly on the nonlinear eigenvalue problem without linearization. When applying iterative projection methods to general nonlinear eigenvalue problems with the objective of approximating more than one eigenpair, it is crucial to prevent the methods from converging to the same eigenpair repeatedly. In the linear case this is readily handled by Krylov subspace solvers or by using a partial Schur decomposition [12]. Unfortunately, a similar normal form does not exist for nonlinear eigenvalue problems. While this paper was in review, we became aware of a new approach to avoiding repeated eigenpair convergence for general nonsymmetric eigenproblems based on minimal invariant pairs [11].
For nonlinear eigenvalue problems admitting a minmax characterization, in [4,29] it was proposed to use the induced eigenvalue ordering to remedy the problem. Algorithm 1 outlines a framework for iterative projection methods based on enumeration of the eigenvalues as discussed in Sect. 2.
There are many details that have to be considered when implementing an iterative projection method as outlined in Algorithm 1. A comprehensive review is beyond the scope of this work; here we restrict ourselves to the essentials necessary for the motivation and derivation of the local restart technique in Sect. 4. For a more detailed discussion we refer the reader to [29,31,32].

Initialization

In order to preserve the numbering of the eigenvalues, the initial basis V has to contain at least j_min linearly independent vectors. Let W be the invariant subspace of T(λ_{j_min}) corresponding to its j_min largest eigenvalues. Then z^*T(λ_{j_min})z ≥ 0 for every z ∈ W, and therefore, by Theorem 2, p(z) ≤ λ_{j_min} for every z ∈ W ∩ D and sup_{z ∈ W ∩ D} p(z) = λ_{j_min}. Hence, λ_{j_min} is a j_min-th eigenvalue of the orthogonal projection of T(·) onto W, and a reasonable choice for the initial space V is the corresponding invariant subspace of T(λ̃) for some λ̃ close to λ_{j_min}. Likewise, if T(·) is overdamped, then z^*T(λ_{j_min})z ≥ 0 for every z ∈ span{x_1, . . . , x_{j_min}}, where x_j denotes the eigenvector of T(·) corresponding to λ_j, and the subspace spanned by x_j, j = 1, . . . , j_min − 1, together with an approximation to x_{j_min}, is also a reasonable choice for the initial search space.

Algorithm 1 Iterative Projection Method
Require:
1: first wanted eigenvalue number j := j_min
2: initial basis V, V^*V = I
3: initial shift σ close to the first wanted eigenvalue λ_{j_min}
4: initial preconditioner K ≈ T(σ)^{-1}
Execute:
5: while j ≤ j_max do
6: compute the eigenpair (λ, y), where λ is the jth eigenvalue of the projected problem T_V(λ)y := V^*T(λ)Vy = 0
7: compute the Ritz pair (λ, x := Vy) and its residual r = T(λ)x
8: if (λ, x) converged then
9: return approximate eigenpair (λ_j, x_j) := (λ, x)
10: j := j + 1
11: optionally choose a new shift σ and recompute K ≈ T(σ)^{-1}, if indicated
12: optionally restart
13: choose an approximation (λ, x) to the next eigenpair, and compute r = T(λ)x
14: end if
15: compute a new direction v, e.g., v = Kr
16: orthonormalize and expand V
17: end while

Solution of a projected nonlinear eigenvalue problem (PNEP)
For the nonlinear eigenvalue problem (1), let the columns of V ∈ C^{n×k} form a basis of the current search space V ⊂ C^n. Then it is easily seen that the projected problem inherits the variational property, i.e. its eigenvalues in J are minmax values of the restriction of the Rayleigh functional p of T(·) to D ∩ V. However, in general the enumeration of the eigenvalues of the original problem and of the projected problem may differ.
There are many methods for solving small and dense nonlinear eigenvalue problems. For polynomial eigenvalue problems linearization is a natural choice, and the enumeration of the eigenvalues in the sense of Sect. 2 can be deduced from the natural ordering of the real eigenvalues of the linearized problem. For general nonlinear eigenvalue problems the safeguarded iteration [23], outlined in Algorithm 2, can be used to compute the kth eigenvalue of the nonlinear problem.
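The safeguarded iteration can be sketched as follows (our reading of [23]: take the eigenvector of T(σ_i) belonging to its kth largest eigenvalue and update σ with the Rayleigh functional). The diagonal quadratic family below is a hypothetical, easily checkable test problem whose eigenvalues in J, in minmax numbering, are −1, −0.5 and −0.25:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical overdamped quadratic family T(l) = l^2*I + l*C + K, diagonal so
# that the exact eigenvalues in J = (-2, 0) are -1, -0.5 and -0.25.
n = 3
C = np.diag([5.0, 4.5, 4.25])
K = np.diag([4.0, 2.0, 1.0])
J = (-2.0, 0.0)

def T(l):
    return l**2 * np.eye(n) + l * C + K

def p(x):
    """Rayleigh functional: root of x^* T(l) x = 0 in J (brentq assumes a sign change)."""
    return brentq(lambda l: x @ T(l) @ x, *J)

def safeguarded_iteration(k, sigma=-1.0, tol=1e-12, maxit=50):
    """Compute the k-th eigenvalue (minmax numbering) of T(.) in J."""
    for _ in range(maxit):
        mu, Q = np.linalg.eigh(T(sigma))   # ascending eigenvalues of T(sigma)
        x = Q[:, -k]                       # eigenvector of the k-th LARGEST eigenvalue
        sigma_new = p(x)                   # Rayleigh functional update
        if abs(sigma_new - sigma) < tol * max(1.0, abs(sigma)):
            return sigma_new, x
        sigma = sigma_new
    return sigma, x

lam2, x2 = safeguarded_iteration(2)
assert np.isclose(lam2, -0.5)              # 2nd eigenvalue of the test family
```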

Subspace expansion
In general, two approaches to subspace expansion can be found in the literature: the Jacobi-Davidson [4] and the Nonlinear Arnoldi [31] type expansion. Both schemes approximate inverse iteration, which is known to provide a direction with high approximating potential for the targeted eigenpair (cubic convergence for Hermitian nonlinear eigenproblems if the eigenvalue approximation is updated with the Rayleigh functional).
Let (λ_k, x_k) be the currently available approximation to the eigenpair and r_k = T(λ_k)x_k its residual. In the Jacobi-Davidson method the search subspace is expanded by an orthogonal direction t ⊥ x_k obtained from the correction equation

(I − (T′(λ_k)x_k x_k^*)/(x_k^* T′(λ_k)x_k)) T(λ_k) (I − (x_k x_k^*)/(x_k^* x_k)) t = −r_k, t ⊥ x_k. (7)

If (7) is solved exactly we can expect asymptotically cubic convergence. The convergence rates of inexact Newton and Newton-like methods were studied in [27], and it is a common experience that even a very coarse solution of (7) is sufficient to maintain reasonably fast convergence. The Nonlinear Arnoldi method uses the direction of residual inverse iteration,

v = T(σ)^{-1} T(λ_k)x_k,

where σ is a fixed parameter close to the wanted eigenvalue λ_k. The Nonlinear Arnoldi method converges linearly, i.e. if x_k^{i−1} and x_k^i are two consecutive iterates, the error is reduced by a convergence factor τ depending on the distance of σ to the wanted eigenvalue [22]. For Hermitian problems, if the eigenvalue approximations are updated with the value of the Rayleigh functional and σ is updated with the previous approximation to λ_k, σ = λ_k^{i−1}, the convergence is even quadratic [26]. Moreover, if the linear system T(σ)v = T(λ_k)x_k is too expensive to solve for v, we may choose as a new direction v = K T(λ_k)x_k with K ≈ T(σ)^{-1}.
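The residual inverse iteration underlying the Nonlinear Arnoldi expansion can be illustrated on the same kind of hypothetical diagonal quadratic problem: with a fixed shift σ close to the wanted eigenvalue, the iterates converge linearly to the nearby eigenpair. The explicit inverse below stands in for the linear solve (or a preconditioner K) used in practice:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical overdamped quadratic family (eigenvalues in J: -1, -0.5, -0.25):
C = np.diag([5.0, 4.5, 4.25])
K = np.diag([4.0, 2.0, 1.0])
J = (-2.0, 0.0)
T = lambda l: l**2 * np.eye(3) + l * C + K
p = lambda x: brentq(lambda l: x @ T(l) @ x, *J)   # Rayleigh functional

sigma = -0.45                       # fixed shift close to the wanted eigenvalue -0.5
Tsig_inv = np.linalg.inv(T(sigma))  # illustration only; in practice a (preconditioned) solve

x = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
for i in range(25):
    lam = p(x)                       # eigenvalue approximation via the Rayleigh functional
    x = x - Tsig_inv @ (T(lam) @ x)  # residual inverse iteration step
    x /= np.linalg.norm(x)

# The iterates converge to the eigenpair closest to sigma:
assert abs(p(x) + 0.5) < 1e-10
```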

Standard restarting based on global numbering
As the subspace expands in the course of the algorithm, the increasing storage and computational cost of the solution of the projected eigenvalue problem may make it necessary to restart the algorithm and purge some of the basis vectors. To be able to continue determining subsequent eigenpairs the correct enumeration has to be enforced at the restart.
If J contains the first eigenvalue λ 1 = min x∈D p(x), then e.g. the safeguarded iteration for the projected nonlinear problem (6) converges globally, i.e. for any initial vector x ∈ V ∩ D, to the smallest eigenvalue of (6) [23]. Furthermore, if the eigenvectors x j of the original problem (1) corresponding to the eigenvalues λ j , j = 1, . . . , k, are contained in V, then λ j is a jth eigenvalue of the projected problem (6), as well. Hence, expanding the search space V iteratively, and determining the (k + 1)st eigenvalue of the projected problems, one gets a sequence of upper bounds to λ k+1 which (hopefully) converges to λ k+1 . Thus, the eigenvalues of (1) can be determined quite safely one after the other by the iterative projection method starting with an approximation to x 1 .
If inf x∈D p(x) / ∈ J we can modify this approach in the following way. Let k be the smallest integer such that λ k ∈ J where k is chosen according to Definition 3. The minimum in (5) is attained by the invariant subspace W of T (λ k ) spanned by the eigenvectors corresponding to its k largest eigenvalues. Hence, if the current search space V satisfies W ⊂ V then it is easily seen that λ k is the kth eigenvalue of the projected problem (6), i.e. again the numbering of the eigenvalues in the projected and in the original problem coincide, thus the eigenvalues can be determined successively.
In either case, for the numbering to be preserved, the search subspace after restart has to contain the eigenvectors corresponding to all the preceding eigenvalues in J and if inf x∈D p(x) / ∈ J also appropriate initial vectors, hence the restart requires global information. Notice that we only restart if an eigenvector has just converged since a restart destroys information on the eigenvectors and in particular on the currently iterated one.

Convergence criterion
In the course of our algorithm we accept an approximate eigenpair (λ_k, x_k) as converged if the residual norm ‖T(λ_k)x_k‖_2 / ‖x_k‖_2 is small enough. For a linear eigenvalue problem this is just the backward error of the eigenpair. For nonlinear holomorphic eigenvalue problems Szyld and Xue [28] performed a perturbation analysis of simple invariant pairs and derived an error estimate for their approximation. For algebraically simple eigenvalues (i.e. λ_k is a simple root of det T(λ) = 0) their result essentially states that a small residual norm indicates a small backward error, as long as the Jacobian of the augmented system (the system T(λ)x = 0 together with the normalization condition c^*x = 1) is not ill-conditioned at the desired eigenpair (λ_k, x_k). Here c ∈ C^n denotes a fixed vector with c^*x_k ≠ 0.

A local restart technique
To overcome the growth of the search subspace dimension with the number of the sought eigenvalue, which is inherent to global restarts (see Sect. 3.4), we propose to replace the global numbering of the eigenvalues by a local one. As the local numbering is obtained w.r.t. some chosen eigenvalue, only the corresponding eigenvector has to be included in the search subspace after a restart, rather than the entire set of preceding eigenvectors or the invariant subspace of T(λ_k).

Local numbering of eigenvalues
Assume that we are given an eigenpair (λ̂, x̂), λ̂ ∈ J and x̂ ∈ C^n, of the nonlinear eigenproblem (1). We refer to such an eigenpair as an anchor. In the following, to avoid unnecessary technicalities, we assume that λ̂ is a simple eigenvalue, but all the results can be generalized to allow λ̂ to be a multiple eigenvalue. Let V be a subspace of C^n that contains x̂, and let the columns of V form a basis of V. Then, along with the original family of matrices T(·), its projection T_V(·) := V^*T(·)V satisfies the conditions of Theorem 2. Therefore the Ritz values of T(·) with respect to V, i.e. the eigenvalues of the projected eigenproblem (6), can be enumerated according to Definition 3. In particular, since x̂ ∈ V, λ̂ is also an eigenvalue of the projected problem (6), and λ̂ can be assigned a local number ℓ = ℓ(V): the number of λ̂ as an eigenvalue of T_V(·) in the sense of Definition 3. The remaining eigenvalues of T_V(·) (i.e. the Ritz values of T(·) with respect to V) are given numbers relative to the anchor number ℓ(V). We call such a relative numbering local.

Example 1 Let V := span{x_1, x_3, x_7, x_8, x_10}, where x_i is an eigenvector of (1) corresponding to the ith eigenvalue λ_i. Then the projected problem T_V(λ)y = 0 has exactly the eigenvalues λ_i, i = 1, 3, 7, 8, 10, in J. For the anchor x̂ := x_7 it holds that ℓ = 3, and the local numbers of the subsequent eigenvalues λ_8 and λ_10 are 4 and 5, respectively.

Remark 3
Numerically, the local number ℓ of the anchor λ̂ can be determined as the number of the eigenvalue of the linear problem T_V(λ̂)y = μ(λ̂)y with the smallest absolute value: if μ_1 ≥ μ_2 ≥ · · · are its eigenvalues, then ℓ = argmin_i |μ_i|.
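In code, with the projected family available as a small dense matrix function, the local anchor number is the 1-based index of the eigenvalue of T_V(λ̂) closest to zero in the descending ordering. A sketch on a hypothetical diagonal projected family whose eigenvalues are −2, −1, 0.5 and 1.5, so that the anchor −1 has local number 2:

```python
import numpy as np

# Hypothetical projected family (diagonal and linear purely for illustration):
def T_V(l):
    return np.diag([l + 2.0, l + 1.0, l - 0.5, l - 1.5])

lam_anchor = -1.0                                          # anchor: T_V(lam_anchor) is singular
mu = np.sort(np.linalg.eigvalsh(T_V(lam_anchor)))[::-1]    # descending mu_1 >= mu_2 >= ...
ell = int(np.argmin(np.abs(mu))) + 1                       # index of the mu closest to 0
assert ell == 2
```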

Spurious eigenvalues
In Example 1, the search subspace V has been chosen to contain eigenvectors only, and therefore successive eigenvalues of T(·) with eigenvectors in V have consecutive local numbers. However, in the course of the iteration the search subspace will also contain additional vectors besides the eigenvectors, which can adversely affect the local numbering. Only for the sake of the following argument, let us assume that the nonlinear eigenvalue problem (1) is overdamped, so that the Rayleigh functional p is defined on C^n\{0}. Hence the eigenvectors X := {x_1, . . . , x_n} of T(·) corresponding to the n eigenvalues arranged in ascending order, λ_1 ≤ λ_2 ≤ · · · ≤ λ_n, form a basis of C^n. Let the current search subspace be V_k with anchor pair (λ̂, x̂), x̂ ∈ V_k, and assume that since the last restart the method has already computed the j eigenvalues following the anchor, the last of them being λ_{ℓ+j}. When V_k is expanded to V_{k+1}, an additional Ritz value θ may appear between the anchor and λ_{ℓ+j}. With an anchor in the interior of the spectrum it is possible for the corresponding additional Ritz vector, x_θ := V_{k+1} y_θ, that its representation with respect to the eigenbasis X of T(·), x_θ = Σ_i α_i x_i, contains components α_i x_i such that some of the corresponding eigenvalues λ_i are smaller than the anchor eigenvalue and others are larger than λ_{ℓ+j} (or larger or equal if λ_{ℓ+j} is a multiple eigenvalue of T(·)). We call such a Ritz value θ a spurious eigenvalue of T(·). The presence of a spurious eigenvalue obviously increases the local numbers of all subsequent eigenvalues.
Remark 4 (The case θ = λ̂) Note that even if θ = λ̂ (up to the precision to which the eigenvalues are computed), x_θ ≠ x̂. Hence we can identify such a spurious pair (θ, x_θ) (recall that we assumed the anchor λ̂ to be simple) and enforce the ordering in which θ precedes λ̂, so that it does not interfere with the local ordering. Therefore it is sufficient to consider the case λ̂ < θ.
Our argument took advantage of the existence of an eigenbasis of C n , which is a consequence of assuming that the nonlinear eigenvalue problem (1) is overdamped. It is clear, that the same can happen for nonoverdamped problems. The additional complication for nonoverdamped problems is that the linear combination can also contain vectors not in the domain of definition of the Rayleigh functional p, which can have the same effect.
Occurrence of spurious eigenvalues is inherent to interior eigenvalue computation. It also happens for linear problems, when no transformation is applied to the eigenproblem to map the eigenvalues from the interior to the lower or upper end of the spectrum. Hence, in order to algorithmically exploit the local numbering we need to find a way to recognize when the local numbering has been obscured by spurious eigenvalues and how to effectively restore it.
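The phenomenon is easiest to see for a linear problem. In the sketch below (hypothetical data: A = diag(1, . . . , 10)) the search subspace contains one exact interior eigenvector together with a mixture of a low and a high eigenvector; the mixture produces a Ritz value at 5.5 that lies in the interior without approximating any eigenpair, i.e. a spurious value in the sense above:

```python
import numpy as np

A = np.diag(np.arange(1.0, 11.0))          # eigenvalues 1, 2, ..., 10

# Subspace: the exact interior eigenvector e_5 plus a low/high mixture.
v_mix = np.zeros(10)
v_mix[0] = v_mix[9] = 1.0 / np.sqrt(2.0)   # equal parts of e_1 and e_10
V = np.column_stack([np.eye(10)[:, 4], v_mix])
theta = np.sort(np.linalg.eigvalsh(V.T @ A @ V))

# The mixture yields the Ritz value (1 + 10)/2 = 5.5: an interior value with a
# large residual that nevertheless inflates the local numbering around 5.
assert np.isclose(theta[0], 5.0)           # the exact interior eigenvalue is retained
assert np.isclose(theta[1], 5.5)           # the spurious Ritz value
```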

Local restart framework
Algorithm 3 outlines one local restart cycle, which we explain in detail below.

Algorithm 3 Restart Framework
Require: anchor pair (λ̂, x̂), approximation v to the next eigenvector
1-5: initialize the search space V_0 := span{x̂, v}; determine the local number of the anchor, ℓ(V)
7-12: repeat: compute the Ritz pair with local number ℓ + j; compute a new expansion direction v aiming at the (ℓ+j)th eigenpair (λ_{ℓ+j}, x_{ℓ+j}); until the Ritz pair (λ_{ℓ+j}, Vy_{ℓ+j}) has converged
13: check whether a new eigenpair has been computed
16-20: for each suspect eigenvalue: locate the suspect eigenvalue θ_m and its Ritz vector x_θ^m; recover a missed out eigenpair (θ_m, x_θ^m) and adjust the numbering, or remove the spurious value; increase the local offset j := j + 1

Lines 1-5: Initialization Here, the only difference between the local and the global restarts is the initialization subspace V_0 := span{x̂, v}. For local restarts V_0 contains only the eigenvector corresponding to the anchor, x̂, along with v ∈ C^n, an approximation to the next eigenvector such that T_{V_0} has an eigenvalue ω ∈ J with ω > λ̂. Starting with V_0, we determine approximations to the eigenvalue subsequent to the anchor λ̂ by projecting the nonlinear eigenvalue problem (1) onto a sequence of expanding search spaces.

Lines 7-12: Computation of the (ℓ+j)th eigenpair (λ_{ℓ+j}, x_{ℓ+j}) Let V_k be the current search space and ℓ(V_k) the local number of λ̂. Note that the number ℓ(V_k) of the anchor may change in the course of the iteration, hence its dependence on V_k. Suppose that we have successively computed j eigenvalues of T(·) in J, and let X̂ be the q-dimensional, 1 ≤ q ≤ j, eigenspace of T(·) corresponding to λ̂. We are now aiming at the next eigenvalue of T(·). To this end, we compute the eigenpair (ω, y_ω) of the projected problem with local number ℓ(V_k) + j. We expand the search space V_k to V_{k+1} by a new search direction v aiming at the Ritz pair (ω, V_k y_ω), e.g., v = K T(ω)V_k y_ω for the Nonlinear Arnoldi method (we hope that, by using such a strategy, as the iteration progresses the search subspace will contain an increasingly significant component of the eigenvector x_{ℓ+j}).
We then solve the new projected eigenproblem T V k+1 (·) for the Ritz pair with the desired local number (V k+1 ) + j and we repeat this iterative process until the Ritz pair with the desired local number has converged (yields a sufficiently small eigenresidual norm, see Sect. 3.5).

Remark 5
If v ∈ C^n in the initial subspace V_0 is a poor approximation to x_{ℓ+1}, T_{V_0} may have no eigenvalue ω ∈ J with ω > λ̂, i.e. ℓ(V_0) = dim(V_0); in this case we return the eigenpair with the largest local number in the subspace (here the anchor itself, (λ̂, x̂)). As (λ̂, x̂) is accurate only up to the computational tolerance, its residual T(λ̂)x̂, similarly to a random vector, is expected to be a linear combination of many eigenvectors. The subspace expansion step, e.g. v = K T(λ̂)x̂, then amplifies those eigendirections corresponding to the eigenvalues close to the pole σ (K ≈ T(σ)^{-1}), and within a few steps we expect the projected problem T_V to have an eigenvalue ω > λ̂.
Line 13: Check whether a new eigenpair has been computed Ideally, the converged Ritz pair (ω, x_ω) is a new eigenpair of the original problem (1). However, it may happen that the algorithm returns a replicate of an eigenvalue with local number smaller than ℓ + j, or a new eigenvalue λ̃ with λ̂ < λ̃ < λ_{ℓ+j−1} (an eigenvalue which has previously been missed out).
From the discussion in Sect. 4.2 we infer that such behaviour occurs because the local numbering has been altered, i.e. one or more additional eigenvalues of the projected problem exist in (λ̂, λ_{ℓ+j−1}], correspondingly raising the local numbers of the eigenvalues above them. Henceforth we will refer to such eigenvalues as "suspect". All suspect eigenvalues can be identified; the missed out eigenvalues among them can be accepted, while the spurious eigenvalues can be treated one after the other in the way described below.

Lemma 1
If the angle between the eigenspace X̂, dim X̂ = q, and x_ω is different from 0 (in numerical practice, larger than a prescribed small threshold), or if λ̂ is the (ℓ(V_⊥) + j − q)th eigenvalue of the projected problem

T_{V_⊥}(λ)y = 0, (8)

where V is the current search space, V_⊥ denotes a basis of V^⊥, the orthogonal complement of X̂ in V, and ℓ(V_⊥) the local number of λ̂, then λ̂ is a multiple eigenvalue (with multiplicity at least q + 1).

Notice that the number of columns of V_⊥, dim(V) − q, is usually quite small, and therefore it can easily be verified with safeguarded iteration whether λ̂ is a (ℓ(V_⊥) + j − q)th eigenvalue of the projected eigenproblem (8) or not.
In the second case, there are two possible reasons for the current projected problem having an additional eigenvalue θ ∈ (λ̂, λ̆], λ̆ := λ_{ℓ+j−1}, such that the corresponding Ritz pair (θ, x_θ) ≠ (λ_{ℓ+j′}, x_{ℓ+j′}), j′ = 1, . . . , j − 1:
1. Missed out eigenvalue An eigenvalue of the original problem (1) in the interval (λ̂, λ̆] might have been previously missed out because the corresponding eigenvector x_θ was not sufficiently present in the initial search space V_0 and might not have been amplified sufficiently in the course of the expansions of V until λ̆ was computed. Afterwards the component of x_θ in the search space V has grown large enough to produce the additional eigenvalue θ ∈ (λ̂, λ̆], and Algorithm 3 yields the eigenvalue λ̂ for the (q + 1)st time with a different local number.
2. Spurious eigenvalue It might be the case that no eigenvalue of (1) is missing in (λ̂, λ̆], but the newly produced eigenvalue θ of the projected problem (6) is a spurious one, i.e. the corresponding Ritz vector x_θ is a linear combination of eigenvectors of (1) corresponding to eigenvalues less than λ̂ and eigenvalues greater than λ̆ (or greater or equal if λ̆ has a higher geometric multiplicity than computed so far), and possibly of some vectors outside of the domain of definition of the Rayleigh functional if the problem is not overdamped.
In both cases we identify the additional eigenvalue θ and its local number ℓ + j_θ, and we expand the search space aiming at (θ, x_θ) (in other words, we add a new search direction v, which is either K T(θ)x_θ for the Nonlinear Arnoldi method, or the approximate solution of the Jacobi-Davidson correction Eq. (7) with right-hand side −T(θ)x_θ). Then for the projected problem on the extended subspace V_θ,

T_{V_θ}(λ)y = 0, (9)

either of the following holds:
• Problem (9) has exactly j − 1 eigenvalues in (λ̂, λ̆], i.e. the additional eigenvalue has left the interval of interest and the numbering in [λ̂, λ̆] has been restored.
• There are still j eigenvalues in (λ̂, λ̆]. In this case we repeat the expansion of the subspace until the additional eigenvalue has been moved out of the interval [λ̂, λ̆], or until the sequence of additional Ritz values has converged to a previously missed out regular eigenvalue, in which case we adjust the enumeration of the eigenvalues and increase j by 1.
After the enumeration has been restored, we continue with the iterative projection method targeting the eigenvalue with the local number ℓ + j.

Remark 6
In particular, if more than one additional eigenvalue exists in (λ̂, λ̆], Algorithm 3 will first identify and recover all missed out eigenvalues. Then the first of the found spurious eigenvalues (i.e. the one with the smallest local number) will be targeted.
Lines 31-32: Targeting the next eigenvalue After convergence of an eigenvalue, we may continue the iterative projection method aiming at the (ℓ(V_k) + j + 1)st eigenvalue, or we may replace the anchor with the newly converged eigenpair and target the eigenvalues subsequent to the new anchor. Since the current search space contains useful information about further eigenvalues, it is advisable to continue expanding the search space until the convergence becomes too slow (notice that for residual inverse iteration the convergence factor τ depends on the distance between the shift and the wanted eigenvalue) or the dimension exceeds a given bound.

Automated local restart
For certain problems the cost of setting up a restart, i.e. the time for computing the preconditioner, generating the new search space and the projected problem, is relatively high in comparison to the remaining computations. We can further improve the performance by allowing the algorithm to balance those time-consuming tasks automatically.
Let t_r denote the time for the setup of a restart, and let t_e^j be the time needed for computing the (ℓ+j)th eigenvalue of problem (1), i.e. j denotes the offset of the eigenvalue with respect to the anchor after a restart. Then the total time for computing the first j eigenvalues after the restart is t_t^j = t_r + Σ_{k=1}^j t_e^k, and hence the running average time for computing one eigenvalue since the last restart is t̄_e^j = t_t^j / j. Notice that as we compute more and more eigenvalues, the setup time per eigenvalue decreases.
Let α ≥ 1 and N_α ∈ N_0 be parameters depending on the given problem, and initialize n_α := N_α. After computing the jth eigenpair since a restart we adjust n_α in the following way: if t_e^j > α t̄_e^j we decrease n_α := n_α − 1, otherwise we reset n_α := N_α. Whenever n_α < 0 we restart the method. The presented strategy compares the time required for convergence of an eigenvalue with the running average time and triggers a restart when the eigenvalue convergence is repeatedly slower by a factor α than the average. In particular, if N_α = 0 and α = 1, we restart the algorithm straightaway when the time for convergence to an eigenvalue exceeds the average time for computing the eigenvalues since the last restart.
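The counter logic can be sketched as follows; the parameter names α, N_α and the running average follow the text, while the rule of resetting n_α to N_α after a sufficiently fast eigenvalue is our reading of the update and should be treated as an assumption:

```python
def make_restart_monitor(alpha=1.5, N_alpha=2):
    """Return a callable that records per-eigenvalue times and signals restarts."""
    state = {"n": N_alpha, "total": 0.0, "j": 0}

    def record(t_e):
        """Record the time t_e for one eigenvalue; return True if a restart is due."""
        state["j"] += 1
        state["total"] += t_e
        avg = state["total"] / state["j"]     # running average since the last restart
        # Decrement the counter on a slow eigenvalue, reset it otherwise (assumption):
        state["n"] = state["n"] - 1 if t_e > alpha * avg else N_alpha
        return state["n"] < 0

    return record

# With N_alpha = 0 and alpha = 1 a restart is triggered as soon as one
# eigenvalue takes longer than the running average:
record = make_restart_monitor(alpha=1.0, N_alpha=0)
assert record(1.0) is False    # equal to the average: no restart
assert record(3.0) is True     # slower than the average: restart
```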

Framework for restarting nonlinear iterative projection methods
Integrating the local restart with the iterative projection methods, we arrive at the framework for restarting nonlinear iterative projection methods summarized in Algorithm 4. An initial anchor pair can be determined for instance with an iterative projection method expanding the search space by K T(σ̃)x_k, where σ̃ and σ are fixed shifts close to the wanted eigenvalues, K ≈ T(σ)^{-1} is a preconditioner, and x_k are the iterates of the projection method aiming at the eigenvalue closest to σ̃. Alternatively, we could use a direction suggested by the Jacobi-Davidson method for the linear eigenproblem T(σ̃)x = λT′(σ̃)x aiming at its smallest eigenvalue in modulus. Obviously, no anchor eigenpair is required if inf_{x∈D} p(x) ∈ J and one is looking for eigenvalues at the lower end of the spectrum, as the natural enumeration can be used in the first interval. After a restart one of the just computed eigenpairs can serve as an anchor.
More generally, for nonlinear eigenvalue problems where the minmax induced ordering and the natural (here ascending) ordering coincide on [a, b] ⊂ J (e.g. a minmax admitting quadratic eigenvalue problem), it is also possible to use one of the bounds of the interval of interest [a, b] and enumerate the eigenvalues relative to this bound instead of relative to an anchor. In such a case, we compute the eigenvalues of the projected nonlinear problem T_V(·) larger than or equal to a until the first restart, when the anchor is reset to an already computed eigenpair. Corollary 1 is a direct consequence of Theorem 2 and shows how to locate the first eigenvalue of the projected nonlinear problem T_V(·) larger than or equal to a.
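Operationally, once the projected problem has been solved, enumerating relative to the bound a amounts to locating the first projected eigenvalue larger than or equal to a (the rigorous justification being Corollary 1). A minimal sketch, with a hypothetical helper name:

```python
import numpy as np

def first_local_number_geq(a, projected_eigs):
    """Return the local number (1-based) of the first eigenvalue of the
    projected nonlinear problem that is >= a, used for the enumeration
    relative to the interval bound a until the first restart.
    Illustrative sketch only, not QARPACK code."""
    eigs = np.sort(np.asarray(projected_eigs))
    idx = np.searchsorted(eigs, a, side="left")
    if idx == len(eigs):
        raise ValueError("no projected eigenvalue >= a in the search space")
    return idx + 1
```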

Algorithm 4 (excerpt):
32: Compute new expansion direction v aiming at the (ℓ + j)th eigenpair (λ_{ℓ+j}, x_{ℓ+j})
...
Global anchor number becomes i := i + j − 1 − n_locked
56: Reset the anchor to a recently computed eigenvalue, λ̃ := λ_i
57: anchor_exists := true
58: Reset local offset j := n_locked + 1
59: Choose new shift σ close to the approximation λ̃_{i+j} of the next eigenvalue
60: Recompute preconditioner K ≈ T(σ)^{-1}
61: Reset search subspace V := orthonormalize([x_i, . . . , x_{i+n_locked}, x̃_{i+j}]), where x̃_{i+j} is an approximation to the (i + j)th eigenvector
62: Compute new expansion direction v aiming at the (ℓ + j)th eigenpair (λ_{ℓ+j}, x_{ℓ+j})
63: end while {λ_{i+j} < b}

Exactly as before we can target the eigenvalue with the local number m and, after it has converged, the eigenvalue with the local number m + 1, etc. Theoretically, after the eigenvalue with the mth local number has converged it could be used as an anchor straight away. However, there is a danger of accepting λ_m^V as an anchor (hence as the first eigenvalue in [a, b]) prematurely, i.e. when λ_m^V is not the first eigenvalue in [a, b] of the original problem (1) because eigenvalues in [a, λ_m^V) have been missed. In this case the algorithm would continue to compute only the eigenvalues larger than or equal to λ_m^V, permanently missing the eigenvalues in [a, λ_m^V). This is less likely to happen if the enumeration w.r.t. the bound a is used until the first restart, by which time the search subspace is large and hence hopefully includes the first and further consecutive eigenvalues of (1) in [a, b].
We remark that the missed and the spurious eigenvalues have exactly the same effect regardless of whether the interval bound a or the anchor λ̃ is used for the relative enumeration of the eigenvalues, i.e. the local number of the eigenvalues following such a missed/spurious eigenvalue is raised. Hence, they can be dealt with in the same way as described in Sect. 4.3.
Obviously, a very similar strategy can be applied when the anchor pair is not available to the required precision, i.e. when its residual norm is above a set tolerance. We incorporated both of these important cases in the pseudocode in Algorithm 4.
We might want to keep n_locked eigenpairs in addition to the anchor pair at a restart, to minimize the occurrence of spurious eigenvalues. However, the benefits have to be traded off against the increased cost of the solution of the projected problems due to larger search subspace dimensions.
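The subspace reset at a restart (the anchor vector, n_locked additional eigenvectors, and an approximation to the next wanted eigenvector, jointly orthonormalized) can be sketched with NumPy as follows; the function name and interface are illustrative, not QARPACK's:

```python
import numpy as np

def restart_subspace(anchor_vec, locked_vecs, next_approx):
    """Sketch of the search-subspace reset at a local restart: keep the
    anchor eigenvector, n_locked previously converged eigenvectors and
    an approximation to the next wanted eigenvector, and orthonormalize
    them into the columns of the new basis V."""
    cols = [anchor_vec] + list(locked_vecs) + [next_approx]
    V = np.column_stack(cols)
    # orthonormalize via a reduced QR factorization
    Q, _ = np.linalg.qr(V)
    return Q
```

The projected problem V^H T(λ)V then has dimension only n_locked + 2 immediately after the restart, which is what keeps the cost of the projected solves bounded.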

Numerical experiments
In this section we demonstrate the performance of the local restarts on a range of nonlinear eigenvalue problems. All the tests were performed with the QARPACK MATLAB package [2] on a desktop machine with two quad-core Intel Xeon X5550, 2.67 GHz processors and 48 GB RAM. The LU decompositions and the subsequent system solves for small problems (small gyroscopic problem, "wiresaw1(2000)", "wiresaw2(2000)") were performed using the MATLAB built-in LU, and for large problems (large gyroscopic problem, delay problem, rational problem, "acoustic wave 1d" problem) with MATLAB's LU routine with five outputs. For all quadratic problems the projected problems were solved by linearization, while for general nonlinear problems the safeguarded iteration was used.
The results are uniformly presented in terms of elapsed CPU times. We preconditioned the Nonlinear Arnoldi method with the LU factorization of the real part of T(σ), where σ is a shift not too far away from the wanted eigenvalues. We chose to neglect the imaginary part of T(σ) since its influence on the action of the preconditioner is small in our examples, not justifying the extra effort of using complex arithmetic. We updated the LU factorization at each restart.
Table 1: values for all problems except for the large tire problem are averaged over 10 runs.

A conservative gyroscopic eigenvalue problem
We consider a conservative gyroscopic eigenvalue problem describing, for instance, the free vibrations of a rolling tire. It is well known that all its eigenvalues are real and occur in pairs ±λ, the corresponding eigenvectors are complex conjugate, and the positive eigenvalues 0 < λ_1 ≤ · · · ≤ λ_n satisfy the minmax characterization [10], where p(x) is the positive solution of the quadratic equation
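For illustration only, assume the formulation T(ω) = −ω²M + iωG + K with M, K symmetric positive definite and G real skew-symmetric; this exact form and sign convention are our assumption here, since the paper's equation is omitted above. Then x^H T(ω)x = −ω²m + ωg + k with m = x^H M x > 0, g = x^H (iG) x real and k = x^H K x > 0, and the scalar quadratic has a unique positive root, the Rayleigh functional value p(x):

```python
import numpy as np

def rayleigh_functional_gyroscopic(x, M, G, K):
    """Positive root p(x) of the scalar quadratic x^H T(omega) x = 0,
    assuming T(omega) = -omega^2 M + i*omega*G + K with M, K symmetric
    positive definite and G real skew-symmetric (this sign convention
    is an assumption of this sketch)."""
    m = np.real(x.conj() @ (M @ x))        # > 0
    g = np.real(x.conj() @ (1j * (G @ x))) # real, since (iG)^H = iG
    k = np.real(x.conj() @ (K @ x))        # > 0
    # -m*w^2 + g*w + k = 0 has exactly one positive root
    return (g + np.sqrt(g * g + 4.0 * m * k)) / (2.0 * m)
```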

Qualitative properties of the method
We start by showing some qualitative behavior of our method on a small example of a wheel, composed of solid elements with Mooney-Rivlin material, see Fig. 1. The wheel is pressed onto the track and rotates at a rate of 50 Hz. It is discretized with 450 brick elements with 720 nodes, yielding, after application of boundary conditions, 1728 degrees of freedom. For the purpose of comparison we computed all the eigenpairs in the interval [0, 16,820] with a globally restarted Nonlinear Arnoldi method. This corresponds to the smallest 200 eigenvalues. In all experiments an eigenvalue was regarded as converged if its relative residual norm was smaller than 10^−4. The preconditioner was recomputed whenever the ratio τ = ‖T(λ^{(s)})x^{(s)}‖/‖T(λ^{(s−1)})x^{(s−1)}‖ of two successive residual norms in the last two steps s − 1, s before convergence of the eigenpair (λ_k, x_k) exceeded 0.1. Note that a large τ indicates that |σ − λ_k| is too large (see Sect. 3.3). To prevent the search subspace from getting arbitrarily large we used global restarts, which were triggered whenever the search subspace dimension exceeded 230, an absolute threshold on the subspace dimension. In the global restart technique we restart the Nonlinear Arnoldi method with an orthogonal basis of the subspace spanned by all eigenvectors computed so far. The total computing time and the time spent on solving the projected nonlinear problems for the wheel problem are summarized in Table 2.
In the next experiment we used the same global restart technique, but this time the restart was triggered whenever the dimension of the subspace exceeded the number of the last converged eigenvalue by 30, a relative threshold on the subspace dimension. In this way the average dimension of the search spaces and therefore the time for solving the projected problems were reduced, see Table 2. Plotting the total computing time and the time for solving the projected nonlinear problems in Fig. 2 reveals a superlinear growth. In fact, the CPU time spent on solving the projected eigenproblems itself grows super-linearly, determining the general trend.
Next, we computed the smallest 200 eigenvalues with the Nonlinear Restarted Arnoldi method. A restart was triggered whenever the search space dimension exceeded 80 or the convergence rate τ became larger than 0.5. Only the anchor and the current approximation were kept in the search subspace at a restart (n_locked = 0). The experiment was repeated 10 times and the averaged elapsed computing times are shown in Fig. 2. The superlinear time growth has been effectively eliminated by the local restart strategy. The zoom into the lower end of the spectrum reveals that the global restart can outperform the local restart with a fixed search space dimension limit in the initial phase, when all the eigenvalues from the beginning of the spectrum are computed (Fig. 3). However, as can be seen in Figs. 4 and 5, the local restart with automatic balancing outperforms the global restart also in the initial phase.
The outstanding advantage of the local restart strategy is its ability to compute eigenvalues in an interval in the interior of the spectrum without the need to compute all the preceding eigenpairs. Using the same local restart strategy we computed all the eigenpairs in (λ_100 = 11,748, 16,820], which corresponds to λ_101, . . . , λ_200, using the eigenpair (λ_100, x_100) as the initial anchor. Again, for reproducibility of the results, we repeated the experiment 10 times. As expected, the averaged computing time has been approximately halved, from 700 s to 350 s, Fig. 6. To illustrate the typical behavior of the method in more detail, in Fig. 7 for just one run of the experiment we plotted histograms of the computed eigenvalues and of the occurrences of the spurious eigenvalues in each of the intervals between the consecutive restarts. The corresponding eigenvalue convergence history throughout the first 500 iterations is depicted in Fig. 8. The dots that are not encircled pin down the occurrence of the spurious values during the iteration, e.g. in iterations 98, 159, 213 or 312 in Fig. 8 (cf. histogram in Fig. 7).
The automated restart strategy described in Sect. 4.4 can be used to let the algorithm balance the limit on the search subspace size on the fly. Using the automated restart with α = 1 and N_α = 1 further reduced the average CPU time to about 130 s, see Fig. 6. We also solved the problem with the locally restarted Jacobi-Davidson method; the NRA variant was still slightly faster in terms of the total CPU time, see Fig. 9. This is due to a JD expansion step being more expensive than an NA expansion step.
The preconditioner used here (the LU decomposition of K − σ²M) remains of reasonable quality in the spectrum of interest. The results are in line with our general experience that the Nonlinear Arnoldi method is faster whenever a good quality preconditioner is available, while the Jacobi-Davidson method is more robust with respect to poor preconditioning [34].

NLEVP "wiresaw1" gyroscopic QEP
As a second example we solve the gyroscopic problem arising from the vibration analysis of a wiresaw, "wiresaw1" from the NLEVP collection [6], of dimension 2000 and with the NLEVP default value of the wire speed parameter v = 0.01. The gyroscopic matrix G for this problem is not sparse, hence the moderate choice of the problem dimension. In formulation (10) all the eigenvalues are real and grow by an increment of approximately π from one eigenvalue to the next.
We computed all 100 eigenvalues in the interval [317, 629]. The algorithm was initialized using the lower bound of the interval rather than an anchor. The relative residual tolerance was set to 10^−4, the maximal subspace dimension to 120, the number of locked eigenvectors after a restart to n_locked = 0 and the slowest admissible convergence rate to τ = 0.5. We used automated restarts with α = 1 and N_α = 1. Figure 10 shows the CPU time and the time for the solution of the projected nonlinear problems averaged over 10 runs. The method took on average 1197 iterations with 22 restarts (23 LU decompositions) to compute the 100 eigenvalues. The average total CPU time was 497 s and the time for the solution of the nonlinear projected problems 111 s (q.v. Table 1).

Large sparse gyroscopic QEP
Our third example is a tire 205/55R16-91H, cf. [19] (see Fig. 11), provided by Continental AG. The tire is discretized with 39,204 brick elements and 42,876 nodes. The nodes at the wheel rim are fixed, resulting in 124,992 degrees of freedom. We used the Nonlinear Restarted Arnoldi method with MATLAB's five-output LU routine as a preconditioner and set the tolerance for the relative residual norm to 10^−6 and the maximal search subspace dimension to 300. After each restart, only the anchor vector and the next approximation were kept in the subspace (i.e. n_locked = 0). The preconditioner was recomputed after at most 300 iterations, subject to a residual norm ratio of at most τ = 0.9, and automatic balancing with N_α = 1 and α = 1 was used.
We computed all the eigenvalues in the interval [0, 20,000]. NRA needed 5165 iterations and 22 restarts (23 LU factorizations) to find all 388 eigenvalues in this interval. Figure 12 shows the total CPU time and the CPU time for solving the projected nonlinear eigenvalue problems. We observe a slight increase in the CPU time per eigenvalue while we compute the eigenvalues at the lower end of the spectrum, which saturates at about the 150th eigenvalue. Here, the reason is an increasing occurrence of spurious eigenvalues in proportion to the number of computed eigenvalues in the initial phase. For the eigenvalues higher in the spectrum this effect settles, resulting in an approximately constant time per eigenvalue computation. All but one restart were triggered by our automatic balancing strategy, demonstrating its effectiveness.
In this example we observed an increased occurrence of spurious values after the restarts. This leads us to believe that retaining some of the previously computed subspace after a restart may help to alleviate this effect, for instance keeping a further n_locked eigenvectors along with the anchor in the local basis. Any benefits, however, have to be traded off against the increased computational cost due to larger search space dimensions.
We believe that the key to optimal performance is to balance the subspace growth against the occurrence of spurious eigenvalues. An optimal strategy may include adapting the number of eigenvectors kept in the local basis along with the anchor, n_locked, depending on e.g. the frequency of occurrence of spurious eigenvalues in the previous interval.

General nonlinear eigenvalue problems
The following two problems are non-quadratic nonlinear eigenvalue problems, thus the projected problems are solved by the safeguarded iteration (Algorithm 2). For a general nonlinear function we do not have an explicit formula for its zeros (and hence for the Rayleigh functional), as was the case for the quadratic eigenvalue problems, but have to resort to Newton's iteration.
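Such a Newton evaluation of the Rayleigh functional can be sketched as follows; the callable interface for T and its derivative is hypothetical, not QARPACK's:

```python
import numpy as np

def rayleigh_functional_newton(x, T, Tprime, lam0, tol=1e-12, maxit=50):
    """Evaluate the Rayleigh functional p(x) by Newton's method applied
    to f(lam) = x^H T(lam) x, starting from lam0. T and Tprime are
    callables returning the matrices T(lam) and T'(lam)."""
    lam = lam0
    for _ in range(maxit):
        f = np.real(x.conj() @ (T(lam) @ x))
        fp = np.real(x.conj() @ (Tprime(lam) @ x))
        step = f / fp
        lam -= step
        if abs(step) <= tol * max(1.0, abs(lam)):
            break
    return lam
```

In the minmax setting f is increasing through its zero, i.e. x^H T′(p(x))x > 0, so the Newton correction is well defined near p(x).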
In both cases the method was initialized using the lower bound of the interval. The relative residual tolerance was 10 −6 , maximal subspace dimension 80, the slowest admissible convergence rate τ = 0.5 and the automatic restart parameters α = 1 and N α = 1. We report average performance values over 10 runs.
Due to the symmetry of the problem in ξ_1, ξ_2 (the Laplacian is symmetric on a square domain and a(ξ_1, ξ_2) = a(ξ_2, ξ_1), b(ξ_1, ξ_2) = b(ξ_2, ξ_1)) the problem has double eigenvalues. To avoid missing eigenpairs, at each restart we locked the preceding eigenvector along with the anchor in the search subspace, n_locked = 1. We computed all 75 eigenvalues in the interval [150, 250]. Figure 13 shows the linear dependence of the CPU times on the number of computed eigenvalues. On average the NRA method took 485.1 iterations with 8.7 restarts (9.7 LU decompositions) over 247.5 s, 58.2 s of which were spent on the solution of the projected problems, see Table 1.
For problems with double or higher multiplicity eigenvalues, it may be beneficial to consider an extension of the restart strategy to block versions of the Nonlinear Arnoldi and Jacobi-Davidson methods. Block methods by design are well suited for problems with multiple or clustered eigenvalues, as an entire subspace is iterated simultaneously. In principle, the local restarts can be used within block methods, when we simply iterate a number of eigenpairs, q, with consecutive local numbers, say k, k + 1, . . . , k + q − 1, instead of one. At each iteration, all the spurious eigenvalues which disturb the local ordering in the entire interval (λ̃, λ_{ℓ+k+q−1}] would have to be removed, which again could be done using block operations. A large number of numerical tests would be necessary to assess whether such a restarted block method has a significant advantage over the single vector version. A serious discussion of such an extension is beyond the scope of the current paper.

Fluid structure interaction rational NEP
Our second problem is a rational eigenvalue problem, where K, M ∈ R^{n×n} are symmetric and positive definite, C_j ∈ R^{n×n} are matrices of small rank r_j, and 0 < σ_1 < σ_2 < · · · < σ_k are given poles. Problems of this type arise for example in free vibrations of tube bundles immersed in a slightly compressible fluid [8].
Using the search subspace with only the anchor locked, i.e. n_locked = 0, we computed all 84 eigenvalues in the interval [10, 20]. Figure 14 shows the average total CPU time and the CPU time for the solution of the projected nonlinear problems with the safeguarded iteration. The algorithm took on average 464 iterations with 11.6 restarts (12.6 LU decompositions) over an average total CPU time of 275.6 s, 99.1 s of which were spent on the solution of the projected nonlinear problems.

Quadratic eigenvalue problems with eigenvalues with dominant real part
The described local restart procedure hinges upon the minmax property of the nonlinear eigenvalue problem. However, observe that if (λ, x) is an eigenpair of the nonlinear problem T(λ)x = 0 with a complex eigenvalue λ ∈ C, then μ = 0 is an eigenvalue of the linear problem T(λ)x = μx and also of T_V(λ)y = μy if x ∈ V. As there is no natural order in the complex plane, in general we cannot infer the number of the eigenvalue. However, if the eigenvalues have a dominant real part, |ℜ(λ)| ≫ |ℑ(λ)| (or, equivalently up to a transformation, a dominant imaginary part), they can be ordered with respect to the dominant part. Furthermore, this ordering is inherited by the projected problem. If we can solve the projected nonlinear problem for the complex eigenpair (λ, y) (note e.g. the well-known issues with convergence of Newton's method for complex zeros), we can proceed as in the real case, but with the eigenvalues enumerated w.r.t. the ascending real part. In particular, in the polynomial case the projected polynomial problems can be effectively solved by linearization.
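For the polynomial case, the enumeration with respect to the ascending real part can be realized by linearizing the (projected) problem and sorting. A generic sketch for a QEP (λ²M + λC + K)x = 0 via the standard first companion form (a textbook linearization, not the specific formulations used in the examples below):

```python
import numpy as np

def qep_eigs_sorted_by_real(M, C, K):
    """Solve the QEP (lam^2 M + lam C + K) x = 0 by companion
    linearization and return the eigenpairs ordered by ascending real
    part, i.e. the ordering w.r.t. the dominant part described above.
    Illustrative sketch only."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    # first companion form acting on z = [x; lam*x]
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ C]])
    lams, Z = np.linalg.eig(A)
    order = np.argsort(lams.real)   # dominant-real-part ordering
    return lams[order], Z[:n, order]
```

Sorting by the real part here plays the role of the natural ascending order used in the real minmax case.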
We used such an ordering to solve two quadratic eigenvalue problems from the NLEVP collection [5] whose eigenvalues have a dominant either real or imaginary part. The projected problems were solved using linearization and the method was initialized using the lower bound of the interval containing the dominant part of the wanted eigenvalues.

NLEVP "wiresaw2" QEP
We consider a quadratic eigenvalue problem arising from the vibration analysis of a wiresaw including the effect of viscous damping, the "wiresaw2" problem from the NLEVP collection [6], T(λ)x = λ²Mx − iλCx − Kx.
We chose the dimension 2000 and the NLEVP default values of the wire speed and damping parameters, v = 0.01 and η = 0.8, respectively. For this problem neither of the matrices C and K is sparse, hence the relatively small problem dimension. The real parts of the eigenvalues are approximately equal to the corresponding eigenvalues of the "wiresaw1" problem with the same dimension and value of the parameter v, and the imaginary part of all eigenvalues is constant, equal to 0.8.
As for the "wiresaw1" problem, we computed all 100 eigenvalues with the real part in the interval [317, 629] using the same initialization and solver parameters. Figure 15 shows the total CPU time and the CPU time for the solution of the projected quadratic problems. While the solution for each complex eigenvalue takes longer than for the corresponding eigenvalue of the real problem, the qualitative property that the method needs approximately equal CPU time per eigenvalue, regardless of its location in the spectrum, is preserved. On average the solver took 1060 iterations in 851 s, 217 s of which were spent solving the projected quadratic problems, with 11.1 restarts corresponding to 12.1 LU factorizations.

NLEVP "acoustic wave 1D" QEP
We consider a quadratic eigenvalue problem arising from a finite element discretization of a 1D acoustic wave equation with mixed Dirichlet and impedance boundary conditions, the "acoustic wave 1d" problem from NLEVP [6]. The matrices K, M are real symmetric and C is a low-rank complex diagonal matrix dependent on the impedance parameter. In the formulation (12) all the eigenvalues lie in the upper half of the complex plane and have a dominant real part.
Using the NLEVP default value of the impedance parameter ζ = 1 we generated a matrix problem of dimension 30,000. We computed all 100 eigenvalues with the real part in the interval [0, 50] (see Fig. 16). The relative residual tolerance was 10^−6, the maximal subspace dimension 120, the slowest admissible convergence rate τ = 0.5, n_locked = 0 and the automated restart parameters α = 1 and N_α = 1. Figure 17 shows the total CPU time and the time for solving the projected quadratic eigenvalue problems. On average the NRA method took 618.1 iterations and 11.1 restarts (12.1 LU factorizations) in 219.5 s, 67.4 s of which were spent on the solution of the projected linearized problems.

Conclusions
We presented a local restart technique for iterative projection methods for the solution of nonlinear Hermitian eigenvalue problems admitting a minmax characterization of their eigenvalues. We showed how the proposed technique can effectively eliminate the super-linear search subspace growth experienced when computing a large number of eigenvalues. Properly initialized, the method can be employed for computing eigenvalues in the interior of the spectrum. The iterative projection methods considered here work directly on the nonlinear eigenvalue problem, without increasing its size and possibly destroying its structure by prior linearization. In this setting we do not have a transformation, like shift-invert for linear problems, mapping the eigenvalues close to a chosen shift to the exterior of the spectrum. In the absence of such a transformation, spurious eigenvalues are intrinsic to interior eigenvalue computations, and we proposed an effective strategy for dealing with such spurious values. We incorporated the proposed technique into nonlinear iterative projection methods like the Nonlinear Arnoldi and Jacobi-Davidson methods. We illustrated various aspects of the local restart technique on numerical examples. The efficiency of the new restart framework was demonstrated on a range of nonlinear eigenvalue problems: three gyroscopic problems, including a large gyroscopic eigenvalue problem modeling the dynamic behavior of a rotating tire, one exponential and one rational eigenvalue problem. Furthermore, we showed on two quadratic eigenvalue problems how the local restart technique can be extended to problems with complex eigenvalues with a dominant part (either real or imaginary). All the examples in this paper were solved using the MATLAB toolbox QARPACK [2], containing an exemplary implementation of the locally restarted iterative methods (qra: quadratic, nra: general nonlinear solver).
In the future we intend to extend the local restart technique to problems with more general distributions of the eigenvalues in the complex plane, close to a line or an a priori known curve.