Using interval unions to solve linear systems of equations with uncertainties

An interval union is a finite set of closed and disjoint intervals. In this paper we introduce the interval union Gauss–Seidel procedure to rigorously enclose the solution set of linear systems with uncertainties given by intervals or interval unions. We also present the interval union midpoint and Gauss–Jordan preconditioners. The Gauss–Jordan preconditioner is used in a mixed strategy to improve the quality and efficiency of the algorithm. Numerical experiments on interval linear systems generated at random show the capabilities of our approach.


Introduction
In traditional interval arithmetic, division by an interval containing zero overestimates the range when the latter is disconnected. Treating this using complements of intervals (see, e.g., [23]) only postpones the problem a little, while interval union arithmetic, introduced in [24] as arithmetic on finite ordered sets of disjoint closed, possibly unbounded intervals, allows a mathematically and computationally natural approach to this problem. Indeed, the collection of interval unions (treated as closed sets in the obvious way) is closed under set-theoretic addition, subtraction, multiplication, division (after adding end points in case of an unbounded divisor), and all continuous elementary operations.
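The disconnected ranges that motivate interval unions already appear in a single division. The following sketch (illustrative Python, not the paper's Java implementation; all names are our own) computes the exact range of b/a as a list of at most two closed intervals:

```python
import math

def extended_div(b, a):
    """Exact range {y/x : y in b, x in a, x != 0} of the extended division,
    returned as a list of closed (possibly unbounded) (lo, hi) intervals."""
    (blo, bhi), (alo, ahi) = b, a
    if alo > 0 or ahi < 0:                      # 0 not in a: ordinary division
        c = [blo / alo, blo / ahi, bhi / alo, bhi / ahi]
        return [(min(c), max(c))]
    if blo <= 0 <= bhi:                         # 0 in both: whole real line
        return [(-math.inf, math.inf)]
    if bhi < 0:                                 # b entirely negative
        parts = [(-math.inf, bhi / ahi) if ahi > 0 else None,
                 (bhi / alo, math.inf) if alo < 0 else None]
    else:                                       # b entirely positive
        parts = [(-math.inf, blo / alo) if alo < 0 else None,
                 (blo / ahi, math.inf) if ahi > 0 else None]
    return [p for p in parts if p is not None]

print(extended_div((1.0, 2.0), (-1.0, 1.0)))
# → [(-inf, -1.0), (1.0, inf)]
```

For b = [1, 2] and a = [-1, 1] the exact range is (-∞, -1] ∪ [1, ∞); an ordinary interval division would have to return the whole real line as its hull.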
Many theoretical results from interval analysis remain valid for interval unions. For example, elementary operations and standard functions are inclusion isotone and the fundamental theorem of interval analysis also generalizes to interval unions. On the other hand, properties based on convexity (like the interval mean value theorem) do not apply to interval unions.
In this paper we study the rigorous solution of interval union linear systems of equations (IULS). We denote interval unions and vectors of interval unions by bold calligraphic letters (such as a, x), while matrices of interval unions are denoted by capital bold calligraphic letters (e.g., A, B). Let A and b be a matrix and a vector with interval union entries respectively. If x₀ is a given initial interval union vector, we are interested in finding an enclosure of the solution set of the family of equations

Ax = b (A ∈ A, b ∈ b), x ∈ x₀. (1)

This problem has several applications in rigorous numerical analysis. Since interval linear systems are embedded into the interval union framework, any algorithm that relies on the rigorous solution of interval linear systems can benefit from the methods discussed in this paper. For example, constraint propagation methods [5] and the interval Newton operator [20,21] can be significantly improved with the use of interval union techniques. Moreover, interval union linear systems of equations can be used to define an interval union branch and bound framework for rigorous global optimization. This application will be detailed in a future work.
Related work: A closely related concept is that of multi-intervals, introduced independently by Yakovlev [28] and Telerman (see Telerman et al. [26]). According to [27], they are defined as a union of closed intervals that are not necessarily disjoint, making them slightly more general than the interval unions of the present paper.
Multi-interval arithmetic is (a not separately accessible) part of the publicly available software Unicalc [1,22] for solving constraint satisfaction problems and nonlinear systems of equations. Another implementation of multi-intervals is described in [25]. Parallel algorithms for interval and multi-interval arithmetic are the subject of [17]. Kreinovich et al. [18] use multi-intervals to study the existence of algorithms to solve algebraic systems. No systematic performance evaluation seems to be known. Multi-intervals were also applied to the analysis of analog circuits [7], to the modeling of financial models under partial uncertainty [19], and to bit-width optimization [2].
Another variant of interval unions are the discontinuous intervals by Hyvönen [11], applied in [12,13] to simple constraint satisfaction problems and spreadsheet computations. They are disjoint unions of closed, half-open, or open intervals. In our opinion, the extra bookkeeping effort to distinguish between closed and open endpoints is not warranted in most applications.
Content: We organized this paper as follows: Sect. 2 summarizes the fundamentals of interval union arithmetic. In Sect. 3, we define interval union matrices, vectors and linear systems of equations.
In Sect. 4, we introduce two forms of the interval union Gauss-Seidel procedure to solve (1): the partial form and the complete form. In the partial form, we update only the variable corresponding to the main diagonal entry of A at each iteration. In the complete form, we update all variables in each row.
Preconditioner heuristics are the subject of Sect. 5. Interval algorithms usually precondition the initial interval linear system to improve the quality of the solution. We extend the idea of preconditioning to interval unions and study two different preconditioning heuristics. The first one is the midpoint method: it takes the inverse of the midpoint of the hull matrix of the system A as the preconditioner. The second one is the Gauss-Jordan preconditioner which is based on the Gauss-Jordan elimination as discussed in [6].
Since preconditioning large systems becomes intractable due to the cost of the matrix multiplication required in the preconditioning heuristics, we propose a mixed strategy that combines the original system with its preconditioned form.
Section 6 presents the results of our numerical experiments. We consider randomly generated interval linear systems in order to compare traditional interval methods with our new approach. We take linear systems with n ∈ {2, 3, 5, 10, 15, 20, 30, 50} where the entries of A, b and x have radius r ∈ {0.1, 0.2, . . . , 2.9, 3.0}.
The experiment shows that interval union methods produce better enclosures than their interval counterparts. The interval union Gauss-Seidel procedure, with and without preconditioners, produces enclosures up to 25% sharper than those obtained by interval methods. Moreover, there are no significant differences between the execution times of interval and interval union methods.
Notation: We denote the vector space of all m × n matrices A with real entries A ik (i = 1, . . . , m, k = 1, . . . , n) by R m×n . The vector space of all column vectors v of length n and entries v i is denoted by R n = R n×1 .
The n-dimensional identity matrix is given by I. We denote the set of induces 1, . . . , N by 1 : N and write A i: and A : j to denote the i-th row and j-th column of the matrix A respectively.
We assume that the reader is familiar with basic interval arithmetic. A comprehensive approach to this subject is given by [21]. For the interval arithmetic notation, we mostly follow [16]. Let a̲, ā ∈ R with a̲ ≤ ā; then a = [a̲, ā] denotes an interval with inf(a) := min(a) := a̲ and sup(a) := max(a) := ā. The set of nonempty compact real intervals consists of all [a̲, ā] with a̲, ā ∈ R and a̲ ≤ ā. We will allow the extremes of the intervals to assume the ideal points −∞ and ∞, and define the set of closed real intervals as

IR := {[a̲, ā] | a̲ ∈ R ∪ {−∞}, ā ∈ R ∪ {∞}, a̲ ≤ ā}.

The width of the interval a ∈ IR is given by wid(a) := ā − a̲, its magnitude by |a| := max(|a̲|, |ā|) and its mignitude by

⟨a⟩ := min(|a̲|, |ā|) if 0 ∉ a, and ⟨a⟩ := 0 otherwise.

The midpoint of a ∈ IR is ǎ := mid(a) := (a̲ + ā)/2 and the radius of a ∈ IR is â := rad(a) := (ā − a̲)/2. An interval is called degenerate if wid(a) = 0.
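The quantities defined above can be sketched in a few lines; intervals are modeled as (lo, hi) pairs and all function names are our own, not taken from any interval library.

```python
# Basic interval quantities: width, magnitude, mignitude, midpoint, radius.
def wid(a):
    lo, hi = a
    return hi - lo

def mag(a):
    lo, hi = a
    return max(abs(lo), abs(hi))

def mig(a):
    # mignitude: smallest absolute value in a; zero when a contains zero
    lo, hi = a
    return 0.0 if lo <= 0.0 <= hi else min(abs(lo), abs(hi))

def mid(a):
    lo, hi = a
    return (lo + hi) / 2

def rad(a):
    lo, hi = a
    return (hi - lo) / 2

a = (-1.0, 3.0)
print(wid(a), mag(a), mig(a), mid(a), rad(a))   # 4.0 3.0 0.0 1.0 2.0
```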
For any set S ⊆ R, the smallest interval containing S is called the interval hull of S and denoted by □S. The notions of elementary operations between intervals and inclusion properties are the same as presented in [21]. If a, b ∈ IR then the extended division b/a is defined as the closure of the set {y/x | y ∈ b, x ∈ a, x ≠ 0} (see, e.g., [23]); when 0 ∈ a it consists of at most two closed, possibly unbounded intervals, e.g., [1, 2]/[−1, 1] = (−∞, −1] ∪ [1, ∞). An interval vector x = [x̲, x̄] is the Cartesian product of the closed real intervals x_i = [x̲_i, x̄_i]. We denote the set of all interval vectors of dimension n by IR^n.
We denote interval matrices by capital bold letters (A, B, …) and the set of all m × n interval matrices is given by IR m×n .
For some applications, the interval subtraction may overestimate the range of the real computation. In order to cope with this situation we also define inner subtraction for intervals. If a, b ∈ IR then

a ⊖ b := [a̲ − b̲, ā − b̄], (3)

interpreted as empty when a̲ − b̲ > ā − b̄. For a comprehensive review of inner operations, see [3].
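Assuming the standard definition of the inner (dual) subtraction sketched in (3), a minimal implementation looks as follows; the empty result is modeled as None, and the names are illustrative.

```python
def inner_sub(a, b):
    # Inner subtraction [a_lo - b_lo, a_hi - b_hi]; the result is empty (None)
    # when the bounds cross, i.e. when b is wider than a.
    lo, hi = a[0] - b[0], a[1] - b[1]
    return (lo, hi) if lo <= hi else None

print(inner_sub((1.0, 5.0), (0.0, 1.0)))   # (1.0, 4.0)
print(inner_sub((0.0, 1.0), (-5.0, 5.0)))  # None
```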

Interval unions
This section introduces the basics of interval unions. For more details on the topics covered in this section see [24].
Definition 1 An interval union u of length l(u) := k is a finite set of k disjoint closed intervals,

u = (a_1, . . . , a_k) with ā_i < a̲_{i+1} for i ∈ 1 : k − 1.

We denote by U_k the set of all interval unions of length ≤ k. The set of all interval unions is then U := ∪_{k≥0} U_k where we define U_0 := ∅.
If u ∈ U is an interval union with l(u) = k then for any x ∈ R we say that x ∈ u if x ∈ a_i for some i ∈ 1 : k. The relation above extends naturally to intervals and to other interval unions, so that if v is an interval union then v ⊆ u if every x ∈ v satisfies x ∈ u.
Let S be a set of k intervals with k < ∞. The smallest interval union with respect to inclusion that satisfies a ⊆ u for all a ∈ S is called the union creator U(S) of S. (4) The projection of the point x ∈ R into the interval union u ∈ U_k is given by proj(x, u), the point of u closest to x.

Definition 2
(i) The elementary interval union operation • : U × IR → U is given by u • a := U({b • a | b ∈ u}).
(ii) The elementary interval union operation • : U × U → U is given by u • v := U({u • b | b ∈ v}).

The following result gives basic properties of interval union arithmetic; see [24]. In particular, elementary interval union operations are inclusion isotone: if u ⊆ u′ and v ⊆ v′ then u • v ⊆ u′ • v′.
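A minimal model of the union creator and the projection: an interval union is represented as a sorted list of disjoint (lo, hi) pairs, and the names are illustrative, not from the paper's software.

```python
def union_creator(intervals):
    # Smallest interval union containing every interval of the input set:
    # sort by lower bound and merge intervals that overlap or touch.
    out = []
    for lo, hi in sorted(intervals):
        if out and lo <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], hi))
        else:
            out.append((lo, hi))
    return out

def clamp(x, lo, hi):
    return min(max(x, lo), hi)

def proj(x, u):
    # Projection of the real point x into the union u: the nearest point of u.
    return min((abs(x - clamp(x, lo, hi)), clamp(x, lo, hi)) for lo, hi in u)[1]

u = union_creator([(0.0, 1.0), (0.5, 1.5), (2.0, 3.0)])
print(u, proj(5.0, u))   # [(0.0, 1.5), (2.0, 3.0)] 3.0
```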

Interval union vectors, matrices and linear systems
Definition 3 An m × n interval union matrix is a rectangular array of interval unions with m rows and n columns. We denote interval union matrices by capital bold calligraphic letters (A, B, …) and the (i, j)-element of the interval union matrix A is given by A_{ij}. The set of m × n interval union matrices is given by U^{m×n}. In a similar way, n × 1 interval union matrices are called interval union vectors. We denote interval union vectors by bold calligraphic letters (u, x, …) and the set of all n-dimensional interval union vectors is given by U^n. We denote the set of n-dimensional vectors u satisfying l(u_i) = k_i by U^n_{k_1,...,k_n}. Given a set of interval vectors {u_1, . . . , u_p}, the union creator vector is denoted by v := U({u_1, . . . , u_p}), where the union creator U defined in (4) is applied componentwise. Let u be an n-dimensional interval union vector satisfying l(u_i) = k_i and p = ∏_{i=1}^n k_i. If we denote the Cartesian product between two interval unions by × then the mapping S : U^n_{k_1,...,k_n} → (IR^n)^p given by

S(u) := {a_{1 j_1} × · · · × a_{n j_n} | j_i ∈ 1 : k_i, i ∈ 1 : n},

where a_{ij} is the j-th component interval of u_i, splits the interval union u into a set of p disjoint interval vectors. Notice that interval union vectors can be used to represent p disjoint interval vectors storing only Σ_{i=1}^n k_i elements. This is a clear advantage over traditional interval arithmetic, especially when n is large. The mapping S and the definition of union creator can be naturally extended to matrices.
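The splitting map S can be sketched directly with itertools: an interval union vector is a list of unions, and the boxes are the Cartesian product of their component intervals (illustrative code, not the paper's implementation).

```python
from itertools import product

def split(u):
    # Split an interval union vector (a list of unions, each a list of
    # disjoint (lo, hi) intervals) into the disjoint interval boxes it encodes.
    return [list(box) for box in product(*u)]

u = [[(0, 1), (2, 3)], [(5, 6)]]
print(split(u))   # [[(0, 1), (5, 6)], [(2, 3), (5, 6)]]
```

Here the vector stores 2 + 1 = 3 intervals but encodes 2 · 1 = 2 disjoint boxes; with k_i = 2 in every one of n components, 2n stored intervals encode 2^n boxes.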
Interval union matrices and vectors follow the usual definition of arithmetic operations. Formally, if A, B ∈ U^{m×n} and C ∈ U^{n×p} then

(A + B)_{ij} := A_{ij} + B_{ij} (8)

and

(AC)_{ij} := Σ_{k=1}^n A_{ik} C_{kj}. (9)

Proposition 2 Let A, A′, B, B′ ∈ U^{m×n} and C, C′ ∈ U^{n×p}. If A ⊆ A′, B ⊆ B′ and C ⊆ C′ then A + B ⊆ A′ + B′ and AC ⊆ A′C′.

Proof Follows from Relations (5)-(7) applied to Definitions (8) and (9).
An interval union linear system of equations (IULS) with coefficients A ∈ U^{n×n} and b ∈ U^n is the family of linear equations

Ax = b (A ∈ A, b ∈ b). (10)

This paper deals only with square systems, though the generalization to systems of the form m × n is straightforward. The solution set of (10) is defined by

Σ(A, b) := {x ∈ R^n | Ax = b for some A ∈ A, b ∈ b}. (11)

As in the interval case, (11) can be a non-convex or disconnected set. Let x₀ ∈ U^n be an interval union vector. The truncated solution set of (10) is

Σ(A, b) ∩ x₀. (12)

The following proposition states that (11) is identical to the union of the solution sets from the interval components of A and b.

Proposition 3 Let A ∈ U^{n×n} and b ∈ U^n. Then

Σ(A, b) = ∪ {Σ(A, b) | A ∈ S(A), b ∈ S(b)}.

Proof The result follows from the definition of S(A) and S(b).
Let A and b be an interval matrix and vector respectively. The problem of finding Σ(A, b) and Σ(A, b) ∩ x₀ is known to be NP-hard (see, e.g., [8,18]). Therefore, Proposition 3 implies that finding U(Σ(A, b)) and U(Σ(A, b) ∩ x₀) are also NP-hard problems. This paper focuses on algorithms to enclose U(Σ(A, b) ∩ x₀). Formally, we are interested in finding nontrivial vectors y (i.e., y ≠ x₀) satisfying

U(Σ(A, b) ∩ x₀) ⊆ y ⊆ x₀.

Proposition 3 gives a natural approach to this problem. It consists in the application of the interval Gauss-Seidel procedure described in [9,14,21] to each system obtained by splitting A and b. Let p, q and r denote the numbers of interval matrices and vectors obtained by splitting A, b and x₀ respectively. The approach proposed above then requires the solution of p·q·r interval linear systems of equations and does not take the structure of the interval union matrix and vector into account. The next section presents extensions of the Gauss-Seidel procedure to interval unions. We show that even in problems where A ∈ U₁^{n×n} and b ∈ U₁^n, interval union algorithms give better results than their interval counterparts.
The interval union matrix A ∈ U^{n×n} is said to be regular if every real matrix A ∈ A is nonsingular. The interval union inverse of a regular matrix A is given by

A⁻¹ := U({A⁻¹ | A ∈ A}).

Proposition 4 Let A ∈ U^{n×n} be a regular matrix and b ∈ U^{n×1}. Then U(Σ(A, b)) ⊆ A⁻¹b.

The interval union Gauss-Seidel method
Let A ∈ U^{n×n}, b ∈ U^n and x₀ ∈ U^n. In this section we introduce the interval union Gauss-Seidel procedure to rigorously enclose the solution set of

Ax = b (A ∈ A, b ∈ b), x ∈ x₀.

We first discuss the univariate interval union Gauss-Seidel operator and show its properties using the definitions and results from [21].
For higher dimensions, we present two versions of the Gauss-Seidel procedure. In the first version, called the partial form, we update only the variable corresponding to A ii in the ith row. In the second, named complete, we consider all variables at each iteration.

Interval union Gauss-Seidel operator
Let a, b, x ∈ U. The interval union linear system in this case reduces to

ax = b (a ∈ a, b ∈ b), x ∈ x. (13)

As in Definition (12), the truncated solution set is given by

Σ(a, b) ∩ x. (14)

The univariate interval union Gauss-Seidel operator is defined by

Γ(a, b, x) := (b/a) ∩ x.

Proof From Definition 2, we have that b/a encloses {b/a | a ∈ a, b ∈ b, a ≠ 0}, and (15) follows from taking the intersection with x. To prove (16), note that it is implied by Definitions (13) and (14); (17) follows immediately from (16). If b/a ∩ x = ∅ then the set {x ∈ x | ax = b for some a ∈ a, b ∈ b} is empty and Relation (18) holds. Relations (19) and (20) follow immediately from the extended division in Definition (2) and the inclusion property, respectively.
Let A ∈ U^{n×n}, b ∈ U^n, A ∈ A and b ∈ b. If A_ii ≠ 0 and x̃ ∈ x is an approximation of the solution of Ax = b then the Gauss-Seidel iteration is given by

x̃_i := (b_i − Σ_{k≠i} A_{ik} x̃_k) / A_{ii}. (21)

Since all elementary operations are inclusion isotone, the right side of (21) truncated to x can be written in the form of the Gauss-Seidel operator Γ. Denote by y_i the improved interval union enclosure obtained from x_i and let

y_i := Γ(A_{ii}, b_i − Σ_{k≠i} A_{ik} x_k, x_i). (22)

Finally, we denote by Γ(A, b, x) the Cartesian product of the variables y_1, . . . , y_n, and we have the following result.

Proposition 6 Let A ∈ U^{n×n}, b ∈ U^n and x ∈ U^n. Then

Γ(A, b, x) ⊆ x (23)

and

Σ(A, b) ∩ x ⊆ Γ(A, b, x). (24)

Proof Relation (23) follows from the component-wise application of (20). Since any x̃ ∈ Σ(A, b) ∩ x admits A ∈ A and b ∈ b such that Ax̃ = b, Relation (24) follows from (21) and Definition (22).
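The univariate operator Γ(a, b, x) := (b/a) ∩ x can be sketched by combining an extended division (repeated here so the sketch runs standalone) with a pairwise intersection of unions; all names are illustrative, not from the paper's software.

```python
import math

def extended_div(b, a):
    # Exact range {y/x : y in b, x in a, x != 0} as a list of disjoint intervals.
    (blo, bhi), (alo, ahi) = b, a
    if alo > 0 or ahi < 0:                       # 0 not in a: ordinary division
        c = [blo / alo, blo / ahi, bhi / alo, bhi / ahi]
        return [(min(c), max(c))]
    if blo <= 0 <= bhi:                          # 0 in both operands
        return [(-math.inf, math.inf)]
    if bhi < 0:
        parts = [(-math.inf, bhi / ahi) if ahi > 0 else None,
                 (bhi / alo, math.inf) if alo < 0 else None]
    else:
        parts = [(-math.inf, blo / alo) if alo < 0 else None,
                 (blo / ahi, math.inf) if ahi > 0 else None]
    return [p for p in parts if p is not None]

def intersect(u, v):
    # Pairwise intersection of two interval unions.
    out = []
    for lo1, hi1 in u:
        for lo2, hi2 in v:
            lo, hi = max(lo1, lo2), min(hi1, hi2)
            if lo <= hi:
                out.append((lo, hi))
    return out

def gauss_seidel_1d(a, b, x):
    # Gamma(a, b, x) := (b / a) ∩ x for intervals a, b and an interval union x.
    return intersect(extended_div(b, a), x)

print(gauss_seidel_1d((-1.0, 1.0), (1.0, 2.0), [(-4.0, 4.0)]))
# → [(-4.0, -1.0), (1.0, 4.0)]
```

The interval version of the same step would have to return the hull [-4, 4], i.e. no progress at all; the union keeps the gap (-1, 1).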

Partial form
We implement the partial Gauss-Seidel procedure, based on the Gauss-Seidel operator (15), in Algorithm 1. We incorporate Relations (18) and (19) into the algorithm in order to avoid unnecessary divisions. We stop the algorithm when the following criteria are reached for ε_Abs > 0 and ε_Rel > 0:

max wid(x) − max wid(y) < ε_Abs and 1 − max wid(y)/max wid(x) < ε_Rel. (25)

Example 1 compares Algorithm 1 with the traditional interval Gauss-Seidel procedure on a 2 × 2 system; Fig. 1 shows the resulting enclosures. Since A_{ij}, b_i, x_i ∈ U₁ for every i and j, we can compare the performance of Algorithm 1 with the traditional interval Gauss-Seidel procedure directly. The interval Gauss-Seidel procedure applied to the permuted matrix gives an improvement of 63% in volume and 54% in the maximum width compared to the initial box. We describe now the application of Algorithm 1 to the problem. In this case, the interval union Gauss-Seidel procedure solves the problem directly, without any permutation.
After the internal loop finishes, the interval union Gauss-Seidel procedure produces 4 disjoint boxes, representing an improvement of 76% in volume and 60% in maximum width compared to the initial box. There is no further improvement in y₁ and y₂ if we set K = 2 in Algorithm 1.
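A stripped-down sweep of the partial form can be sketched as follows. This is not the paper's Algorithm 1 (no gap filling, no stopping test, and for brevity the off-diagonal residual is hulled and 0 ∉ A_ii is assumed, so ordinary interval division suffices); intervals are (lo, hi) pairs and all names are invented.

```python
def i_sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def i_mul(a, b):
    c = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(c), max(c))

def i_div(b, a):
    # Ordinary interval division, assuming 0 not in a.  (When 0 is in a, the
    # extended division applies and may split x_i into several pieces.)
    c = [b[0] / a[0], b[0] / a[1], b[1] / a[0], b[1] / a[1]]
    return (min(c), max(c))

def partial_gauss_seidel(A, b, x):
    # One sweep: row i updates only x_i; each x_i is a list of disjoint pieces.
    n = len(b)
    y = [list(xi) for xi in x]
    for i in range(n):
        s = b[i]                         # s = b_i - sum_{k != i} A_ik x_k
        for k in range(n):
            if k != i:
                hull = (min(p[0] for p in y[k]), max(p[1] for p in y[k]))
                s = i_sub(s, i_mul(A[i][k], hull))
        qlo, qhi = i_div(s, A[i][i])
        new_i = []
        for plo, phi in y[i]:            # Gamma: (s / A_ii) ∩ x_i, piece by piece
            lo, hi = max(qlo, plo), min(qhi, phi)
            if lo <= hi:
                new_i.append((lo, hi))
        y[i] = new_i
    return y

A = [[(2.0, 2.0), (0.0, 0.0)], [(0.0, 0.0), (2.0, 2.0)]]
b = [(2.0, 2.0), (4.0, 4.0)]
print(partial_gauss_seidel(A, b, [[(-10.0, 10.0)], [(-10.0, 10.0)]]))
# → [[(1.0, 1.0)], [(2.0, 2.0)]]
```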

Complete form
Algorithm 1 is said to be partial since it considers only the variable corresponding to the diagonal entry at each iteration. In the following, we present the complete Gauss-Seidel procedure. It applies the Gauss-Seidel operator to all variables at each iteration.
The enclosure obtained by the complete Gauss-Seidel procedure is at least as tight as the one given by the partial version. On the other hand, the complete procedure requires more calculations and may be prohibitive in higher dimensions.
In order to improve the efficiency of the complete Gauss-Seidel procedure, we apply inner subtraction to each row. Note that the Gauss-Seidel operator applied to the variable x_j in the i-th row is given by

x_j ← Γ(A_{ij}, b_i − Σ_{k≠j} A_{ik} x_k, x_j).

Considering the auxiliary variable s := b_i − Σ_{k=1}^n A_{ik} x_k, the Gauss-Seidel operation becomes

x_j ← Γ(A_{ij}, s ⊖ (−A_{ij} x_j), x_j),

where ⊖ is the interval union generalization of the inner subtraction defined by Equation (3). Algorithm 2 gives the complete form of the interval union Gauss-Seidel procedure. It also implements Relations (18) and (19) to avoid unnecessary divisions. The stopping criteria adopted for this algorithm are the same as in Algorithm 1.

Example 2 (Example 1 revisited) Let A, b and x be given as in Example 1. The solution sets obtained by the application of the complete form of the interval and interval union Gauss-Seidel procedures are given in Fig. 2, representing an improvement of 85% in volume and 72% in the maximum width compared to the initial box. Note that the complete form removes two interval boxes that do not contain any solution and that could not be deleted with the partial form (see Figs. 1 and 2). Again, there is no improvement in y₁ and y₂ if we set K = 2 in Algorithm 2.

Gap filling
The number of boxes produced by Algorithms 1 and 2 may increase exponentially with the number of divisions by intervals containing zero. A similar phenomenon was already observed by Hyvönen [11] for the propagation of discontinuous intervals; however, the remedy proposed there (simply taking the interval hull) unnecessarily discards useful information. As a more flexible remedy, [24] introduced the notion of gap filling. In this section we describe a gap filling strategy that (among several strategies tried) proved useful for the interval union Gauss-Seidel procedure.
A gap filling is a mapping g : U_k → U_k satisfying x ⊆ g(x) and □x = □g(x) for any x ∈ U_k. Two trivial gap fillings would be g(x) = x and g(x) = □x. The gap filling g(x) = x, however, does not avoid the exponential increase in the number of boxes produced by Algorithms 1 and 2. On the contrary, the gap filling g(x) = □x does not lead to an increased number of boxes, but loses all gap information. Therefore, in Algorithm 3 we propose a gap filling that controls the maximum number of gaps produced: while the number of gaps of x exceeds a prescribed bound, we find the gap g of x with smallest width and set x ← x ∪ g. Algorithm 3 can be modified to also handle interval union vectors and matrices. In this case we look for the gap with the smallest width in the whole vector or matrix and fill it in the while loop of the algorithm. Note that using a multi-map data structure in the implementation of the gap filling for vectors and matrices allows faster access to the smallest gaps, improving the overall speed of the algorithm.
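The gap-filling idea can be sketched as follows: repeatedly merge across the narrowest gap until at most a prescribed number of gaps remains (illustrative code; the paper's Algorithm 3 also covers vectors and matrices).

```python
def fill_gaps(u, max_gaps):
    # u: sorted list of disjoint (lo, hi) intervals; fill the narrowest gap
    # between consecutive intervals until at most max_gaps gaps remain.
    u = sorted(u)
    while len(u) - 1 > max_gaps:
        j = min(range(len(u) - 1), key=lambda i: u[i + 1][0] - u[i][1])
        u[j:j + 2] = [(u[j][0], u[j + 1][1])]    # merge across the gap
    return u

print(fill_gaps([(0, 1), (1.1, 2), (5, 6)], 1))   # [(0, 2), (5, 6)]
```

Filling the narrowest gaps first keeps the widest ones, which carry the most useful information for excluding regions of the search space.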

Preconditioners
In this section we present the midpoint and Gauss-Jordan preconditioners for interval union linear systems. Preconditioning the initial system is usually necessary to obtain meaningful bounds on the solution set. A preconditioner is any nonsingular real matrix C.
Given A ∈ U^{n×n}, b ∈ U^n and x₀ ∈ U^n, we are interested in preconditioners C for which the preconditioned system

(CA)x = Cb, x ∈ x₀ (26)

yields sharp enclosures of the original truncated solution set. Since any nonsingular matrix can be chosen as a preconditioner, there are several heuristics to determine C according to the application. In the interval case, the midpoint preconditioner is the common choice in a number of problems. Optimal linear programming preconditioners are designed by [14] in the context of the interval Newton operator, and the Gauss-Jordan preconditioner is proposed by [6]. See also [10] and [15] for recent methods on optimal preconditioning. The midpoint preconditioner in the interval union framework takes the form

C := proj(mid(□A), A)⁻¹,

where the midpoint and proj operators are applied component-wise.
The Gauss-Jordan preconditioner is based on the real Gauss-Jordan elimination algorithm with pivot search. Given a square matrix A ∈ R^{n×n}, the algorithm computes C and a permutation matrix P ∈ R^{n×n} such that CAP = I.
In this paper we take A = proj(mid(□A), A). It is worth noting that, due to the permutation matrix, we apply the Gauss-Seidel procedure to the modified problem My = r (M ∈ CAP, r ∈ Cb, y ∈ x₀P). The new enclosure represents an improvement of 34% in volume compared to the initial box. Note that we must apply the inverse permutation to y₁ and y₂ in order to obtain the correct enclosure. In this example, the same result would be obtained by applying the midpoint preconditioner.
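A sketch of the midpoint preconditioner for a 2 × 2 interval system: C is the inverse of the midpoint matrix, and CA, Cb are then evaluated in interval arithmetic. Outward rounding is omitted for brevity, so the result is only approximately rigorous; all names are illustrative.

```python
def mid(a):
    return (a[0] + a[1]) / 2.0

def inv2(M):
    # Inverse of a real 2x2 matrix.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

def scale(c, a):
    # Real scalar times interval.
    return (c * a[0], c * a[1]) if c >= 0 else (c * a[1], c * a[0])

def i_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def precondition(A, b):
    # C := mid(A)^{-1}; return (C*A, C*b) as an interval matrix and vector.
    C = inv2([[mid(A[i][j]) for j in range(2)] for i in range(2)])
    CA = [[i_add(scale(C[i][0], A[0][j]), scale(C[i][1], A[1][j]))
           for j in range(2)] for i in range(2)]
    Cb = [i_add(scale(C[i][0], b[0]), scale(C[i][1], b[1])) for i in range(2)]
    return CA, Cb

A = [[(1.9, 2.1), (-0.1, 0.1)], [(-0.1, 0.1), (1.9, 2.1)]]
b = [(1.0, 1.0), (2.0, 2.0)]
CA, Cb = precondition(A, b)
print(CA[0][0], Cb[0])   # (0.95, 1.05) (0.5, 0.5)
```

For interval union matrices, the midpoint of the hull may fall into a gap, which is why the paper projects it back onto the union before inverting.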
The matrix C is dense in general. Therefore, preconditioner strategies may be prohibitive for large linear systems of equations. Moreover, forming systems of the form (26) requires the expensive matrix products CA and Cb. We therefore introduce a mixed strategy that combines the original linear system with its preconditioned form. Given A ∈ U^{n×n}, b ∈ U^n and x ∈ U^n, we alternate between the solution of the original system and the preconditioned form (26) until one of the following holds: (1) we prove that there is no solution in x, (2) the maximum number of iterations is reached, or (3) we do not gain enough in the last solution of both the original and the preconditioned systems.
Algorithm 4 implements the mixed strategy using the partial or complete forms of the interval union Gauss-Seidel procedure. The boolean variables gainUnprec and gainGS control the next iteration of the algorithm. If both are false then neither the Gauss-Seidel procedure without preconditioning nor the same procedure with preconditioning gave a substantial improvement on the current box, and the mixed algorithm stops. Algorithm 4 can be modified to apply the midpoint preconditioner instead of the Gauss-Jordan method.

Numerical experiments
In this section we perform numerical experiments to compare the interval union Gauss-Seidel procedure with its interval counterpart. We consider the partial and complete forms of the Gauss-Seidel procedure as well as the midpoint and the Gauss-Jordan preconditioners. In this test, we take only interval linear systems of equations into account. The experiment is described in Algorithm 5.
In this section, we set the parameters of Algorithms 1 and 2 as ε_Abs = ε_Rel = 10⁻⁴, with K = 2 for the partial form and K = 1 for the complete form. In the gap filling Algorithm 3, we set the maximum number of gaps in an interval union to g = 2 and the maximum number of boxes for interval union vectors to 64.
In Algorithm 5, we set R := {0.1, 0.2, . . . , 2.9, 3.0}, N := {2, 3, 5, 10, 15, 20, 30, 50} and T = 100. The entries of A, b and x have radius given by r ∈ R and satisfy the rules described in Table 1. Figures 3, 4 and 5 summarize the results of the experiment. Each point in these graphs is the average of the maximum width gained with the methods over a set of 4000 problems taken at random (100 for each n ∈ N and for each of the 5 cases of Table 1).

Algorithm 5 Performance analysis
Input: The set of radii R, the set of sizes N and the number of trials T.
Output: The average maximum width gained and elapsed time for each combination of Gauss-Seidel procedure form and preconditioner.
1: for r ∈ R do
2:   for n ∈ N do
3:     for i = 1 : T do
4:       Generate the random matrix A of size n such that rad(A) = r;
5:       Generate vectors b and x of size n such that rad(b) = rad(x) = r;
6:       Run the instance with all variants of the Gauss-Seidel procedure;
7:       Save the data;
8:     end for
9:   end for
10: end for

Table 1 describes the processes that generate the matrices and vectors A, b and x; the number n stands for the dimension of the linear system. Tables 2 and 3 show the average elapsed time for each method (Unprec. stands for the algorithms without preconditioning). All the algorithms were implemented in JGloptlab [4], a Java implementation of state of the art global optimization algorithms. We ran the experiment on a Core i7 processor with 6 GB of RAM.

It is clear that the interval union Gauss-Seidel procedure produces better enclosures than the interval method. Tables 2 and 3 show that there are no significant differences between the execution times of the Gauss-Seidel procedure with intervals and with interval unions. Figure 6 shows the effect of the dimension on the quality of the computed enclosures considering the Gauss-Jordan preconditioner.

The exponential increase in the number of boxes produced by Algorithms 1 and 2 is one of the main concerns regarding the use of interval union arithmetic. We note that the maximum number of boxes produced during the interval union Gauss-Seidel procedure is, on average, never greater than 3, as shown by Figs. 7 and 8. Moreover, we reach the maximum number of boxes prescribed in Algorithm 3 during the execution of the procedure only in 10% of the 120,000 instances with the complete form. We never reach the maximum number of boxes with the partial form.

Mixed preconditioner strategy
It is clear from Tables 2 and 3 that the interval union Gauss-Seidel procedure without preconditioner is several times faster than the same method with preconditioners. Moreover, there are problems where the preconditioner leads to poorer bounds than the solution of the original system.
We finish this section by comparing Algorithms 1 and 2 with the mixed strategy proposed in Algorithm 4. In this experiment we set the parameters of all algorithms as ε_Abs = ε_Rel = 10⁻⁴ and K = 2, and we perform the experiment on the same test set described previously. Figures 9 and 10 show the results of the experiment (Unprec. is the interval union Gauss-Seidel without preconditioner and Mixed is the strategy described in Algorithm 4). Table 4 compares the average elapsed time for each method.
The figures show that the mixed strategy produces bounds that are, on average, sharper than those obtained with the simple methods. This can be explained by the observation that there is no dominant preconditioner strategy. The Gauss-Jordan preconditioner is better suited to cope with some problems (for example, ill-conditioned ones) while the original system provides better solutions for other classes of interval linear systems (for example, diagonally dominant ones). On the other hand, Table 4 shows that the mixed strategy is not faster than the Gauss-Jordan preconditioner. This is due to the fact that in many problems the second iteration of Algorithm 4 is needed.

Concluding remarks
In this paper, we introduce the interval union Gauss-Seidel procedure to rigorously enclose the solution set of interval union linear systems Ax = b (A ∈ A, b ∈ b), x ∈ x₀. The Gauss-Seidel procedure is presented in two forms: the partial one (Algorithm 1) and the complete one (Algorithm 2). At each iteration, the former updates only the variable corresponding to the main diagonal of the matrix A, whereas the latter updates every variable.
We also studied two preconditioner heuristics for the interval union Gauss-Seidel procedure: the midpoint preconditioner, which takes the inverse of the midpoint of the interval hull of A, and the Gauss-Jordan preconditioner, which is based on the interval version of this method discussed by [6]. We also propose a mixed strategy that combines the original system and the Gauss-Jordan preconditioner to improve the efficiency and the quality of solutions; see Algorithm 4.
Numerical experiments show that the interval union Gauss-Seidel procedure produces better enclosures than its interval counterparts. We performed tests on 120,000 problems generated at random as described by Table 1. Figures 3, 4 and 5 demonstrate that interval union procedures produce bounds that are up to 25% sharper than those obtained by the interval implementation of the method. Tables 2 and 3 show that there is no disadvantage in computation time when using interval union methods as compared to interval ones.
The potential increase in the number of boxes produced by Algorithms 1 and 2 is one of the main concerns in the use of interval union methods. We propose a gap filling strategy based on the ideas described by [24]; the resulting method is given by Algorithm 3. We show that the maximum number of boxes produced by the complete form of the Gauss-Seidel procedure is reached in only 10% of the instances. We never reach the maximum number of boxes with the partial form. The average number of boxes generated in this experiment is given by Figs. 7 and 8.
We note that the mixed strategy described in Algorithm 4 is faster and more accurate than the interval union Gauss-Seidel procedure with the Gauss-Jordan preconditioner. It also produces better enclosures than those obtained with the method without preconditioner. On the other hand, if the maximum radius of A, b and x is small enough then it is more efficient to turn off the preconditioning, as suggested by Figs. 9 and 10.