Epidemiological Forecasting with Model Reduction of Compartmental Models. Application to the COVID-19 Pandemic

Simple Summary: Using tools from the reduced order modeling of parametric ODEs and PDEs, including a new positivity-preserving greedy reduced basis method, we present a novel forecasting method for predicting the propagation of an epidemic. The method takes a collection of highly detailed compartmental models (with different initial conditions, initial times, epidemiological parameters and numerous compartments) and learns a model with few compartments that best fits the available health data and that is used to provide the forecasts. We illustrate the promising potential of the approach on the spread of the COVID-19 pandemic in the Paris region during the period from March to November 2020, in which two epidemic waves took place.

Abstract: We propose a forecasting method for predicting epidemiological health series on a two-week horizon at regional and interregional resolution. The approach is based on the model order reduction of parametric compartmental models and is designed to accommodate small amounts of sanitary data. The efficiency of the method is shown in the case of the prediction, from the collected data, of the number of infected and removed people (either due to death or recovery) during the two pandemic waves of COVID-19 in France, which took place approximately between February and November 2020. Numerical results illustrate the promising potential of the approach.


Introduction
Providing reliable epidemiological forecasts during an ongoing pandemic is crucial to mitigate the potentially disastrous consequences for global public health and the economy. As the ongoing pandemic of COVID-19 sadly illustrates, this is a daunting task in the case of new diseases due to the incomplete knowledge of the behavior of the disease and the heterogeneities and uncertainties in the health data counts. Despite these difficulties, many forecasting strategies exist, and we can cast them into two main categories: the first type is purely data-based and involves statistical and learning methods such as time series analysis, multivariate linear regression, grey forecasting or neural networks [1][2][3][4][5]; the second approach uses epidemiological models, which are appealing since they provide interpretable insight into the mechanisms of the outbreak. They also provide high flexibility in the level of detail used to describe the evolution of a pandemic, ranging from simple compartmental models that divide the population into a few exclusive categories to highly detailed descriptions involving numerous compartments or even agent-based models (see, e.g., [6][7][8] for general references on mathematical epidemiological models and [9][10][11] for some models focused on COVID-19). One salient drawback of using epidemiological models for forecasting purposes lies in the very high uncertainty in the estimation of the relevant parameters. This is due to the fact that the parameters often cannot be inferred from real observations, and the available data are insufficient or too noisy to provide any reliable estimation. The situation is aggravated by the fact that the number of parameters can quickly become large even in moderately simple compartmental models [10]. As a result, forecasting with these models involves making numerous a priori hypotheses which can sometimes be difficult to justify by data observations.
In this paper, our goal is to forecast the time series of infected, removed and dead patients with compartmental models that involve as few parameters as possible in order to infer these series solely from the data. The available data are only given for hospitalized people; one can nevertheless estimate the total number of infected people through an adjustment factor taken from the literature. Such a factor takes into account the proportion of asymptomatic people and infected people who do not go to hospital. The model with the fewest parameters is probably the susceptible-infected-removed (SIR) model [12], which is based on a partition of the population into the following groups:
• Uninfected people, called susceptible (S);
• Infected and contagious people (I), with more or less marked symptoms;
• People removed (R) from the infectious process, either because they were cured or unfortunately died after being infected.
If N denotes the total population size that we assume to be constant over a certain time interval [0, T], we have N = S(t) + I(t) + R(t) for all t ∈ [0, T], and the evolution from S to I and from I to R is given for all t ∈ [0, T] by

dS/dt (t) = −(β/N) S(t) I(t),
dI/dt (t) = (β/N) S(t) I(t) − γ I(t),
dR/dt (t) = γ I(t).

The SIR model has only two parameters:
• γ > 0 represents the recovery rate. In other words, its inverse γ^{-1} can be interpreted as the length (in days) of the contagious period;
• β > 0 is the transmission rate of the disease. It essentially depends on two factors: the contagiousness of the disease and the contact rate within the population. The larger this second parameter is, the faster the transition from susceptible to infectious will be. As a consequence, the number of hospitalized patients may increase very quickly and may lead to a collapse of the health system [13]. Strong distancing measures such as confinement can effectively act on this parameter [14,15], helping to keep it low.
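As a concrete reference point, the SIR dynamics above can be integrated with a simple explicit Euler scheme. The following is a minimal sketch, not the solver used in the paper; the function name, step size and parameter values are illustrative, and the optional callable coefficients anticipate the time-dependent variant used later.

```python
import numpy as np

def simulate_sir(beta, gamma, S0, I0, R0, T, dt=0.1):
    """Integrate the SIR system with explicit Euler steps.

    beta and gamma may be floats (constant rates) or callables t -> rate,
    which anticipates the time-dependent SIR models used below.
    """
    b = beta if callable(beta) else (lambda t: beta)
    g = gamma if callable(gamma) else (lambda t: gamma)
    N = S0 + I0 + R0                      # total population, conserved
    n_steps = int(round(T / dt))
    S = np.empty(n_steps + 1); I = np.empty(n_steps + 1); R = np.empty(n_steps + 1)
    S[0], I[0], R[0] = S0, I0, R0
    for k in range(n_steps):
        t = k * dt
        new_inf = b(t) * S[k] * I[k] / N * dt   # S -> I transitions over dt
        new_rem = g(t) * I[k] * dt              # I -> R transitions over dt
        S[k + 1] = S[k] - new_inf
        I[k + 1] = I[k] + new_inf - new_rem
        R[k + 1] = R[k] + new_rem
    return S, I, R
```

By construction, the scheme conserves S + I + R = N at every step, mirroring the constant-population assumption.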
Our forecasting strategy is motivated by the following observation: by allowing the parameters β and γ to be time-dependent, we can find optimal coefficients β * (t) and γ * (t) that exactly fit any series of infected and removed patients. In other words, we can perfectly fit any observed health series with an SIR model with time-dependent coefficients.
As we explain below, the high fitting power stems from the fact that the parameters β and γ are searched in L ∞ ([0, T], R + )-the space of essentially bounded measurable functions. For our forecasting purposes, however, this space is too large to give any predictive power, and we need to find a smaller manifold that simultaneously has good fitting and forecasting properties. To this end, we developed a method based on model order reduction. The idea of the method was to find a space with a reduced dimension that can host the dynamics of the current epidemic. This reduced space is learnt from a series of detailed compartmental models based on precise underlying mechanisms of the disease. One major difficulty in these models is the fitting of the correct parameters. In our approach, we do not seek to estimate these parameters; instead, we consider a large range of possible parameter configurations with a uniform sampling that allows us to simulate virtual epidemic scenarios in a longer range than the fitting window [0, T]. We next cast each virtual epidemic from the family of detailed compartmental models into the family of SIR models with time-dependent coefficients, as explained below. This procedure yields time-dependent parameters β and γ for each detailed virtual epidemic. The set of all such β (or γ) is then condensed into a reduced basis with a small dimension. We finally use the available health data on the time window [0, T] to find the functions β and γ from the reduced space that best fit the current epidemic over [0, T]. Since the reduced basis functions are defined over a longer time range [0, T + τ] with τ > 0 (e.g., two weeks), the strategy automatically provides forecasts from T to T + τ. Its accuracy will be related to the pertinence of the mechanistic mathematical models that have been used in the learning process.
We note that an important feature of our approach is that all virtual simulations are considered equally important in the first stage, and the procedure automatically learns which scenarios (or linear combinations of scenarios) best describe the available data. Moreover, the approach even mixes different compartmental models to accommodate these available data. This is in contrast to other existing approaches which introduce a strong a priori belief regarding the quality of a certain particular model. Note also that we can add models related to other illnesses and use the large manifold to fit a possible new epidemic. It is also possible to mix the current approach with other purely statistical or learning strategies by means of expert aggregation. One salient difference with these approaches which is important to emphasize is that our method hinges on highly detailed compartmental models which have clear epidemiological interpretations. Our methodology of collapsing into the time-dependent SIR is a way of "summarizing" the dynamics into a few terms. One may expect that reducing the number of parameters comes at the cost of losing the interpretability of parameters, and this is true in general. Nevertheless, the numerical results of the present work show that a reasonable tradeoff between the "reduction of the number of parameters" and the "interpretability of these few parameters" can be achieved.
The paper is organized as follows. In Section 2, we present the forecasting method in the case of a single region with a constant population. For this, in Section 2.1, we briefly introduce the epidemiological models involved in the procedure, namely the SIR model with time-dependent coefficients and more detailed compartmental models used for the training step. In Section 2.2, after proving that the SIR model with time-dependent coefficients in L ∞ ([0, T]) is able to fit any admissible epidemiological evolution (as explained below), we present the main steps of the forecasting method. The method involves a collapsing step from detailed models to SIR models with time-dependent coefficients and model reduction techniques. We detail these points in Sections 2.3 and 2.4. In Section 3, we explain how the method can easily be extended to a multi-regional context involving population mobility and regional health data observations (provided, of course, that mobility data are available). In Section 3.1, we begin by clarifying that the nature of the mobility data will dictate the kind of multi-regional SIR model to use in this context. In Section 3.2, we outline how to adapt the main steps of the method to the multi-regional case. Finally, in Section 4, we present numerical results for the two pandemic waves of COVID-19 in France in 2020, which took place approximately between February and November 2020. Concluding comments are given in Section 5, followed by Appendices A and B, which present details about the processing of the data noise and the forecasting error.

Methodology for a Single Region
For the sake of clarity, we first consider the case of a single region with a constant population and no population fluxes with other regions. Here, the term region is generic and may be applied to very different geographical scales, ranging from a full country to a department within a country or even smaller partitions of a territory.

Compartmental Models
The final output of our method is a mono-regional SIR model with time-dependent coefficients, as explained below. This SIR model with time-dependent coefficients is built with reduced order modeling techniques involving a large family of finer compartmental models proposed in the literature. Before presenting the method in the next section, we here introduce all the models that we use, along with notations that will serve throughout the paper.

SIR Models with Time-Dependent Parameters
We fit and forecast the series of infected and removed patients (dead and recovered) with SIR models where the coefficients β and γ are time-dependent:

dS/dt (t) = −(β(t)/N) S(t) I(t),
dI/dt (t) = (β(t)/N) S(t) I(t) − γ(t) I(t),
dR/dt (t) = γ(t) I(t), for t ∈ [0, T].

In the following, we use bold-faced letters for past-time quantities. For example, f := { f(t) : 0 ≤ t ≤ T} for any function f ∈ L ∞ ([0, T]). Using this notation, for any given pair of coefficients (β, γ), we denote by (S, I, R) = SIR(β, γ, [0, T]) the solution to the above system on [0, T].

Detailed Compartmental Models
Models involving many compartments offer a detailed description of epidemiological mechanisms at the expense of involving many parameters. In our approach, we use them to generate virtual scenarios. One of the initial motivations behind the present work is to provide forecasts for the COVID-19 pandemic; thus, we have selected the two following models, which are specific to this disease, but note that any other compartmental model [9,10,16] or agent-based simulation could also be used.

• First model, SEI5CHRD: This model is inspired by the one proposed in [10]. It involves 11 different compartments and a set of 19 parameters (see Table 1). The dynamics of the model are illustrated in Figure 1; the equation for the susceptible compartment reads

dS/dt (t) = −(1/N) S(t) [β_p I_p(t) + β_a I_a(t) + β_ps I_ps(t) + β_ms I_ms(t) + β_ss I_ss(t)],

and the remaining equations follow the compartment transitions depicted in Figure 1 and detailed in [10]. We denote by u = SEI5CHRD(u_0, µ, [0, T]) the parameter-to-solution map, where u = (S, E, I_p, I_a, I_ps, I_ms, I_ss, C, H, R, D). The different parameters involved in the model are described in Table 2 and detailed in the appendix of [10].
• Second model, SE2IUR: This model is a variant of the one proposed in [9]. It involves six different compartments (see Table 3) and a set of six parameters (see Table 4). The dynamics of the model are illustrated in Figure 2 and follow the transitions between the compartments of Table 3. We denote by u = SE2IUR(u_0, β, δ, σ, ν, γ_1, γ_2, [0, T]) the parameter-to-solution map, where u = (S, E_1, E_2, I, U, R).

Table 3. Description of the compartments in model SE2IUR.

Compartment | Description
S | Susceptible
E1 | Exposed (non-infectious)
E2 | Infected and pre-symptomatic (already infectious)
I | Infected and symptomatic
U | Un-noticed
R | Removed (dead or recovered)

Table 4. Description of the parameters involved in model SE2IUR.

Parameter | Description
β | Relative infectiousness of I, U, E2
δ^{-1} | Latency period
σ^{-1} | Duration of the prodromal phase
ν | Proportion of I among I + U
γ1 | If I, daily rate of entering R
γ2 | If U, daily rate of entering R

• Generalization: In the following, we abstract the procedure as follows. For any Detailed_Model with d compartments involving a vector µ ∈ R^p of p parameters, we denote by u = Detailed_Model(µ, [0, T]) the parameter-to-solution map, where T is some given simulation time that can be as large as desired because this is a virtual scenario. Note that, because the initial condition of the illness at time 0 is not known, we include the initial condition u_0 in the parameter set.
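The SE2IUR dynamics can be written down from Tables 3 and 4. Since the system of equations did not survive in the text above, the following is a reconstruction sketched from the compartment and parameter descriptions, not a verbatim transcription of the model in [9]; in particular, the force-of-infection term β(E2 + I + U)/N is an assumption consistent with β being the relative infectiousness of I, U and E2.

```python
import numpy as np

def simulate_se2iur(u0, beta, delta, sigma, nu, gamma1, gamma2, T, dt=0.1):
    """Explicit-Euler sketch of the SE2IUR model.

    u0 = (S, E1, E2, I, U, R); parameter meanings follow Table 4.
    The force of infection beta*(E2 + I + U)/N is an assumed form.
    """
    u = np.array(u0, dtype=float)
    N = u.sum()                             # constant total population
    traj = [u.copy()]
    for _ in range(int(round(T / dt))):
        S, E1, E2, I, U, R = u
        foi = beta * (E2 + I + U) / N       # force of infection (assumption)
        du = np.array([
            -foi * S,                            # dS/dt
            foi * S - delta * E1,                # dE1/dt: latency, rate delta
            delta * E1 - sigma * E2,             # dE2/dt: prodromal phase, rate sigma
            nu * sigma * E2 - gamma1 * I,        # dI/dt: fraction nu becomes symptomatic
            (1 - nu) * sigma * E2 - gamma2 * U,  # dU/dt: the rest remains un-noticed
            gamma1 * I + gamma2 * U,             # dR/dt: removal
        ])
        u = u + dt * du
        traj.append(u.copy())
    return np.array(traj)                   # shape: (steps + 1, 6)
```

The row sums of the returned trajectory stay equal to N, since every outflow term appears as an inflow elsewhere.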

Forecasting Based on Model Reduction of Detailed Models
We assume that we are given health data in a time window [0, T], where T > 0 is assumed to be the present time. The observed data are the series of infected people, denoted I obs , and removed people, denoted R obs . They are usually given at a national or a regional scale and on a daily basis. For our discussion, it is useful to work with time-continuous functions, and t → I obs (t) denotes the piecewise constant approximation in [0, T] built from the given data (and similarly for R obs (t)). Our goal is to give short-term forecasts of the series over a time window of length τ > 0, about two weeks. We denote by I(t) and R(t) the approximations to the series I obs (t) and R obs (t) at any time t ∈ [0, T].
As mentioned above, we propose to fit the data and provide forecasts with SIR models with time-dependent parameters β and γ. The main motivation for using such a simple family is that it possesses optimal fitting and forecasting properties for our purposes, as explained above. We define the cost function

J(β, γ | I obs , R obs , [0, T]) := ∫_0^T ( |I(t) − I obs (t)|^2 + |R(t) − R obs (t)|^2 ) dt,

such that (S, I, R) = SIR(β, γ, [0, T]), and the fitting problem can be expressed at the continuous level as the optimal control problem of finding

(β * obs , γ * obs ) ∈ argmin over (β, γ) ∈ L ∞ ([0, T], R + )^2 of J(β, γ | I obs , R obs , [0, T]). (2)

The following result ensures the existence of a unique minimizer under very mild constraints.

Proposition 1. Let N ∈ N * and T > 0. For any real-valued functions S obs , I obs , R obs of class C 1 defined on [0, T] satisfying
(i) S obs (t) + I obs (t) + R obs (t) = N for every t ∈ [0, T],
(ii) S obs is nonincreasing on [0, T],
(iii) R obs is nondecreasing on [0, T],
there exists a unique minimizer (β * obs , γ * obs ) to Equation (2).
Note that one can equivalently set

β * obs (t) = −N S obs '(t) / ( S obs (t) I obs (t) ),   γ * obs (t) = R obs '(t) / I obs (t),   t ∈ [0, T]. (3)

This simple observation means that there exists a time-dependent SIR model which can perfectly fit the data of any (epidemiological) evolution that satisfies properties (i), (ii), and (iii). In particular, we can perfectly fit the series of sick people with a time-dependent SIR model (with a smoothing of the local oscillations due to noise). Since the health data are usually given on a daily basis, we can approximate β * obs , γ * obs by approximating the derivatives in Equation (3) by classical finite differences.
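The finite-difference approximation of formula (3) is direct to implement. The sketch below assumes daily data and forward differences; the function name is illustrative.

```python
import numpy as np

def fit_time_dependent_sir(I_obs, R_obs, N, dt=1.0):
    """Recover beta*(t), gamma*(t) from observed series via formula (3),
    approximating the derivatives by forward finite differences.

    Returns arrays of length len(I_obs) - 1 (left time nodes).
    """
    I_obs = np.asarray(I_obs, dtype=float)
    R_obs = np.asarray(R_obs, dtype=float)
    S_obs = N - I_obs - R_obs              # property (i): S + I + R = N
    dS = np.diff(S_obs) / dt
    dR = np.diff(R_obs) / dt
    # beta* = -N S' / (S I),  gamma* = R' / I
    beta = -N * dS / (S_obs[:-1] * I_obs[:-1])
    gamma = dR / I_obs[:-1]
    return beta, gamma
```

On data generated by a forward-Euler SIR step with the same time step, this recovery is exact, which gives a quick sanity check of an implementation.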
The fact that we can build β * obs and γ * obs such that J(β * obs , γ * obs ) = J * = 0 implies that the family of time-dependent SIR models is rich enough not only to fit the evolution of any epidemiological series but also to deliver perfect predictions of the health data. It is, however, important to note that, since β * obs and γ * obs are derived exclusively from the data and depend on time, we lose the direct interpretations, in terms of the length of the contagious period or the transmission rate, that these coefficients have when they are constant. The great approximation power also comes at the cost of defining the parameters β and γ in L ∞ ([0, T]), which is a space that is too large to support any feasible prediction strategy.
In order to pin down a smaller manifold where these parameters may vary without sacrificing much in terms of the fitting and forecasting power, our strategy is the following:
1. Learning phase: The fundamental hypothesis of our approach is that the specialists of epidemiology have understood the mechanisms of infection transmission sufficiently well. The second hypothesis is that the main weakness of these accurate models lies in the proper determination of the parameters they contain. We thus propose to generate a large number of virtual epidemics following these mechanistic models, with parameters chosen in the vicinity of the suggested parameter values, including different initial conditions.
(a) Generate virtual scenarios using detailed models with constant coefficients:
• Define the notion of a Detailed_Model which is most appropriate for the epidemiological study. Several models could be considered. For our numerical application, the detailed models are defined in Section 2.1.

• Define an interval range P ⊂ R^p where the parameters µ of Detailed_Model vary. We call the solution manifold U the set of virtual dynamics over [0, T + τ], namely

U := { u(µ) = Detailed_Model(µ, [0, T + τ]) : µ ∈ P }.

• Draw a finite training set P tr = {µ 1 , . . . , µ K } ⊆ P of K ≫ 1 parameter instances and compute u(µ) = Detailed_Model(µ, [0, T + τ]) for µ ∈ P tr . Each u(µ) is a virtual epidemiological scenario. An important detail for our prediction purposes is that the simulations are done in [0, T + τ]; that is, we simulate not only in the fitting time interval but also in the prediction time interval. We call U tr = {u(µ) : µ ∈ P tr } the training set of all virtual scenarios.
(b) Collapse the detailed models into SIR dynamics: For each µ ∈ P tr , we cast the detailed scenario u(µ) into an SIR model with time-dependent coefficients, which yields two families of time-dependent functions B tr = {β(µ) : µ ∈ P tr } and G tr = {γ(µ) : µ ∈ P tr } defined over [0, T + τ]. This collapsing step is detailed in Section 2.3.
(c) Compute reduced models: We apply model reduction techniques using B tr and G tr as training sets in order to build two bases, B n and G n , which are defined over [0, T + τ]. The space B n is built so that it approximates all functions β(µ) ∈ B tr well, and similarly all γ(µ) ∈ G tr can be well approximated by elements of G n . In Section 2.4, we outline the methods we have explored in our numerical tests.

2. Fitting on the reduced spaces: We next solve the fitting problem (2) in the interval [0, T] by searching β in B n and γ in G n instead of in L ∞ ([0, T]); that is,

(β * n , γ * n ) ∈ argmin over (β, γ) ∈ B n × G n of J(β, γ | I obs , R obs , [0, T]), (5)

and we set J * (B n ,G n ) := J(β * n , γ * n | I obs , R obs , [0, T]). Note that the respective dimensions of B n and G n can be different; for simplicity, we consider them to be equal in the following. Obviously, since B n and G n ⊂ L ∞ ([0, T]), we obtain J * ≤ J * (B n ,G n ) , but we numerically observe that the function n → J * (B n ,G n ) decreases very rapidly as n increases, which indirectly confirms that the manifold generated by the two above models accommodates the COVID-19 epidemic well. The solution of problem (5) gives us the coefficients of β * n and γ * n in the reduced bases that achieve the minimum (5).

3. Forecast: For a given dimension n of the reduced spaces, we propagate the associated SIR model in [0, T + τ]:

(S * n , I * n , R * n ) = SIR(β * n , γ * n , [0, T + τ]).

The values I * n (t) and R * n (t) for t ∈ [0, T[ are, by construction, close to the observed data I obs , R obs (up to some numerical optimization error). The values I * n (t) and R * n (t) for t ∈ [T, T + τ] are then used for prediction.

4. Forecast combination/aggregation of experts (optional step): By varying the dimension n and using different model reduction approaches, we can easily produce a collection of different forecasts, and thus the question of how to select the best predictive model arises. Alternatively, we can also resort to forecast combination techniques [17]: denoting (I 1 , R 1 ), . . . , (I P , R P ) as the different forecasts, the idea is to search for an appropriate linear combination I = ∑_{p=1}^P w p I p and to perform a similar operation for R. Note that these combinations do not need to involve forecasts from our methodology only; other approaches such as time series forecasts could also be included. One simple forecast combination is the average, in which all alternative forecasts are given the same weight w p = 1/P, p = 1, . . . , P. More elaborate approaches consist in estimating the weights that minimize a loss function involving the forecast error.
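The weight-estimation variant of step 4 can be sketched as a plain least-squares fit of the weights against the observed window. This is an illustrative sketch, not the aggregation used in [17]; in practice one might add constraints such as w ≥ 0 or ∑ w_p = 1, which are omitted here.

```python
import numpy as np

def combine_forecasts(forecasts, observed):
    """Estimate aggregation weights w minimizing the squared error of
    sum_p w_p * forecast_p against the observed series on the fitting
    window (unconstrained least squares)."""
    A = np.column_stack(forecasts)     # one column per individual forecast
    w, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return w

def aggregate(forecasts, w):
    """Apply the estimated weights to (possibly longer) forecast series."""
    return np.column_stack(forecasts) @ w
```

The same weights, estimated on [0, T], are then applied to the forecast values on [T, T + τ].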
Before going into detail regarding some of the steps, three points should be highlighted:
1. To bring out the essential mechanisms, we have idealized some elements in the above discussion by omitting certain unavoidable discretization aspects. To start with, the ODE solutions cannot be computed exactly but only up to the accuracy of a numerical integration scheme. In addition, the optimal control problems (2) and (5) are non-convex; as a result, in practice, we can only find a local minimum. Note, however, that modern solvers find solutions which are very satisfactory for all practical purposes. In addition, note that solving the control problem in a reduced space, as in (5), can be interpreted as introducing a regularizing effect with respect to the control problem (2) in the full L ∞ ([0, T]) space. It is to be expected that the search for global minimizers is facilitated in the reduced landscape.

2. routine-IR and routine-βγ: A variant of the fitting problem (5) studied in our numerical experiments is to replace the cost function J(β, γ | I obs , R obs , [0, T]) by the cost function

J(β, γ | β * obs , γ * obs , [0, T]) := ∫_0^T ( |β(t) − β * obs (t)|^2 + |γ(t) − γ * obs (t)|^2 ) dt.

In other words, we use the variables β * obs and γ * obs from (3) as observed data instead of working with the observed health series I obs , R obs . In Section 4, we refer to the standard fitting method as routine-IR and to this variant as routine-βγ.

3. The fitting procedure works both on the components of the reduced basis and on the initial time of the epidemic to minimize the loss function; however, for simplicity, this last optimization is not reported here.
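For a fixed reduced basis, the routine-βγ variant reduces to a linear least-squares problem, since the cost is quadratic in the expansion coefficients. The sketch below illustrates this; it ignores the positivity constraints discussed later, and the function name and array layout are illustrative.

```python
import numpy as np

def fit_in_reduced_basis(basis, target, n_fit):
    """routine-beta-gamma sketch: find coefficients c such that
    sum_j c_j * basis[:, j] best matches the target (e.g. beta*_obs) on
    the fitting window in the least-squares sense.

    basis has shape (Q, n), with Q time points covering [0, T + tau];
    only the first n_fit points (the window [0, T]) enter the fit, so
    evaluating the expansion on all Q points extrapolates to [T, T + tau].
    """
    c, *_ = np.linalg.lstsq(basis[:n_fit], target[:n_fit], rcond=None)
    return c, basis @ c     # coefficients and the full [0, T + tau] curve
```

The returned reconstruction beyond index n_fit is precisely the forecast part of β (or γ).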

Details on Step 1-(b): Collapsing the Detailed Models into SIR Dynamics
Let u(µ) = Detailed_Model(µ, [0, T + τ]) be the solution in [0, T + τ] to a detailed model for a given vector of parameters µ. Here, the number of compartments d is possibly large (d = 11 in the case of the SEI5CHRD model and d = 6 in the case of the SE2IUR model). The goal of this section is to explain how to collapse the detailed dynamics u(µ) into SIR dynamics with time-dependent parameters. The procedure can be understood as a projection of a detailed dynamics into the family of dynamics given by SIR models with time-dependent parameters.
For the SEI5CHRD model, we collapse the variables by summing the compartments that correspond to susceptible, infected, and removed individuals, respectively, which defines the collapsed variables S col , I col and R col ; we proceed similarly for the SE2IUR model. Note that S col , I col and R col depend on µ, but we omit this dependence to simplify the notation.
Once the collapsed variables are obtained, we identify the time-dependent parameters β and γ of the SIR model by solving the fitting problem

(β(µ), γ(µ)) ∈ argmin over (β, γ) ∈ L ∞ ([0, T + τ], R + )^2 of J(β, γ | I col , R col , [0, T + τ]). (7)

Note that problem (7) has the same structure as problem (2), the difference being that the collapsed variables I col , R col in (7) play the role of the health data I obs , R obs in (2). Therefore, it follows from Proposition 1 that problem (7) has a very simple solution: it suffices to apply formula (3). Note here that the exact derivatives of S col , I col , and R col can be obtained from the Detailed_Model.
Since the solution (β(µ), γ(µ)) to (7) depends on the parameter µ of the detailed model, repeating the above procedure for every detailed scenario u(µ), µ ∈ P tr , yields the two families of time-dependent functions B tr = {β(µ) : µ ∈ P tr } and G tr = {γ(µ) : µ ∈ P tr } defined on the interval [0, T + τ], as introduced in step 1-(b) of Section 2.2.
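Combining the compartment aggregation with formula (3) gives a short collapsing routine. In the sketch below, which compartments feed S col, I col and R col (the index lists) is left to the caller as a modeling choice; the specific groupings used in the paper for SEI5CHRD and SE2IUR are not reproduced here, and forward differences stand in for the exact derivatives available from the detailed model.

```python
import numpy as np

def collapse_to_sir(traj, s_idx, i_idx, r_idx, dt=1.0):
    """Collapse a detailed trajectory (shape (Q, d)) into SIR variables and
    recover time-dependent (beta, gamma) via formula (3).

    s_idx, i_idx, r_idx: lists of compartment indices aggregated into
    S_col, I_col, R_col respectively (a modeling choice).
    """
    S = traj[:, s_idx].sum(axis=1)
    I = traj[:, i_idx].sum(axis=1)
    R = traj[:, r_idx].sum(axis=1)
    N = S[0] + I[0] + R[0]
    # formula (3) with forward finite differences
    beta = -N * (np.diff(S) / dt) / (S[:-1] * I[:-1])
    gamma = (np.diff(R) / dt) / I[:-1]
    return beta, gamma
```

Applied to every u(µ), µ ∈ P tr, this produces the training families B tr and G tr.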

Details of Model Order Reduction
Model order reduction is a family of methods aiming at the approximation of a set of solutions of parametrized PDEs or ODEs (or related quantities) with linear spaces, which are called reduced models or reduced spaces. In our case, the sets to approximate are

B = {β(µ) : µ ∈ P} and G = {γ(µ) : µ ∈ P},

where each µ is the vector of parameters of the detailed model, taking values in P, and β(µ) and γ(µ) are the associated time-dependent coefficients of the collapsed SIR evolution. In the following, we view B and G as subsets of L 2 ([0, T]), and we denote by ‖·‖ and ⟨·, ·⟩ its norm and inner product. Continuing the discussion with B (the same holds for G), if we measure performance in terms of the worst error in the set B, the best possible performance that reduced models of dimension n can achieve is given by the Kolmogorov n-width

d n (B) := inf over subspaces Y of L 2 ([0, T]) with dim Y = n of sup over β ∈ B of ‖β − P Y β‖,

where P Y is the orthogonal projection onto the subspace Y. In the case of measuring errors in an average sense, the benchmark is given by

δ n (B) := inf over subspaces Y of L 2 ([0, T]) with dim Y = n of ( ∫_P ‖β(µ) − P Y β(µ)‖^2 dν(µ) )^{1/2},

where ν is some given measure on P.
In practice, building spaces that meet these benchmarks is generally not possible. However, it is possible to build sequences of spaces for which the error decay is comparable to that of d n (B) and δ n (B). As a result, when the Kolmogorov width decays quickly, the constructed reduced spaces deliver a very good approximation of the set B with few modes (see [18][19][20][21]). We next present the reduced models used in our numerical experiments. Other methods could, of course, be considered, and we refer readers to [22][23][24][25] for general references on model reduction. We continue the discussion in a fully discrete setting in order to simplify the presentation and keep it as close to the practical implementation as possible. All the claims below could be written in a fully continuous sense at the expense of introducing additional mathematical objects, such as certain Hilbert-Schmidt operators, to define the continuous version of the Singular Value Decomposition (SVD).
We build the reduced models using the two discrete training sets of functions B tr = {β i } K i=1 and G tr = {γ i } K i=1 , where K denotes the number of virtual scenarios considered. These sets were generated in step 1-(b) of our general pipeline (see Section 2.2).
We consider a discretization of the time interval [0, T + τ] into a set of Q ∈ N * points {t 1 = 0, · · · , t P = T, · · · , t Q = T + τ}, where P < Q. Thus, we can represent each function β i as a vector of Q values and assemble all the functions of the family B tr into a matrix M ∈ R Q×K whose i-th column contains the values of β i . The same remark applies to the family G tr .
1. Singular Value Decomposition (SVD): We diagonalize the correlation matrix M^T M = W Λ W^T, where W ∈ R K×K is an orthogonal matrix and Λ ∈ R K×K is a diagonal matrix with non-negative entries, which we denote as λ i and present in decreasing order. The ℓ 2 (R Q )-orthogonal basis functions {b 1 , . . . , b K } are then given by the linear combinations

b j = λ j ^{-1/2} ∑_{i=1}^K W i,j β i , 1 ≤ j ≤ K,

and the average approximation error of the reduced space B n = span{b 1 , . . . , b n } is given by the sum of the tail of the eigenvalues, ∑_{i>n} λ i . Therefore, the SVD method is particularly efficient if there is a fast decay of the eigenvalues, meaning that the set B tr = {β i } K i=1 can be approximated by only a few modes. However, note that, by construction, this method does not ensure positivity, in the sense that P B n β i (t) may become negative for some t ∈ [0, T] although the original function satisfies β i (t) ≥ 0 for all t ∈ [0, T]. This is due to the fact that the vectors b i are not necessarily nonnegative. As we will see later, in our study, ensuring positivity, especially for extrapolation (i.e., forecasting), is particularly important, which motivates the next methods.
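Numerically, the same basis can be obtained from an SVD of M itself rather than from the eigendecomposition of M^T M, since the left singular vectors coincide with the modes b_j and the squared singular values with the λ_i. A minimal sketch (function names are illustrative):

```python
import numpy as np

def pod_basis(M, n):
    """SVD/POD reduced basis: M has shape (Q, K), one column per training
    function beta_i sampled on the time grid. Returns an orthonormal basis
    of dimension n and the eigenvalue tail sum_{i>n} lambda_i, which gives
    the training approximation error."""
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    eigenvalues = s ** 2            # eigenvalues of M^T M, in decreasing order
    return U[:, :n], eigenvalues[n:].sum()

def project(basis, f):
    """Orthogonal projection of a sampled function onto the reduced space."""
    return basis @ (basis.T @ f)
```

A fast decay of the returned eigenvalue tail as n grows is exactly the "few modes suffice" situation described above.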

2. Nonnegative Matrix Factorization (NMF, see [26,27]): NMF is a variant of the SVD involving nonnegative modes and expansion coefficients. In this approach, we build a family of non-negative functions {b j } n j=1 and approximate each β i with a linear combination

β i ≈ ∑_{j=1}^n a i,j b j , (8)

where, for every 1 ≤ i ≤ K and 1 ≤ j ≤ n, the coefficients a i,j ≥ 0 and the basis functions b j ≥ 0. In other words, we solve the constrained optimization problem

min over a i,j ≥ 0, b j ≥ 0 of ∑_{i=1}^K ‖ β i − ∑_{j=1}^n a i,j b j ‖^2 .

We refer readers to [27] for further details on the NMF and its numerical aspects.

3. The Enlarged Nonnegative Greedy (ENG) algorithm with projection on an extended cone of positive functions: We now present our novel model reduction method, which is of interest in itself as it allows reduced models that preserve positivity, and even other types of bounds, to be built. The method stems from the observation that NMF approximates functions in the cone of positive combinations of span{b j ≥ 0} n j=1 since it imposes a i,j ≥ 0 in the linear combination (8). However, the positivity of a linear combination is not equivalent to the positivity of its coefficients: there are linear combinations involving small a i,j < 0 for some j which still deliver a nonnegative function ∑ n j=1 a i,j b j . We can thus widen the cone of linear combinations yielding positive values by carefully including these negative coefficients. One possible strategy for this is proposed in Algorithm 1, which describes a routine that we call Enlarge_Cone. The routine takes a set of nonnegative functions {b 1 , . . . , b n } as an input and modifies each function b i by iteratively adding negative linear combinations of the other basis functions b j for j ≠ i (see line 11 of the routine). The coefficients are chosen in an optimal way so as to maintain the positivity of the final linear combination while minimizing the L ∞ -norm. The algorithm returns the set of modified functions, which spans an enlarged positive cone. Note that the algorithm requires setting a tolerance parameter ε > 0 for the computation of the σ i j . Once we have run Enlarge_Cone, the approximation of any function β is sought as a combination with nonnegative coefficients of the returned functions. We emphasize that the routine is valid for any set of nonnegative input functions. We can thus apply Enlarge_Cone to the functions {b j ≥ 0} n j=1 from NMF, but also to the functions selected by a greedy algorithm such as the following:
• At step n > 1, given the previously selected functions {b 1 , . . . , b n−1 }, we select

b n = arg max over β ∈ B tr of ‖ β − P span{b 1 , . . . , b n−1 } β ‖.

In our numerical tests, we call the Enlarged Nonnegative Greedy (ENG) method the combination of this greedy selection with our routine Enlarge_Cone.
We remark that, if we work with positive functions that are bounded above by a constant L > 0, we can also ensure that the approximations Ψ, written as linear combinations of basis functions, remain between the bounds 0 and L. To do so, we build, on the one hand, the enlarged cone of positive functions generated by the selected basis {b i } i , as we have just done, and, on the other hand, the basis of functions L − b i , to which we also apply the cone enlargement. We then require the approximation Ψ to be written as a combination with nonnegative components in the first (enlarged) basis and L − Ψ to be written as a combination with nonnegative components in the second one. In this frame, the approximation takes the form of a least-squares problem with 2n linear constraints on the n coefficients, expressing the fact that the coefficients are nonnegative in the two transformed bases.
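The greedy selection loop underlying ENG can be sketched in a few lines. This shows only the selection-and-orthonormalization part; the Enlarge_Cone post-processing of Algorithm 1 and the nonnegativity constraints are not reproduced here, and the function name is illustrative.

```python
import numpy as np

def greedy_basis(candidates, n):
    """Greedy selection sketch: at each step, pick the training function
    worst approximated by the current reduced space (the arg max of the
    projection error), then orthonormalize its residual into the basis."""
    Q, K = candidates.shape
    basis = np.empty((Q, 0))
    selected = []
    for _ in range(n):
        # residuals of all candidates w.r.t. the current space
        resid = candidates - basis @ (basis.T @ candidates)
        j = int(np.argmax(np.linalg.norm(resid, axis=0)))
        v = resid[:, j]
        basis = np.column_stack([basis, v / np.linalg.norm(v)])
        selected.append(j)
    return basis, selected
```

Each iteration costs one projection of the whole training set, and the selected indices record which virtual scenarios drive the basis.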
In addition to the previous basis functions, it is possible to include more general basis functions such as polynomial, radial, or wavelet functions in order to guarantee simple forecasting trends. For instance, one can add affine functions in order to include the possibility of predicting with a simple linear extrapolation in the range of possible forecasts offered by the reduced model. Given the overall exponential behavior of the health data, we have added an exponential function of the form b_0(t) = ξ exp(−ξ′ t) (or g_0(t) = ψ exp(−ψ′ t)) to the reduced basis functions {b_1, . . . , b_n} (or {g_1, . . . , g_n}), where the real-valued nonnegative parameters ξ, ξ′, ψ, ψ′ are obtained through a standard exponential regression from β*_obs (or γ*_obs) associated with the targeted profiles of infectious people, that is, the profiles defined on the time interval [0, T] that should be extrapolated to ]T, τ].
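A standard exponential regression, as invoked above, amounts to ordinary least squares in log space. The sketch below fits b_0(t) = a exp(−c t) to the observed series; the function and variable names are illustrative:

```python
import numpy as np

def fit_exponential_mode(t, beta_obs, floor=1e-12):
    """Fit b0(t) = a * exp(-c * t) to observed values by ordinary least
    squares on log(beta_obs): log b0 = log a - c t is linear in (log a, c)."""
    y = np.log(np.maximum(beta_obs, floor))    # floor guards against log(0)
    A = np.vstack([np.ones_like(t), -t]).T
    (log_a, c), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.exp(log_a), c
```

The fitted pair (a, c) plays the role of (ξ, ξ′) for β (or (ψ, ψ′) for γ).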
In other words, the final reduced models that we use for forecasting include these exponential modes. Indeed, the exponential functions give easy access to the overall behavior of the parameters β and γ, while the rest of the basis functions, generated from the training sets, capture the higher-order corrections and thereby improve the extrapolation. Remark 1. Reduced models on I = {I(µ) : µ ∈ P} and R = {R(µ) : µ ∈ P}. Instead of applying model reduction to the sets B and G, as we do in our approach, we could apply the above techniques directly to the sets of solutions I and R of the SIR models with time-dependent coefficients in B and G. In this case, however, the resulting approximation would not follow SIR dynamics.

Methodology for Multiple Regions Including Population Mobility Data
The forecasting method of Section 2.2 for a single region can be extended to the treatment of multiple regions involving population mobility. The prediction scheme is based on a multi-regional SIR model with time-dependent coefficients. Compared to other, more detailed models, the main advantage of our approach is that it drastically reduces the number of parameters to be estimated. Indeed, detailed multi-regional models, such as multi-regional extensions of the above SEI5CHRD and SE2IUR models from Section 2.3, require a number of parameters that grows quickly with the number P of regions involved. Their calibration thus requires large amounts of data which, in addition, may be unknown, very uncertain, or unavailable. In a forthcoming paper, we will apply the fully multi-regional setting to the post-lockdown period.
The structure of this section is the same as above for the case of a single region. In Section 3.1, we begin by introducing the multi-regional SIR model with time-dependent coefficients and associated detailed models. As with any multi-regional model, mobility data are required as input data, and the nature and level of detail of the available data motivates certain choices regarding the modeling of the multi-regional SIR (as well as the other detailed models). We then present in Section 3.2 the general pipeline, in which we emphasize the high modularity of the approach.

Multi-Regional Compartmental Models
In the spirit of fluid flow modeling, there are essentially two ways of describing mobility between regions:
• In an Eulerian description, we take the regions as fixed references for which we record incoming and outgoing travels;
• In a Lagrangian description, we follow the motion of people living in a given region and record their travels across the territory. We can expect this modeling to be more informative regarding the geographical spread of the disease, but it comes at the cost of additional detail regarding the home region of the population.
Note that both descriptions hold at any coarse or fine geographical level, in the sense that what we call the regions could be taken to be full countries, departments within a country, or very small geographical partitions of a territory. We next describe the multi-regional SIR models with the Eulerian and Lagrangian descriptions of population fluxes, which form the output of our methodology.

Multi-Regional SIR Models with Time-Dependent Parameters
Eulerian description of population flux: Assume that we have P regions and that the number of people in region i is N_i for i = 1, . . . , P. Due to mobility, the population in each region varies, so N_i depends on t. However, the total population is assumed to be constant and equal to N; that is, ∑_{i=1}^P N_i(t) = N for all t ≥ 0. For any t ≥ 0, let λ_i→j(t) ∈ [0, 1] be the probability that people from region i travel to region j at time t. In other words, λ_i→j(t) N_i(t) δt is the number of people from region i that have travelled to region j between times t and t + δt. Since, for any δt ≥ 0,

N_i(t + δt) = N_i(t) + δt ( ∑_{j≠i} λ_j→i(t) N_j(t) − ∑_{j≠i} λ_i→j(t) N_i(t) ),

dividing by δt and taking the limit δt → 0 yields

dN_i/dt = ∑_{j≠i} λ_j→i N_j − ∑_{j≠i} λ_i→j N_i,

and summing over i gives d/dt ∑_i N_i = 0, which is consistent with our assumption that the total population is constant.
The time evolution of the N_i is thus known if we are given the λ_i→j(t) from Eulerian mobility data. In addition to these mobility data, we also have data on the evolution of infected and removed people, and our goal is to fit a multi-regional SIR model in accordance with these data. We thus propose the following model.
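The balance of incoming and outgoing travellers, and the resulting conservation of the total population, can be checked with a short numerical sketch (names are illustrative; `lam[i, j]` stands for λ_i→j):

```python
import numpy as np

def population_flux(N, lam):
    """Eulerian flux dN_i/dt = sum_j lam[j, i] * N_j - N_i * sum_j lam[i, j]:
    incoming travellers minus outgoing travellers for each region.  The
    self-terms lam[i, i] cancel, and summing over i gives exactly zero,
    so the total population is conserved."""
    return lam.T @ N - N * lam.sum(axis=1)
```

Feeding any nonnegative mobility matrix into `population_flux` returns a vector of population rates that sums to zero, which is the discrete counterpart of d/dt ∑_i N_i = 0.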
Denoting by S_i, I_i and R_i the number of susceptible, infectious and removed people in region i at time t, we first have the relation S_i(t) + I_i(t) + R_i(t) = N_i(t). To model the evolution between compartments, one possibility is the SIR model of Equation (11). The parameters β_i, γ_i, N_i depend on t, but we have omitted this dependence for ease of reading. Introducing the compartmental densities, the system can be rewritten equivalently in density form. Before going further, some points should be highlighted:
• The model is consistent in the sense that it satisfies (9), and when P = 1, we recover the traditional SIR model;
• Under lockdown measures, λ_i→j ≈ δ_i,j and the population N_i(t) remains practically constant. As a result, the evolution of each region is decoupled from the others, and each region can be addressed with a mono-regional approach;
• The use of β_j in Equation (11) is debatable. When people from region j arrive in region i, it may be more reasonable to assume that the contact rate is β_i;
• The use of λ_j→i in Equation (11) is also very debatable. The probability λ_j→i was originally defined to account for the mobility of people from region j to region i without specifying the compartment. However, in Equation (11), we need the probability of mobility of infectious people from region j to region i, which we denote by µ_j→i in the following. It seems reasonable to think that µ_j→i may be smaller than λ_j→i because, as soon as people become symptomatic and suspect an illness, they will probably not move. Two possible options are as follows:
  - We could try to estimate µ_j→i. If symptoms arise, for example, 2 days after infection and people recover in 15 days on average, then we could set µ_j→i = (2/15) λ_j→i.
  - As the above seems quite empirical, another option is to use λ_j→i and absorb the uncertainty in the values of the β_j that are fitted.
• We choose not to add mobility in the R compartment, as this does not modify the dynamics of the epidemic spread; only adjustments in the population sizes are needed.
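The points above can be made concrete with a hedged sketch of the right-hand side. Since Equation (11) is not reproduced in this excerpt, the mixing term below, in which the susceptibles of region i meet infectious travellers weighted by β_j and λ_j→i, is one plausible reading of the model (the µ_j→i variant would simply replace λ by µ); all names are illustrative:

```python
import numpy as np

def eulerian_sir_rhs(s, i_, lam, beta, gamma):
    """One plausible Eulerian multi-regional SIR right-hand side in
    densities: new infections in region i come from contacts with the
    infectious of every region j, weighted by beta_j and the mobility
    probability lam[j, i]; removal is the usual gamma_i * i_i term.
    The sum ds + di + dr vanishes componentwise."""
    force = lam.T @ (beta * i_)        # (sum_j beta_j lam[j, i] i_j)_i
    ds = -s * force
    di = s * force - gamma * i_
    dr = gamma * i_
    return ds, di, dr
```

With P = 1 and lam = [[1.0]], the sketch reduces to the classical SIR equations s' = −β s i, i' = β s i − γ i, r' = γ i, matching the consistency property stated in the first bullet.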
Lagrangian description of population flux: We call the above description Eulerian because the regions serve as a fixed frame of reference. Another possibility is to follow the trajectories of the inhabitants of each region, in the same spirit as when we follow the trajectories of fluid particles.
Let S_i, I_i, and R_i now be the number of susceptible, infectious and removed people who are resident in region i, i = 1, . . . , P. It is reasonable to assume that S_i(t) + I_i(t) + R_i(t) is constant in time. However, not all the residents of region i may be in that region at time t. Let λ^(i)_j→k(t) be the probability that people resident in region i travel from region j to region k at time t. With this notation, we can write the evolution of S_i, I_i, R_i as in Equation (12). We emphasize that, to implement this model, the Lagrangian mobility data λ^(i)_j→k are required for all (i, j, k) ∈ {1, . . . , P}^3.
Notation: In the following, we gather the compartmental variables into vectors S, I and R, and similarly the time-dependent coefficients into β and γ. For any β and γ ∈ (L∞([0, T]))^P, we denote by (S, I, R) = Multiregional_SIR(β, γ, [0, T]) the output of any of the above multi-regional SIR models. For simplicity, in what follows, we omit the initial condition in the notation.

Detailed Multi-Regional Models with Constant Coefficients
In the spirit of the multi-regional SIR, one can formulate detailed multi-regional versions of more detailed models such as those introduced in Section 2.1. We omit the details for the sake of brevity.

Forecasting for Multiple Regions with Population Mobility
Similar to the mono-regional case, we assume that we are given health data in [0, T] in all regions. The observed data in region i are the series of infected people, denoted as I obs i , and recovered people, denoted as R obs i . They are usually given at a national or a regional scale and on a daily basis.
We propose to fit the data and provide forecasts with SIR models with time-dependent parameters β_i and γ_i for each region i. As in the mono-regional case, we can prove that such a simple family possesses optimal fitting properties for our purposes. In the present case, the cost function measures the misfit between (I, R) and the observed series (I_obs, R_obs), where (S, I, R) = Multiregional_SIR(β, γ, [0, T]), and the fitting problem is the optimal control problem of finding its minimizer. The following proposition ensures the existence of a unique minimizer under certain conditions. To prove it, it is useful to remark that any of the above multi-regional SIR models (see (11) and (12)) can be written in the general form

dS/dt = M(Λ(t), S(t), I(t)) β(t), dI/dt = −M(Λ(t), S(t), I(t)) β(t) − diag(I(t)) γ(t), dR/dt = diag(I(t)) γ(t), (13)

where, by a slight change of notation, the vectors S, I and R are the densities of population in the case of the Eulerian approach (see Equation (11)) and the classical population numbers in the case of the Lagrangian approach (see Equation (12)). Here, diag(I(t)) is the P × P diagonal matrix with diagonal entries given by the vector I(t), and M(Λ(t), S(t), I(t)) is a P × P matrix that depends on the vectors of susceptible and infectious people S(t), I(t) and on the mobility matrix Λ. In the case of the Eulerian description, Λ(t) = (λ_i→j(t))_{1≤i,j≤P}, and in the case of the Lagrangian approach, Λ(t) is the P × P × P tensor Λ(t) = (λ^(i)_j→k(t))_{1≤i,j,k≤P}. For example, in the case of the Eulerian model (11), the matrix M reads

M(Λ(t), S(t), I(t)) = −diag(S(t)) Λ^T diag(I(t)) = −(S_i λ_j→i I_j)_{1≤i,j≤P}. (14)

Proof. Since we assume that M(Λ(t), S(t), I(t)) is invertible for every t ∈ [0, T], we can set β(t) = M(Λ(t), S(t), I(t))^−1 (dS/dt)(t) and γ(t) = diag(I(t))^−1 (dR/dt)(t).

Before continuing, let us comment on the invertibility of M(Λ(t), S(t), I(t)), which is necessary in Proposition 2. A sufficient condition is that the matrix be diagonally dominant, either row-wise or column-wise. This yields certain conditions on the mobility matrix Λ(t) with respect to the values of S(t) and I(t). For example, if M is defined as in Equation (14), the matrix is diagonally dominant in each row if, for every 1 ≤ i ≤ P, the diagonal entry dominates the off-diagonal ones, that is, λ_i→i I_i > ∑_{j≠i} λ_j→i I_j. Similarly, if, for every 1 ≤ j ≤ P, S_j λ_j→j I_j > ∑_{i≠j} S_i λ_j→i I_j, then the matrix is diagonally dominant in each column, which also guarantees invertibility. Note that either condition is satisfied in situations with little or no mobility, where λ_i→j ≈ δ_i,j. Now that we have precisely defined the set-up for the multi-regional case, we can follow the same steps as in Section 2.2 to derive forecasts involving model reduction for the time-dependent variables β and γ.
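The sufficient invertibility condition can be checked numerically in a few lines. In this sketch, `M` is assumed to be the assembled P × P matrix of Equation (14) at a given time:

```python
import numpy as np

def is_diag_dominant(M, by="row"):
    """Strict diagonal dominance test, row-wise or column-wise; either
    property is sufficient for the invertibility of M."""
    A = np.abs(M if by == "row" else M.T)
    d = np.diag(A)
    off = A.sum(axis=1) - d              # sum of off-diagonal magnitudes
    return bool(np.all(d > off))
```

In the low-mobility regime λ_i→j ≈ δ_i,j, the off-diagonal entries of M are small and the test passes, in line with the remark above.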

Numerical Results
In this section, we apply our forecasting method to the COVID-19 pandemic, which started in France approximately in February 2020. Particular emphasis is placed on the first pandemic wave, for which we consider the period from 19 March to 20 May 2020. Due to the lockdown imposed between 17 March and 11 May, inter-regional population mobility was drastically reduced in that period; studies using anonymized Facebook data have estimated the reduction at 80% (see [28]). As a result, it is reasonable to treat each region independently from the rest, and we apply the mono-regional setting of Section 2. Here, we focus on the case of the Paris region, and we report the different forecasting errors obtained using the methods described in Section 2. Some forecasts are also shown for the second wave in the Paris region between 24 September and 25 November.
The numerical results are presented as follows. Section 4.1 explains the sources of health data. Sections 4.2.1 and 4.2.2 explore the results in detail and present a study of the forecasting power of the methods in a two-week horizon. Section 4.2.3 displays forecasts for the second wave. Section 4.2.4 aims to illustrate the robustness of the forecasting over longer periods of time. A discussion of the fitting errors of the methods is given in Appendix A. Additional results highlighting the accuracy of the forecasts are shown in Appendix B.

Data
We use public data from Santé Publique France (https://www.data.gouv.fr/en/datasets/donnees-hospitalieres-relatives-a-lepidemie-de-covid-19/) to obtain the numbers I_obs(t) of infected and R_obs(t) of removed people. As shown in Figure 3, the raw data present oscillations at the scale of a week, which are due to administrative delays in the official reporting of cases by hospitals. For our methodology, we have smoothed the data by applying a 7 day moving average filter. In order to account for the total number of infected people, we also multiply the data by an adjustment factor f_adj = 15, in line with the literature: it is reported in [29] that "of the 634 confirmed cases, a total of 306 and 328 were reported to be symptomatic and asymptomatic", and it is claimed in [10] that the probability of developing severe symptoms for a symptomatic patient is 0 for children, 0.1 for adults and 0.2 for seniors; thus, taking p = 0.13 as an approximate value of these probabilities, one may estimate the adjustment factor as f_adj = (634/328) × (1/p) ≈ 15. Obviously, this factor is uncertain and could be refined in the light of further retrospective studies of the outbreak. However, note that when S ≈ N, which is the case at the start of the epidemic, the impact of this factor on the dynamics is negligible, as can be understood from (3). In addition, since we use the same factor to provide a forecast of hospitalized people, the influence of this choice is minor.
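The preprocessing step can be sketched as follows (a centred moving average followed by the rescaling; the adjustment factor f_adj = 15 is the value discussed above, and the function name is illustrative):

```python
import numpy as np

def preprocess_counts(raw, f_adj=15.0, window=7):
    """Smooth the weekly reporting oscillations with a 7 day moving
    average, then rescale by the adjustment factor to estimate the
    total number of infected people (symptomatic and asymptomatic)."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(np.asarray(raw, dtype=float), kernel, mode="same")
    return f_adj * smoothed
```

Note that `mode="same"` keeps the series length but introduces edge effects on the first and last few days, which is harmless here since fitting windows are much longer than the filter.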

Results
Using the observations I_obs(t) and R_obs(t), we apply a finite difference scheme to Formula (3) to derive β*_obs(t) and γ*_obs(t) for t ∈ [0, T]. Figure 4 shows the values of these parameters as well as the basic reproduction number R*_0,obs = β*_obs/γ*_obs for the first pandemic wave in Paris (Figure 4. β*_obs, γ*_obs and R*_0,obs deduced from the data from t_0 = 19/03/2020 to T = 20/05/2020). We next follow the steps presented in Section 2.2 to obtain the forecasts. In the learning phase (step 1), we use two parametric detailed models of SE2IUR and SEI5CHRD types to generate training sets B_tr and G_tr composed of K = 2618 training functions β(µ) and γ(µ), where the µ are uniformly sampled in the set of parameters P_tr in the vicinity of the parameter values suggested in the literature [9,10]. Based on these training sets, we finish step 1 by building three types of reduced models: SVD, NMF and ENG (see Section 2.4).
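The inversion of the observed series into β*_obs and γ*_obs can be sketched as below. Since Formula (3) is not reproduced in this excerpt, the expressions used here are the standard finite-difference inversion of the time-dependent SIR system and should be read as an assumption; the function name is illustrative:

```python
import numpy as np

def infer_beta_gamma(I, R, N, dt=1.0):
    """Finite-difference inversion of S' = -beta S I / N,
    I' = beta S I / N - gamma I, R' = gamma I, with S = N - I - R:
    gamma*(t) = R'(t) / I(t) and beta*(t) = N (I'(t) + R'(t)) / (S(t) I(t))."""
    I = np.asarray(I, dtype=float)
    R = np.asarray(R, dtype=float)
    dI = np.gradient(I, dt)              # centred differences in the interior
    dR = np.gradient(R, dt)
    S = N - I - R
    gamma = dR / I
    beta = N * (dI + dR) / (S * I)
    return beta, gamma
```

On synthetic SIR data with constant coefficients, the inversion recovers β and γ up to the finite-difference truncation error.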
Given the reduced bases B n and G n , we next search for the optimal β * n ∈ B n and γ * n ∈ G n that best fit the observations (step 2 of our procedure). For this fitting step, we consider two loss functions:
• routine-IR: loss function measuring the misfit between (I, R) and the observed series (I_obs, R_obs);
• routine-βγ: loss function J(β, γ | β*_obs, γ*_obs, [0, T]) from (6).
We study the performance of each of the three reduced models and the impact of the dimension n of the reduced model on the fitting error. These results are presented in Appendix A in order not to overload the main discussion. The main conclusion is that the fitting strategy using SVD-reduced bases provides smaller errors than NMF and ENG, especially as the number of modes n increases. This is illustrated in Figure 5, where we show the fittings obtained with routine-βγ and n = 10 for the first wave (from t_0 = 19/03/2020 to T = 20/05/2020). We observe that SVD is the best at fitting β*_obs and γ*_obs, while ENG produces a smoother fitting of the data. Although the smoother fitting with ENG yields larger fitting errors than SVD, we see in the next section that it yields better forecasts.

Forecasting for the First Pandemic Wave with a 14 Day Horizon
In this section, we illustrate the short-term forecasting behavior of our method. We consider a forecasting window of τ = 14 days, and we examine several different starting days in the course of the first pandemic wave. The results are shown in Figures 6-14. Recall that the forecasting uses the coefficients of the reduced bases obtained by the fitting procedure, but also the optimal initial condition of the forecast, which minimizes the error on the three days prior to the start of the forecast. For each given fitting strategy (routine-IR, routine-βγ) and each given type of reduced model (SVD, NMF, ENG), we have chosen to plot an average forecast computed with predictions using reduced dimensions n ∈ {5, 6, 7, 8, 9, 10}. This choice is a simple type of forecast combination, but of course other, more elaborate aggregation options could be considered. The labels of the plots correspond to the following:
• I_SVD, I_NMF, I_ENG, R_SVD, R_NMF, R_ENG are the average forecasts obtained using routine-βγ;
• I*_SVD, I*_NMF, I*_ENG, R*_SVD, R*_NMF, R*_ENG are the average forecasts obtained using routine-IR.
The main observation from Figures 6-14 is that the ENG-reduced model is the most robust and accurate forecasting method. Fitting ENG with routine-IR or routine-βγ does not seem to lead to large differences in the quality of the forecasts, but routine-βγ seems to provide slightly better results. This claim is further confirmed by the study of the numerical forecasting errors of the different methods shown in Appendix B. Figures 6-14 also show that the SVD-reduced model is very unstable and provides forecasts that blow up. This behavior illustrates the dangers of overfitting, namely that a method with high fitting power may present poor forecasting power due to instabilities. In the case of SVD, the instabilities stem from the fact that the approximations are allowed to take negative values.
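The simple aggregation used here can be sketched as a plain average over the reduced dimensions (an equal-weight forecast combination; names are illustrative):

```python
import numpy as np

def average_forecast(forecasts_by_n):
    """Equal-weight forecast combination: average the trajectories
    predicted with the reduced dimensions n = 5, ..., 10, each given
    as an array over the forecasting window."""
    stacked = np.stack([forecasts_by_n[n] for n in sorted(forecasts_by_n)])
    return stacked.mean(axis=0)
```

More elaborate aggregations (e.g., weights based on recent fitting errors) would plug in at the same place.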
This is the reason why NMF, which incorporates the nonnegativity constraint, performs better than SVD. One of the reasons why ENG outperforms NMF is the enlargement of the cone of nonnegative functions (see Section 2.4). It is important to note that, with ENG, the reduced bases are directly related to well-chosen virtual scenarios, whereas SVD and NMF rely on matrix factorization techniques that provide purely artificial bases. This makes forecasts from ENG more realistic and therefore more reliable. Figure 11. Fourteen-day forecasts starting from T = 11/04.

Focus on the Forecasting with ENG
For our best forecasting method (routine-βγ using ENG), we plot in Figures 15-23 the forecasts for each dimension n = 5 to 10. The plots give the forecasts on a 14 day-ahead window for β, γ, and the resulting evolution of the infected I and removed R. We see that the method performs reasonably well for all values of n, showing that the results of the previous section with the averaged forecast do not compensate for spurious effects which could occur for certain values of n. We have chosen to display the inaccurate forecasts from 3 April, 7 April, and 11 April, as they are among the worst predictions obtained with this method; however, it is important to mention that, despite the lack of accuracy in these cases, the forecasts remain plausible epidemic behaviors, with different but realistic evolutions of β and γ compared to the actual ones. Note that the method was able to predict the peak of the epidemic several days in advance (see Figure 15). We also observe that the prediction of γ is difficult at all times because γ*_obs presents an oscillatory behavior. Despite this difficulty, the resulting forecasts for I and R are generally very satisfactory.

Forecasting of the Second Wave with ENG
The review took place during the month of November 2020, as the second COVID-19 pandemic wave hit France. We took advantage of this to enlarge the body of numerical results, and we provide some example forecasts with ENG for this wave in Figures 24-26 (Figure 24. ENG forecast from T = 28/10). As the figures illustrate, the method provides very satisfactory forecasts on a 14 day-ahead window. We again observe a satisfactory prediction of the second peak (Figures 24-26) and the same difficulty in forecasting γ due to the oscillations in γ_obs, but this has not greatly impacted the quality of the forecasts for I and R.

Forecasts with ENG with a 28 Day-Ahead Window
To conclude this section, we extend the forecasting window from 14 to 28 days and study whether the introduced ENG method still provides satisfactory forecasts. As shown in Figures 27-32, the results of the method are quite stable over these larger windows. This shows that, in contrast to standard extrapolation methods using classical linear or affine regressions, the reduced basis captures the dynamics of β and γ not only locally but also over extended time intervals.

Conclusions
We have developed an epidemiological forecasting method based on reduced order modeling techniques. Of the different strategies explored, the one that outperforms the rest in terms of robustness and forecasting power involves reduced models built with an Enlarged Nonnegative Greedy (ENG) strategy. This method is novel and of interest in itself, as it allows the construction of reduced models that preserve positivity and even other types of bounds. Although ENG does not have optimal fitting properties (i.e., interpolation properties), it is well suited for forecasting since, by preserving the natural constraints of the coefficients, it generates realistic dynamics with few modes. The results have been presented for a mono-regional test case, and we plan to extend the present methodology to a multi-regional setting using mobility data.
Last but not least, we would like to emphasize that the developed strategy is very general and could be applied to the forecasting of other types of dynamics. The specificity of each problem may, however, require adjustments in the reduced models. This is exemplified in the present problem through the construction of reduced models that preserve positivity.

Appendix A. Study of Fitting Errors

Our set of virtual epidemiological dynamics is U. After the collapsing step, the manifolds for β and γ are B and G. These sets are potentially infinite, and we have used finite training subsets B_tr ⊆ B and G_tr ⊆ G to build the reduced models B_n and G_n.
First, we consider the eigenvalues for β and γ when performing an SVD decomposition of the virtual scenarios. Figure A1 shows a rapid decay of the eigenvalues obtained by the SVD decomposition, indicating that we can obtain a very good approximation of the elements of B_tr ⊆ B and G_tr ⊆ G with only a few modes. Among the sources of noise and model error given at the beginning of Appendix A, we can study the impact of the sampling error (point 2-b) as follows. We start by examining the fitting error on an interval of 45 days for two functions β and γ which belong to the manifolds B and G and are in the training sets B_tr and G_tr. This error serves as a benchmark against which we compare the fitting errors of functions that are not in the training sets. Figure A2 shows the relative fitting errors ||β*_n − β*_obs||_{L2([t_0,T])} / ||β*_obs||_{L2([t_0,T])} and ||γ*_n − γ*_obs||_{L2([t_0,T])} / ||γ*_obs||_{L2([t_0,T])} using SVD, NMF and ENG-reduced bases with n = 2, . . . , 20, where the notation x̄ denotes the mean of the quantity x over [0, T]. We observe that, for SVD, the fitting errors behave similarly to the error decay of the training step (Figure A1).
Figure A2. Study of sampling errors: fitting errors (L2 relative errors of β and γ vs. n) of functions β and γ from B and G that belong to the training sets B_tr and G_tr. Figure A3 shows the L1 and L∞ errors obtained after the propagation of the fittings of β and γ. In both metrics, the error for I and R obtained using SVD decreases below 10^−12. When NMF and ENG are used, the error decreases and stagnates at 10^−2 for both I and R. Figure A3. Study of sampling errors: errors on I and R obtained by the susceptible-infected-removed (SIR) model using the fitted β and γ. Now, we consider the fitting error for two functions β and γ which belong to the manifolds B and G but are not in the training sets B_tr and G_tr. Figure A4 shows the fitting error on the virtual scenario considered using SVD, NMF and ENG-reduced bases for n = 2, . . . , 20. We note that the quality of the fittings for each method is very similar to that of Figure A2, where the functions were taken from the training sets. This illustrates that the sampling error does not play a major role in our experiments.
Figure A4. Study of sampling errors: fitting errors (L2 relative errors of β and γ vs. n) of functions β and γ from B and G that do not belong to the training sets B_tr and G_tr. Figure A5 shows the L1 and L∞ errors obtained after the propagation of the fittings of β and γ. In both metrics, the error for I and R obtained using SVD decreases to 10^−14. When NMF and ENG are used, the error decreases and stagnates at 10^−3 and 10^−4 for I and R, respectively. Figure A5. Study of sampling errors: errors on I and R obtained by the SIR model using the fitted β and γ.

Appendix A.2. Study of the Impact of Noisy Data and Intrinsic Model Error
To investigate the impact of noise in the observed data, we now add noise to the two previously chosen functions β ∈ B and γ ∈ G and study the fitting error for these noisy data. The level of the noise is chosen to match the level estimated for the real dynamics: to estimate it, we performed a fitting of the real data using SVD-reduced bases and defined the noise as the difference between this fitting and the real data. This noise is then added to the virtual scenario considered here. Figure A6 shows the fitting error on β and γ using SVD, NMF and ENG-reduced bases for n = 2, . . . , 20.
Figure A6. Fitting errors of β and γ in a noisy virtual scenario ((a) L2 relative error of β vs. n; (b) L2 relative error of γ vs. n).
We observe that the noise strongly deteriorates the fitting error obtained using NMF, and the error becomes oscillatory and very unstable. For ENG, the error remains low and consistently around 10^−2 for β. We observe the same behavior for γ, with instabilities arising for n > 10. For SVD, the error is lower than in the ENG case and slowly decreases as n increases. Figure A7 shows the L1 and L∞ errors obtained after the propagation of the fittings of β and γ. In line with the observations from Figure A6, the error obtained using NMF is very unstable. Using ENG, we observe a decay from n = 2 to n = 7 and oscillations that remain around 10^−2 for I and 10^−3 for R. The decay observed for SVD is slow and steady; the error nearly reaches 10^−4 for I and 10^−5 for R when n = 20. Figure A7. Fitting errors of I and R on noisy data.
Finally, it is necessary to add the intrinsic model error on top of the previous sampling error and observation noise. In so doing, the main change is that the fitting error plots from Figure A6 keep essentially the same behavior, but the error values increase with the degree of model inaccuracy. We have therefore disentangled the effects of model error and noisy data, and the observations from this section give a better insight into the fitting on the real data. Figure A8 summarizes the fitting results for an example fitting period from t_0 = 19/03 to T = 03/05 (Figure A8. Fitting errors from t_0 = 19/03/2020 to T = 03/05/2020). Figure A8a,b shows the fitting error on β and γ using SVD, NMF and ENG-reduced bases for n = 2, . . . , 20. Figure A8c,d shows the L1 and L∞ relative errors on I_n and R_n after the propagation of the fittings of β*_n and γ*_n. From Figure A8a,b, we observe that the fitting error with SVD decreases at a moderate rate as the dimension n of the reduced basis is increased. The error with NMF and ENG does not decrease and oscillates around a constant error value of the order of 10^−1. Note that this value is small and yields small errors in the approximation of I and R, as Figure A8c,d illustrates.

Appendix B. Study of Forecasting Errors
In this section, we present a detailed study of the forecasting errors for each fitting strategy (routine-IR, routine-βγ), reduced model (SVD, NMF, ENG) and starting date T. We anticipate the main conclusion announced in Section 4.2.1: ENG fitted with routine-βγ outperforms the other reduced models and is the most robust and accurate reduced model to use in a forecasting strategy. Figure A9 shows the relative errors of a 14 day forecast from T = 01/04 for each forecasting method and each reduced basis. Similarly, Figures A10-A17 show the forecasts' relative errors from different starting times. We observe that the quality of the forecast depends on the reduced basis but also strongly on the starting day T from which the forecast is produced. The forecasts using routine-βγ with SVD and NMF are not accurate and most often blow up. When routine-IR is used with SVD and NMF, the forecasts are more robust, as there is no observed explosion of the error. Reduced bases from ENG consistently lead to the best forecasts with either routine-βγ or routine-IR; by inspecting the errors in Figures A9-A17 and the averaged forecasts obtained in Section 4.2.1, we conclude that routine-βγ with ENG-reduced bases provides slightly better forecasts.