Statistical Explorations and Univariate Timeseries Analysis on COVID-19 Datasets to Understand the Trend of Disease Spreading and Death

“Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2)”, the novel coronavirus, is responsible for the ongoing worldwide pandemic. The “World Health Organization (WHO)” assigned an “International Classification of Diseases (ICD)” code, “COVID-19”, as the name of the new disease. Coronaviruses are generally carried by people and many diverse species of animals, including birds and mammals such as cattle, camels, cats, and bats. Infrequently, a coronavirus can be transferred from animals to humans and then propagate among people, as happened with “Middle East Respiratory Syndrome (MERS-CoV)”, “Severe Acute Respiratory Syndrome (SARS-CoV)”, and now with the new virus, “SARS-CoV-2”, or human coronavirus. Its rapid spread has sent billions of people into lockdown as health services struggle to cope. The COVID-19 outbreak has been accompanied by exponential growth of new infections, as well as a growing death count. A major goal in limiting further exponential spread is to slow down the transmission rate, which is denoted by a “spread factor (f)”, and in this study we propose an algorithm for analyzing it. This paper addresses the potential of data science to assess the risk factors correlated with COVID-19 by analyzing existing datasets available at “ourworldindata.org (Oxford University database)” and newly simulated datasets, and then comparing different univariate “Long Short Term Memory (LSTM)” models for forecasting new cases and resulting deaths. The results show that the vanilla, stacked, and bidirectional LSTM models outperformed the multilayer LSTM models. In addition, we discuss the findings of the statistical analysis on the simulated datasets. For the correlation analysis, we included features such as external temperature, rainfall, sunshine, population, infected cases, deaths, country, area, and population density for the past three months, January, February, and March 2020.
For univariate timeseries forecasting using LSTM, we used datasets from 1 January 2020, to 22 April 2020.


Introduction
In December 2019, Chinese authorities released the first official information to the world about the spreading of the human coronavirus in their country as a community disease [1][2][3]. Until 15th May 2020, more than 4.6 million people had been infected, with more than 0.38 million deaths [4,5]. COVID-19 is a new ICD code that appeared with multiple significant research questions and research directions, which is why the research output related to COVID-19 is still limited in number. Multiple reputed publishing agencies, such as "Springer", "Nature", "Wiley", and "Taylor & Francis Group", have made all COVID-19 related articles open access and freely available [15,16].
Different studies have been conducted by different research groups on COVID-19 to analyze its nature, effect, spreading, and probable consequences with statistical data analysis and AI-based approaches. We classified COVID-19 related studies based on two popular AI-inspired approaches, machine learning (ML) and deep learning (DL), as follows: (a) machine learning-based approaches. Dong et al. [17] developed an interactive, publicly available web-based dashboard to track the outbreak for scientists, researchers, public health authorities, and the general public. It was hosted by the "Center for Systems Science and Engineering (CSSE)" at Johns Hopkins University, Baltimore, MD, USA, to visualize and follow reported cases of COVID-19 in real time. Yang et al. [18] developed a dynamic SEIR model with machine learning (ML) to predict the COVID-19 epidemic peaks and sizes, trained with 2003 SARS data for the period after 23 January in China. The research team estimated when the epidemic would peak in Hubei, China, and when it would start declining gradually, considering quarantine as a factor. Rao et al. [19] researched a machine learning (ML) based framework to identify COVID-19 related cases quickly using a phone-based survey. The framework can help to classify cases as no-risk, minimal-risk, moderate-risk, or high-risk, so that high-risk cases can be quarantined earlier, thereby diminishing the chance of spread. Men et al. [20] researched the incubation period of COVID-19 with a machine learning (ML) approach, and their results showed that the incubation distribution of COVID-19 did not follow common incubation distributions such as the Lognormal, Weibull, and Gamma distributions. They estimated that the mean and median of the COVID-19 incubation period were 5.84 and 5.0 days, respectively, via bootstrap and proposed "Monte Carlo" simulations.
They also noticed that the incubation periods of the groups with age >= 40 years and age < 40 years exhibited a statistically significant variation: the former group had a more extended incubation period and a more significant variance than the latter. The study further indicated that separate quarantine times should be employed for the groups due to their distinct incubation periods. Pandey et al. [21] researched proactive management with machine learning methods to raise "WASH" awareness for maintaining personal hygiene. They utilized the co-creation technique to develop a user interface solution using mHealth technologies (the WashKaro app) in the local Indian language "Hindi". They utilized a total of 13 combinations of pre-processing approaches and evaluated word embeddings and similarity metrics with 8 human participants via the calculation of agreement statistics. They achieved the best performance with a Cohen's Kappa of 0.54, and the solution was deployed as "On Air", the WashKaro app's AI-powered back end. Li et al. [22] evaluated the risk of a pandemic for all cities and regions in China using the popular machine learning classifier "Random Forest (RF)" with identified factors such as accumulative and increased numbers of confirmed cases, total population, population density, and gross domestic product (GDP). The experiment found a risk of unnecessary economic loss due to COVID-19. Yan et al. [23] and Jia et al. [24] worked on predictive models for the criticality of COVID-19. The first research group developed a machine learning based (XGBoost) prognostic model with clinical data from Wuhan collected from 10 January to 18 February 2020, based on 3 clinical features; the model can predict the health risk and quickly assess the risk of death. The latter research group used the "Logistic model", "Gompertz model", and "Bertalanffy model" to predict the cumulative number of confirmed cases and the development trend of the COVID-19 epidemic.
The "Logistic model" outperformed the other models in fitting all the data of Wuhan, while the "Gompertz model" performed better in fitting the data of non-Hubei areas. Randhawa et al. [25,26] conducted two ML-based genomic studies to analyze genomic signatures, providing evidence of associations between the Wuhan 2019-nCoV and bat coronaviruses, and to classify novel pathogens of COVID-19 rapidly. (b) deep learning-based approaches. Gozes et al. [27] developed automated, artificial intelligence-based 2D and 3D deep learning CT image analysis tools to detect, quantify, track, and monitor corona-infected patients and distinguish them from those who are not infected. Zhang et al. [28] proposed a deep learning-based drug screening model, "DFCNN", for the novel coronavirus 2019-nCoV with the virus RNA sequence database "GISAID" and demonstrated that it can differentiate coronavirus patients from those who do not have the disease. Xu et al. [29] conducted a study to establish an early screening model to distinguish COVID-19 pneumonia from Influenza-A viral pneumonia and healthy cases with pulmonary CT images using deep learning techniques, with 86.7% accuracy. Shan et al. [30] and Li et al. [31] conducted their research on CT images with deep learning techniques to quantify lung infections in COVID-19 patients and to distinguish COVID-19 patients from community-acquired pneumonia patients, respectively. Narin et al. [32] and Wang et al. [33] researched deep convolutional neural network designs to identify COVID-19 cases from chest X-ray images. Ghosal et al. [34] investigated drop-weights based "Bayesian Convolutional Neural Networks (BCNN)" to estimate uncertainty in deep learning-based solutions and expand the diagnostic performance of the human-machine team using a publicly available COVID-19 chest X-ray dataset, and showed that the uncertainty in prediction is highly correlated with the accuracy of prediction. Santosh et al. [35] and Hu et al.
[36] researched human coronavirus outbreak forecast models with AI approaches. The former research group utilized ML algorithms to analyze data, followed by decision making, to forecast the nature of COVID-19 spread across the globe using active learning-based cross-population train/test models on multimodal data. The latter research group used a deep learning LSTM model (a modified stacked auto-encoder) to forecast and estimate the size, length, and ending time of COVID-19 across China, based on the data collected by WHO from 11 January to 27 February 2020. Maghdid et al. [37] designed an AI-enabled framework to diagnose COVID-19 using smartphone-embedded sensors. The developed low-cost solution takes input from the camera sensor (CT scan images of lungs, human tracking video observation), the inertial sensor (30-second sit-to-stand), the microphone sensor (cough voice prediction), and the temperature fingerprint sensor (fingerprint on the screen) to predict COVID-19, based on deep learning (RNN and CNN) techniques.
The AI-inspired approaches are powerful tools for helping public health planning and policymaking. Our research aims to perform statistical analysis on COVID-19 related datasets available at "ourworldindata.org" [5] and on a newly created dataset to find a set of probable risk factors associated with the spreading of COVID-19, which we identified as a research gap. Once the correlation analysis was accomplished, we explored univariate LSTM models for timeseries forecasting of total cases and deaths. LSTM is an artificial "recurrent neural network (RNN)" architecture used in the field of deep learning; therefore, in this research, we followed a deep learning-based approach. In addition, we proposed an algorithm to test our hypothesis that social isolation or social distancing might restrict the spreading of COVID-19.
The global scientific community is looking at three possible solutions to counter COVID-19: virus enzyme inhibitors [38,39], plasma therapy [40], and vaccination. According to the WHO director-general, the safest and fastest method of corona treatment is patient identification, separation, examination, and treatment. WHO has specified a standard on its official website where guidelines are formally specified to slow down and prevent further transmission. "Worldometers.org" [11], "ourworldindata.org" [5], and WHO [4] are updating situation reports, data tables, and a COVID-19 dashboard on a regular basis. We assumed that all the available data provided by all countries on total case numbers, total deaths, total recoveries, daily cases, daily deaths, and daily recoveries are correct, and based on that assumption, we carried out our further analysis of the data.
The main contributions of this paper are as follows: (a) discussion of the risks associated with human coronavirus spreading; (b) identification of a set of probable correlated factors associated with the expansion of COVID-19, following statistical approaches on the fabricated datasets; (c) analysis of the impact of social isolation, via a spread factor ("f"), on restricting the spread of the human coronavirus; and (d) analysis of different univariate LSTM models for forecasting total cases and total deaths caused by COVID-19.
The remainder of the paper is structured as follows: In Section 2, the risks associated with human coronavirus spreading are discussed with data. Section 3 describes the methodology utilized for the data processing. In Section 4, we discuss our findings. The paper is concluded in Section 5. Clinical trials, chemical compounds, genetic analysis, political arguments, and economic analysis related to COVID-19 are beyond the scope of this paper.

Risks Associated with the Spreading of COVID-19
COVID-19 has caused a significant health crisis and economic slowdown in many countries since January 2020 due to global and local lockdowns to encourage social distancing. It has infected more than 4.6 million people so far, with more than 0.38 million deaths and more than 1.7 million recoveries reported until 15th May 2020 [4,5]. It has attacked not only developed nations but also developing ones, regardless of socioeconomic condition, age, and gender. COVID-19 is highly contagious and transmissible from human to human, with an incubation period of up to 24 days [6].
WHO officials initially considered SARS-CoV-2 non-airborne, but a recent study discovered that it can survive in the air, staying suspended as an aerosol depending on factors such as heat and humidity [41]. Therefore, the infection mediums can be classified as contact (direct or indirect), droplet spray in short-range transmission, and aerosol in long-range (airborne) transmission [41]. According to the "Centers for Disease Control and Prevention (CDC)", a social distance of about 1.8 m is necessary to avoid large droplets of virus-laden mucus [42], but some experts suggest that a 1.8 m distance is not enough [43] due to possible air currents (Table 1). Pollution caused by nitrogen dioxide (NO2) can be one of the most critical contributors to the increased fatality rate caused by COVID-19 [44]. Recent studies found the existence of SARS-CoV-2 in sewage water [45] and non-potable water [46]. Scientists are exploring how humidity, temperature, and ultraviolet light alter the virus, as well as how long it can survive on different surfaces. Some studies have revealed that relative humidity affects all infectious virus droplets, independent of their source and location [41], and that gravity and airflow cause most virus droplets to fall to the ground. Temperature, along with humidity, affects the properties of viral surface proteins and the lipid membrane [41]. According to the same study, humidity between 50% and 80% is best for low stability of SARS-CoV-2 [41].
According to the studies [9,53,54], SARS-CoV-2 can persist on different objects and surfaces as follows: (a) half of the samples from the soles of ICU medical staff shoes tested positive; (b) surface contamination (computer mouse, trash cans, sickbed handrails, doorknobs); (c) equipment (exercise equipment, medical equipment including spirometers, pulse oximeters, and nasal cannulas, personal computers, iPads, and reading glasses); and (d) surfaces (cellular phones, remote controls, toilets, room floors, bedside tables, bed rails, and window ledges).
According to epidemiologists, the fatality rate of COVID-19 can change as SARS-CoV-2 mutates. WHO claimed that social distancing is the only way to slow down COVID-19 transmission, which is why many countries are locked down and people are asked to stay at home. The purpose of social distancing is not to eradicate COVID-19 but to slow down its transmission, thereby declining the pressure on health care systems and the economy and, in this manner, reducing the fatality rate. COVID-19 might infect around 90% of the global population if no mitigation measures are taken soon, as estimated by a leading statistical modeling group at "Imperial College London (ICL)" [55]. COVID-19 took 67 days for its initial 0.1 million cases, but then took just 3 days to go from 0.4 million to 0.5 million cases, as depicted in Figure 2. The ICL team analyzed that if proactive measures, such as social distancing and rigorous testing and isolation of diseased people, are taken with proper planning when the fatality rate of each infected country is 0.2/100,000 victims/week, the outcome might reduce worldwide deaths to 1.9 million. Studies found that Italy hit the 0.

Methodology
We performed the following three analytical studies in this paper: (a) correlation analysis to identify how human coronavirus spreading and its fatality are related to factors such as external temperature, sunshine, rainfall, population, area, and density; (b) finding the importance of a social isolation factor ("f") in restricting the spread of COVID-19; and (c) development of univariate LSTM models to forecast total deaths and total cases globally or country-wise (choice-based), and comparison of their performance.
The overall process (methodology) includes [56-58]: (a) data collection/data simulation, (b) data pre-processing, (c) statistical analysis and data visualization, (d) algorithm selection for LSTM model development, (e) model training and testing, (f) model evaluation, and (g) model reusability.

Data Collection
In this study, we used two types of datasets: (a) real datasets available at "ourworldindata.org", for timeseries forecasting and data visualization, and (b) simulated datasets. For univariate timeseries forecasting, we used the "ourworldindata.org" datasets (total cases and total deaths) from 1 January 2020 to 22 April 2020, for the whole world and for specific afflicted countries separately, for individual processing.
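As a minimal sketch of this step, the world-level cumulative series can be isolated with pandas; the tiny table below stands in for the downloaded file (the numbers are invented for illustration, and the column names mirror typical "ourworldindata.org" exports):

```python
import pandas as pd

# Toy stand-in for the downloaded table: one row per location and day,
# with cumulative totals.
df = pd.DataFrame({
    "date": ["2020-01-01", "2020-01-02", "2020-01-03", "2020-01-04"],
    "location": ["World", "World", "World", "World"],
    "total_cases": [27, 27, 44, 59],
})

# Keep the world-level rows and index the cumulative series by date.
world = df[df["location"] == "World"].copy()
world["date"] = pd.to_datetime(world["date"])
series = world.set_index("date")["total_cases"]
```

The same filter with a country name in place of "World" yields the per-country series used for individual processing.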

The datasets are described in Table 3. All the simulated datasets are available in the repository mentioned in the "Supplementary Materials", along with the Python codebase to reproduce the results.

Data Processing
Collected data are categorized into two groups: continuous and categorical. The accumulated data in this research are labeled. The downloaded data from "ourworldindata.org" are inconsistent, with missing values. We utilized data mining techniques for filtering data samples from the dataset, discarding samples containing outliers, pattern discovery, calculation of feature correlations, feature selection, and noise removal. Data processing combines the three steps stated below [56][57][58]:
• Data preprocessing, which includes data integration; removal of noisy data that are incomplete and inconsistent; data normalization and feature scaling; encoding of categorical data; feature selection after correlation analysis; and splitting of data for training and testing an LSTM model.
• Training of an LSTM model and testing of its accuracy with the loss functions described in Section 3.5.
• Data postprocessing, which includes pattern evaluation, pattern selection, pattern interpretation, and pattern visualization.
In this experiment, we used "Python 3.x" language libraries for data processing, as described in Table 4. We established a Python environment using the Anaconda distribution and the "Spyder IDE" for developing the Python-based deep learning application. We used the traditional "Keras" library with the "TensorFlow" backend for LSTM model development, training, and testing.
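The data-splitting step above must frame the cumulative series as supervised (input window, next value) samples before an LSTM can be trained on it. A minimal sketch of that framing (the helper name split_sequence is our own, not a library function):

```python
import numpy as np

def split_sequence(sequence, n_steps):
    """Split a univariate series into (samples, targets) pairs:
    each sample is a window of n_steps values, the target is the next value."""
    X, y = [], []
    for i in range(len(sequence) - n_steps):
        X.append(sequence[i:i + n_steps])
        y.append(sequence[i + n_steps])
    return np.array(X), np.array(y)

series = [10, 20, 30, 40, 50, 60]
X, y = split_sequence(series, n_steps=3)
```

With n_steps = 3, the six-value series yields three samples; for Keras, X would then be reshaped to (samples, n_steps, 1) to add the feature dimension.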

Statistical Analysis
In this study, we performed the following two statistical approaches: hypothesis testing and correlation analysis. Hypothesis testing is a statistical method used to reach statistical decisions from trial data. The critical element of hypothesis testing is the null hypothesis (H0), which states that there is nothing different or unique about the data. On the contrary, the alternative hypothesis (Ha) directly contradicts H0. The confidence factor or value of significance (α) is used to decide whether to accept or reject H0. The value of α is usually kept at 0.05 or 5%, as 100% accuracy in accepting or rejecting H0 is impossible to achieve. A popular, widely used hypothesis testing method and a short description are presented in Table 5. For the testing method, the resultant probability value (p-value) is compared with α to accept or reject a null hypothesis [56][57][58]. Example: Hypothesis (H0). The time series has a unit root (non-stationary); it has some time-dependent structure.

Hypothesis (Ha). The time series does not have a unit root (stationary); it does not have a time-dependent structure. With α = 5% or 0.05, the null hypothesis is rejected when the p-value falls below α. Table 5. Hypothesis testing method [62].

Method: Augmented Dickey-Fuller test. Description: tests whether a timeseries is stationary or non-stationary.

Covariance (COV(x, y)) measures how two variables vary together. It helps to measure the correlation coefficient (rxy), which quantifies the strength of the linear relationship between two variables.
corr(x, y) = COV(x, y)/(σx · σy), where −1 ≤ r ≤ +1. The sign shows the direction of the relationship between the two variables x and y. Table 6 shows the meaning of different |r| values. If two variables are strongly correlated, it is recommended to select only one of them during feature selection. Pearson's correlation coefficient is used to summarize the strength of the linear relationship between two normally distributed variables, and Spearman's correlation is used to assess the non-linear (monotonic) relationship between two variables. The statistical methods used are described in Table 7 [56-58,61]. Table 6. Meaning of |r| values: medium to substantial, 0.6-0.8; very strong, 0.8-1; extremely strong.
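The corr(x, y) formula above can be checked with a small, dependency-free sketch (pearson_r is a hypothetical helper of our own, not a library call):

```python
import math

def pearson_r(x, y):
    """corr(x, y) = COV(x, y) / (sigma_x * sigma_y), as in the text."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])   # perfectly linear, so r = 1.0
```

A positive r indicates that the two variables rise together; a value near −1 indicates that one rises as the other falls.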

LSTM Modelling
Long short-term memory (LSTM) networks [63] are applied to long-term dependency problems, such as timeseries forecasting, handwriting recognition, speech detection, and anomaly detection in network traffic. LSTMs are a special kind of RNN used in the field of deep learning. An LSTM model has a chain-like structure (a cell, an input gate, an output gate, and a forget gate), but the repeating module has a different structure from a standard RNN. Unlike standard feedforward neural networks, LSTMs have feedback connections. LSTM networks are well suited to classify, process, and make predictions based on timeseries data. They are used to overcome the following two problems associated with RNNs: exploding gradients and vanishing gradients. There are different types of LSTM models (univariate, multivariate, multi-step, and multivariate multi-step), which can be used for specific types of timeseries forecasting problems. In this study, we used univariate LSTM models, such as vanilla, stacked, bidirectional, and multilayer, for timeseries forecasting. The stages of a vanilla LSTM model are summarized below and illustrated in Figure 5.
Step#1: What do we need to forget? Identify the information that is not required and must be thrown away from the cell state. This decision is made by a sigmoid layer called the forget gate layer ("ft").
Step#2: What new information are we going to add to the cell state? A sigmoid gate called the "input gate layer" decides which values will be updated ("it"). Next, a "tanh" layer creates a vector of new candidate values that could be added to the state.
Step#3: Combine Step#1 and Step#2 to achieve a new cell state ("ct").
Step#4: Finally, receive the output ("ht").
In this study, we have selected six LSTM models for timeseries analysis and forecasting. There are three deep learning model optimizers for hyperparameter tuning and cross validation: a. adaptive gradient (ADAGRAD), b. RMSProp (which adds exponential decay), and c. ADAM. In this study, we used the "ADAM" optimizer.
The "Dense layer" is the regular, deeply connected neural network layer.
"ReLU" stands for rectified linear unit; it is a type of activation function, defined mathematically as y = max(0, x). Its convergence is faster, it is fast to compute, and it is sparsely activated.
LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm such as gradient descent combined with backpropagation through time to compute the gradients needed during the optimization process, changing every weight of the LSTM network in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to the corresponding weight.
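The four stages above can be sketched as a single LSTM cell step in NumPy. This is an illustrative toy (the weight containers W, U, b and the function name are our own, with random values), not the Keras implementation used in the study:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM cell step following the four stages in the text."""
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])    # Step 1: forget gate
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])    # Step 2: input gate
    c_hat = np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])  # Step 2: candidates
    c_t = f_t * c_prev + i_t * c_hat                          # Step 3: new cell state
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])    # output gate
    h_t = o_t * np.tanh(c_t)                                  # Step 4: output
    return h_t, c_t

n_in, n_hid = 1, 4
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((n_hid, n_in)) for k in "fico"}
U = {k: rng.standard_normal((n_hid, n_hid)) for k in "fico"}
b = {k: np.zeros(n_hid) for k in "fico"}
h, c = lstm_step(np.array([0.5]), np.zeros(n_hid), np.zeros(n_hid), W, U, b)
```

Because h_t is a sigmoid-gated tanh, every component of the hidden output stays strictly inside (−1, 1).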

Model Training and Testing
The steps applied to train and test an LSTM model in this study are described below:
• Importing of the Python libraries.
• Execute the model five times and then calculate the average of the performance metrics (described in Section 3.5) and of the predicted value. This improves the reliability of testing and increases the validity of the timeseries analysis.

Note:
a. Univariate sequences are timeseries data of total cases and total deaths for the world or individual countries. In this study, we considered univariate timeseries data for the world for both training and testing of the LSTM models, but the same models can be extended for individual countries.
b. The "acc" refers to accuracy in metrics = ["acc"] of the corresponding LSTM model.
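The five-run averaging described above reduces to a simple reduction over per-run metric dictionaries; the RMSE/MAE numbers below are invented for illustration:

```python
def average_metrics(runs):
    """Average each performance metric across repeated runs (five here),
    reducing variance from random weight initialization."""
    return {k: sum(r[k] for r in runs) / len(runs) for k in runs[0]}

# Hypothetical per-run results for one LSTM model.
runs = [{"RMSE": 12.1, "MAE": 9.0},
        {"RMSE": 11.8, "MAE": 8.6},
        {"RMSE": 12.4, "MAE": 9.3},
        {"RMSE": 12.0, "MAE": 8.9},
        {"RMSE": 11.7, "MAE": 8.7}]
avg = average_metrics(runs)
```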

Model Performance Evaluation
The developed univariate LSTM models for timeseries forecasting are evaluated with the below regression metrics: mean absolute error (MAE), mean squared error (MSE), root mean square error (RMSE), forecast bias, and the R2 regression metric.
MAE is the simplest error metric used in regression problems: MAE = (1/n) Σ |yi − ŷi|. MSE squares the difference between the actual and predicted output before summing, instead of using the absolute value: MSE = (1/n) Σ (yi − ŷi)2. RMSE is the square root of the calculated MSE. The forecast bias can be either positive or negative and is calculated directly as the mean of the forecast error. A mean forecast error other than zero suggests a tendency of the model to over-forecast (negative error) or under-forecast (positive error); as such, the mean forecast error is also called the forecast bias. If the forecast error = 0, there is no error, or perfect skill for that forecast; if the forecast bias < 0, the model over-forecasts; and if the forecast bias = 0 or close to zero, the model is unbiased.
The R2 regression metric has been used for explanatory purposes, to provide an indication of the fitness of the predicted output values to the actual output values. It is calculated with a formula having the MSE as the numerator and the variance of the Y values as the denominator. R2 signifies how much of the variance of the data is explained by the model. R2 = 0.90 means that 0.10 of the variance cannot be explained by the model; in the ideal case of R2 = 1, the model fits completely and explains all the variance. A calculated R2 > 1 represents an abnormal case that has no logical meaning, and it may result from a small sample size.
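The five metrics can be computed from scratch as a sketch consistent with the definitions above (error is taken as actual minus predicted, so a negative mean error signals over-forecasting; the function name and test values are our own):

```python
import math

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMSE, forecast bias, and R2 from paired actual/predicted values."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]      # actual - predicted
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    bias = sum(errors) / n                                # mean forecast error
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)       # variance * n
    r2 = 1 - sum(e * e for e in errors) / ss_tot
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "bias": bias, "R2": r2}

m = regression_metrics([100, 110, 120, 130], [102, 108, 121, 129])
```

Here the over- and under-forecasts cancel (bias = 0) while MAE and RMSE still expose the size of the individual errors, which is why the study reports several metrics side by side.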

Model Store and Reuse
We saved our final trained LSTM model in a file and restored it for reuse, either to compare the model with other models or to test the model on new or updated data. The process of storing the model is called serialization, and restoring the model is called deserialization. It can be done in two ways, as described in Table 8. The pickled model can be stored in a database for distributed access. Table 8. LSTM model store [61,64].

Method | Implementation
Pickle string | import pickle
Pickled model | from sklearn.externals import joblib
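A minimal serialization/deserialization round trip with pickle looks as follows; a plain dict stands in for the trained LSTM model here (real Keras models are typically saved with their own save method instead of pickle):

```python
import os
import pickle
import tempfile

# Stand-in for a trained model object.
model = {"weights": [0.1, 0.2], "n_steps": 3}

path = os.path.join(tempfile.mkdtemp(), "lstm_model.pkl")
with open(path, "wb") as fh:
    pickle.dump(model, fh)        # serialization: object -> file

with open(path, "rb") as fh:
    restored = pickle.load(fh)    # deserialization: file -> object
```

joblib follows the same dump/load pattern and is usually preferred for objects holding large numeric arrays.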

Algorithm Design to Find the Importance of Social Distancing
In this research, we studied the importance of social distancing by flattening the curve of the afflicted population over specific days, with a spreading factor ("f") of 0 < f ≤ 5 [6]. The spread factor is used to determine the transmission rate of a virus [6].
If f = 0, there is no spreading; otherwise, one infected person can infect up to 3-5 people daily at most [6]. Recovery from COVID-19 takes a maximum of 7-10 days [6]; therefore, we selected the value of "days_to_recover" as 10. In the proposed algorithm, we assumed that no patient has died. The "days" feature can be contemplated as a "lockdown" period. Algorithm 1, used for the analysis, is described below.
Algorithm 1. Importance of social distancing by flattening the curve of the afflicted population over specific days.
Step 1: Initialize the necessary parameters as follows to create a simulated town infected with COVID-19:
days = 100 /* lockdown days */
population = 200,000 /* population of the town */
spread_factor = 0.25 /* COVID-19 transmission rate (0 < f ≤ 5) */
days_to_recover = 10 /* maximum recovery days from COVID-19 */
initial_afflicted_people = 5 /* initial people of the town infected with COVID-19 */
Step 2: Initialize a data frame ("town") for the simulated town with the following four features:
id = range(population) /* id ∈ (0-population) */
infected = false
recovery_day = none
recovered = false
Step 3: Initialize the initial cases ("initialCases") from the initial_afflicted_people variable, set the corresponding infected feature to true, and set the recovery_day feature from the days_to_recover variable.
Step 4: Initialize the initial active cases ("active_cases") from the initial_afflicted_people variable and the initial recovered cases ("recovery") to 0.
Step 5: for day = 1 to days do
Step 5.1: Mark the people in the town data frame who have recovered on the current day: set the recovered feature to True and the infected feature to False if they have crossed days_to_recover; otherwise ignore.
Step 5.2: Calculate the number of people who are afflicted today with spread_factor: count the people in the town data frame with infected = True, then multiply the count of total infected people by spread_factor to calculate the total possible new cases on the current day.
Step 5.3: Exclude people who were already infected from the current day's new cases.
Step 5.4: Mark the new cases as afflicted, along with their recovery day, by updating the active_cases and recovery lists of the town data frame.
Step 6: Repeat Steps 1-5 for spread_factor = 0.25 to 5.0 and plot each distribution graph of active_cases over days.

Notes:
a. The algorithm was implemented with "simulated_data_2".
b. The worst-case time complexity of the algorithm is O(N²), where N = problem size.
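A minimal Python sketch of Algorithm 1 is given below. It follows the steps above under the stated simplifying assumptions (no deaths, fixed recovery window), but models the town with plain lists rather than the pandas data frame used in the paper, and the exact sampling of newly infected people is an assumption:

```python
import random

def simulate(days=100, population=200_000, spread_factor=0.25,
             days_to_recover=10, initial_afflicted=5, seed=42):
    """Algorithm 1: track active COVID-19 cases in a simulated town."""
    random.seed(seed)
    # Step 2: one record per inhabitant.
    infected = [False] * population
    recovered = [False] * population
    recovery_day = [None] * population
    # Step 3: seed the initial cases.
    for pid in range(initial_afflicted):
        infected[pid] = True
        recovery_day[pid] = days_to_recover
    active_cases = [initial_afflicted]          # Step 4
    for day in range(1, days + 1):              # Step 5
        # Step 5.1: mark people who have recovered by the current day.
        for pid in range(population):
            if infected[pid] and recovery_day[pid] <= day:
                infected[pid], recovered[pid] = False, True
        # Step 5.2: new possible cases today, scaled by the spread factor.
        currently_infected = sum(infected)
        new_cases = int(currently_infected * spread_factor)
        # Step 5.3: only people never afflicted before can be infected.
        susceptible = [pid for pid in range(population)
                       if not infected[pid] and not recovered[pid]]
        # Step 5.4: mark the new cases and set their recovery day.
        for pid in random.sample(susceptible, min(new_cases, len(susceptible))):
            infected[pid] = True
            recovery_day[pid] = day + days_to_recover
        active_cases.append(sum(infected))
    return active_cases
```

Plotting `active_cases` over `days` for each spread factor in [0.25, ..., 5.0] (Step 6) reproduces the distribution curves discussed in the Results section.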

Results and Discussion
The correlation analysis of the simulated data ("simulated_data_1") is depicted in Figure 6. The resultant correlation heatmap is a well-accepted data visualization method in the machine learning community; it illustrates the magnitude of a phenomenon as color in two dimensions. Here, the variation in color encodes the value of the correlation factor "r", giving clear visual cues about how the phenomenon clusters or varies over space. The color changes according to the value of "r", from a weak correlation to a strong correlation. The color bar beside the correlation matrix indicates that color change over the "r" values, where −1 ≤ r ≤ +1, as described in Section 3.3.
We excluded the feature "country" from the correlation study. The study was conducted to investigate how infected cases and deaths are related to external temperature, sunshine, and precipitation. A correlation factor of |r| > 0.6 represents a strong correlation, according to Table 5.
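The correlation factor "r" used throughout is the Pearson coefficient; a stdlib-only sketch is shown below (the heatmap itself would typically be produced with pandas/seaborn, which is not reproduced here):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two samples, -1 <= r <= +1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Per Table 5, |r| > 0.6 is interpreted as a strong correlation between the two features.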
In this study, we represented the relation between total population (p), cases (c), and deaths (d) with the following functions ("f"): c = f(p) and d = f(c), where p > 0, c > 0, d > 0, and p, c, d are natural numbers (ℕ). Hence, d = f(f(p)). Generally, dc/dt ≥ 0, dd/dt ≥ 0, and dp/dt > 0, where "t" is the time and t > 0. Let p′ be the total infected population, where p′ ⊆ p.
Let c = f(p′) be a function defined on an interval [a, a + h], where "a" is the initial infected population, "h" is the newly infected population, a, h ∈ p′, a ≥ 0, and h < p′.
Therefore, the instantaneous rate of change of "c" at "a" is its derivative:

f′(a) = lim_{h→0} (f(a + h) − f(a))/h

Hence, for a small change "h", f′(a) approximates to (f(a + h) − f(a))/h. Subsequently, it can be derived that dc/dt = dp′/dt ≥ 0.

Figure 6. Correlation heatmap of simulated data ("simulated_data_1") to check feature correlation.
The correlation analysis, as depicted in Figure 6, exhibits that COVID-19 does not have any dependency on external temperature, sunshine, or precipitation. It is genuinely a community disease. Death is highly correlated (|r| > 0.8) to the number of cases rather than to the weather (external temperature, sunshine, and precipitation), as depicted in Figure 7. We performed exponential regression analysis to plot the increase in deaths (Y-axis) with the increase in the number of cases (X-axis), as depicted in Figure 7, and the obtained equation of the approximated exponential curve is: Y = e^(5.95734475e+00) × e^(1.25996126e-05·X).
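The fitted curve above can be evaluated directly; a short sketch using the reported coefficients (the fit itself, produced for Figure 7, is not reproduced here):

```python
import math

def predicted_deaths(cases):
    """Evaluate the exponential regression curve reported for Figure 7:
    Y = e^(5.95734475) * e^(1.25996126e-05 * X)."""
    return math.exp(5.95734475e+00) * math.exp(1.25996126e-05 * cases)
```

The curve is monotonically increasing, so any growth in total cases maps to a strictly larger predicted death count.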
The total cases are highly related (|r| > 0.7) to the population, as depicted in Figure 6. If the population increases, the number of new deaths also increases, given the high correlation value of |r| depicted in Figure 6. Therefore, social distancing or social isolation is one of the primary keys to stopping the spread. Countries with high population density, such as Bangladesh, Singapore, Pakistan, and India, have a high chance of being afflicted by COVID-19 very drastically unless it is controlled from the beginning. Hence, social isolation, lockdown, and social distancing are significant measures for stopping the spread of COVID-19 at the community level.
That is why many countries have been locked down and people are being asked to stay at home. Lockdown has a chance to slow down the spread of COVID-19 by flattening the curve of the afflicted population over days and relaxing pressure on the healthcare system. It is one of the essential measures to restrict the fatality rate of COVID-19. Besides the decision of lockdown, ordinary people should understand its importance, as the human coronavirus is highly contagious.
We hypothesized that social isolation or social distancing might restrict the spreading of the human coronavirus, as it may slow down the spread factor ("f"). To prove the assumed hypothesis, we proposed an algorithm in Section 3.8. After executing the algorithm with simulated data ("simulated_data_2"), we plotted different distribution graphs of "active cases" (Y-axis) over the number of lockdown "days" (X-axis), for the following set of spread factor ("f") values: [0.25, 0.5, 0.75, 1.0, 2.0, 3.0, 4.0, 5.0], as depicted in Figure 8.
The spread factor ("f") with the lowest value of 0.25 produced a nice Gaussian-shaped distribution in Figure 8. With an increasing spread factor ("f"), active cases grow high compared to the lockdown period, as described in Table 9. If the average load of active cases goes high in a short span of days, as described in Table 9, the healthcare sector may collapse, unable to cope and provide adequate treatment to infected patients. Consequently, the recovery rate may become very low and the death rate may increase. Figure 8 illustrates that social isolation or social distancing has a significant impact on flattening the curve of the afflicted population over days, alleviating sudden pressure on the existing capacity of the healthcare system.
India implemented its first lockdown from 23 March 2020 to 13 April 2020 (21 days) and the second lockdown until 3 May 2020. The trend of total reported cases has been compared between four Asian countries, namely India, Singapore, Iran, and Turkey, until 22 April 2020, as depicted in Figure 9. The trend shows that a successful lockdown might slow down the spreading of the human coronavirus in India and Singapore. As per the study at "Johns Hopkins University", the human coronavirus growth rate in India is declining consistently, flattening the case-doubling curve due to the first phase of lockdown [14].
We downloaded four types of timeseries data from "ourworldindata.org": a. the total number of cases, b. total deaths, c. new confirmed cases, and d. new deaths. We performed hypothesis testing on the timeseries data to check whether they are stationary or not, following Table 5. The result is described in Table 10.
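The stationarity check can be run with an Augmented Dickey-Fuller (ADF) test; the decision rule behind Table 10 reduces to comparing the test's p-value with a significance level. A minimal sketch of that decision step is shown below, assuming the conventional α = 0.05 (the paper does not state its threshold explicitly, and the p-value itself would come from e.g. `statsmodels.tsa.stattools.adfuller`):

```python
def adf_decision(p_value, alpha=0.05):
    """Interpret an ADF test p-value.
    Null hypothesis: the series has a unit root (i.e., it is non-stationary)."""
    if p_value < alpha:
        return "reject null hypothesis: data is stationary"
    return "fail to reject null hypothesis: data has a unit root and is non-stationary"
```

For the timeseries in Table 10, the null hypothesis could not be rejected, so the data were treated as non-stationary.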
We analyzed the performance of the six LSTM models described in Section 3.4 on the following two datasets available in "ourworldindata.org": a. the total number of cases, and b. total deaths, in order to forecast the probable total infected cases and deaths in advance. The designed models can be used to forecast the total infected cases and total deaths of any country available in "ourworldindata.org" individually. We processed data from 1 January 2020 to 22 April 2020, as described in Section 3.5. In total, 97% of the data was utilized to train the models, and the remaining 3% was used for testing the performance of the models (110 future predictions in total). We executed training and testing of the individual models five times, then took the average of the corresponding performance metrics and predicted values. The average performance results of the different LSTM models are described in Tables 11 and 12, and the corresponding model calibrations are depicted in Figures 10 and 11, respectively. According to the results, no single model is 100% accurate, and they tend to either over-forecast or under-forecast. The vanilla, stacked, and bidirectional LSTM models performed better than the multilayer LSTM models. In this study, we focused only on the general trend of the data, which might be the reason for over-forecasting. The forecasting may help us become aware of upcoming unwanted situations and take necessary actions in advance to mitigate them.

Figure 10. Comparing the calibration of the LSTM models to forecast total cases of the "World".

Figure 11. Comparing the calibration of the LSTM models to forecast total deaths of the "World".
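The evaluation protocol above (chronological 97%/3% split and averaging over five runs) can be sketched as follows. The helper names are hypothetical, and the LSTM training itself (done with a deep learning framework such as Keras) is deliberately omitted:

```python
def train_test_split(series, train_fraction=0.97):
    """Chronological split: the first 97% of the timeseries is used for
    training, the final 3% is held out for testing future predictions."""
    cut = int(len(series) * train_fraction)
    return series[:cut], series[cut:]

def average_runs(runs):
    """Element-wise mean of the predictions from repeated model runs,
    reducing the variance introduced by random weight initialization."""
    return [sum(step) / len(step) for step in zip(*runs)]
```

Each model would be trained and tested five times on the training/test split, and `average_runs` applied to the five prediction vectors before computing the performance metrics.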

For verification, we trained our vanilla, stacked, and bidirectional LSTM models with the Indian dataset available in "ourworldindata.org", from 1 January 2020 to 23 March 2020. The focus was to forecast the approximate total number of cases 21 days after 23 March 2020, as the first lockdown period of India ended on 13 April 2020. We executed the individual models five times, then took the average of the total predicted cases. Once forecasting was completed, we verified whether lockdown (social distancing/social isolation) had any impact on lowering the spread of the human coronavirus. The result showed that, without lockdown, India could have crossed 0.2 million total corona cases by 14 April 2020. Therefore, it supports our assumed hypothesis that social isolation/social distancing is one of the main criteria for fighting COVID-19.

Conclusions
The statistical correlation study showed that COVID-19 does not depend on external weather factors, such as external temperature, sunshine, and precipitation. It depends mostly on the population and its density. Therefore, it is considered a community disease. This research verified our assumed hypothesis that social isolation/social distancing might restrict the spreading of the human coronavirus by diminishing its spread factor. The forecasting of probable new corona cases and death counts with the proposed LSTM models in this study may help in taking necessary actions in advance to control an upcoming undesirable health crisis. SARS-CoV-2 can infect people of all ages, but people who have pre-existing medical conditions such as COPD, CVDs, diabetes, hypertension, cerebrovascular disease, and cancer are more susceptible to becoming severely sick with the viral infection. Complete data related to the different health factors, age, sex, and health history of COVID-19 infected patients are still not publicly available to conduct more detailed research.
In the future, the accuracy of the LSTM forecasting can be improved by considering additional needed parameters rather than relying on the univariate trend of timeseries data. eHealth with information and communication technologies (ICT) [65] may open a new direction in COVID-19 research and remote patient monitoring, by collecting the necessary health and wellness data through standard sensors and questionnaires, followed by training a decision support system (DSS) for tailored recommendation generation.

Funding: This research is funded by the "University of Agder, Department of Information and Communication Technology, Center for e-Health, Grimstad, Norway".