Selected article for: "autoregressive model and delay distribution"

Author: Meakin, S.; Abbott, S.; Bosse, N. I.; Munday, J. D.; Gruson, H.; Hellewell, J.; Sherratt, K.; CMMID COVID-19 Working Group; Funk, S.
Title: Comparative assessment of methods for short-term forecasts of COVID-19 admissions in England at the local level
  • Cord-id: nd94s24o
  • Document date: 2021_10_18
  • ID: nd94s24o
    Snippet: Background: Forecasting healthcare demand is essential in epidemic settings, both to inform situational awareness and facilitate resource planning. Ideally, forecasts should be robust across time and locations. During the COVID-19 pandemic in England, it is an ongoing concern that demand for hospital care for COVID-19 patients in England will exceed available resources. Methods: We made weekly forecasts of daily COVID-19 hospital admissions for National Health Service (NHS) Trusts in England bet…
    Document: Background: Forecasting healthcare demand is essential in epidemic settings, both to inform situational awareness and facilitate resource planning. Ideally, forecasts should be robust across time and locations. During the COVID-19 pandemic in England, it is an ongoing concern that demand for hospital care for COVID-19 patients in England will exceed available resources. Methods: We made weekly forecasts of daily COVID-19 hospital admissions for National Health Service (NHS) Trusts in England between August 2020 and April 2021 using three disease-agnostic forecasting models: a mean ensemble of autoregressive time series models, a linear regression model with 7-day-lagged local cases as a predictor, and a scaled convolution of local cases and a delay distribution. We compared their point and probabilistic accuracy to a mean-ensemble of them all, and to a simple baseline model of no change from the last day of admissions. We measured predictive performance using the Weighted Interval Score (WIS) and considered how this changed in different scenarios (the length of the predictive horizon, the date on which the forecast was made, and by location), as well as how much admissions forecasts improved when future cases were known. Results: All models outperformed the baseline in the majority of scenarios. Forecasting accuracy varied by forecast date and location, depending on the trajectory of the outbreak, and all individual models had instances where they were the top- or bottom-ranked model. Forecasts produced by the mean-ensemble were both the most accurate and most consistently accurate forecasts amongst all the models considered. Forecasting accuracy was improved when using future observed, rather than forecast, cases, especially at longer forecast horizons. Conclusions: Assuming no change in current admissions is rarely better than including at least a trend. 
Using confirmed COVID-19 cases as a predictor can improve admissions forecasts in some scenarios, but this is variable and depends on the ability to make consistently good case forecasts. However, an ensemble can produce forecasts that are consistently more accurate across time and locations. Given minimal requirements on data and computation, our admissions forecasting ensemble could be used to anticipate healthcare needs in future epidemic or pandemic settings.
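The "scaled convolution" model described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the delay distribution, the scaling factor (an assumed case-hospitalisation ratio), and all names are hypothetical.

```python
import numpy as np

def forecast_admissions(cases, delay_pmf, scaling=0.1):
    """Forecast daily admissions as a scaled convolution of daily case
    counts with a case-to-admission delay distribution (a sketch of the
    approach described in the abstract; parameters are assumptions)."""
    full = np.convolve(cases, delay_pmf)   # full discrete convolution
    return scaling * full[:len(cases)]     # align output to observed days

# Hypothetical discretised delay distribution over 0-6 days (sums to 1)
delay_pmf = np.array([0.05, 0.15, 0.25, 0.25, 0.15, 0.10, 0.05])
cases = np.array([100, 120, 150, 180, 200, 220, 250], dtype=float)
admissions = forecast_admissions(cases, delay_pmf)
```

Under this sketch, day-t admissions depend on cases from the preceding week, weighted by how likely each lag is under the delay distribution, then scaled by the assumed admission ratio.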
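The abstract measures predictive performance with the Weighted Interval Score (WIS). A minimal sketch of the standard WIS definition (median plus a set of central prediction intervals, with the usual weights w0 = 1/2 and w_k = alpha_k / 2) might look like this; function names are illustrative:

```python
def interval_score(lower, upper, alpha, y):
    """Interval score for a central (1 - alpha) prediction interval:
    width plus penalties for observations falling outside the interval."""
    score = upper - lower
    if y < lower:
        score += (2.0 / alpha) * (lower - y)
    elif y > upper:
        score += (2.0 / alpha) * (y - upper)
    return score

def weighted_interval_score(median, intervals, y):
    """WIS over a predictive median and a dict {alpha: (lower, upper)}
    of central prediction intervals."""
    K = len(intervals)
    total = 0.5 * abs(y - median)  # w0 = 1/2 on the median term
    for alpha, (lo, hi) in intervals.items():
        total += (alpha / 2.0) * interval_score(lo, hi, alpha, y)
    return total / (K + 0.5)
```

Lower WIS is better; the penalty terms reward both sharp and well-calibrated intervals, which is why it suits comparing probabilistic forecasts across models, horizons, and locations as in this study.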

    Search related documents:
    Co-phrase search for related documents
    • absolute difference and adjusted value: 1
    • absolute difference and admission number: 1
    • absolute difference and local level: 1
    • absolute difference and low quality: 1, 2
    • absolute difference and low quality care: 1
    • absolute error and accurate forecast: 1, 2, 3, 4, 5
    • absolute error and additional model: 1
    • absolute error and admission number: 1
    • absolute error and local level: 1, 2, 3
    • absolute error and local response: 1
    • absolute error and location time: 1, 2, 3
    • absolute error and low quality: 1, 2, 3, 4