Research Article

Forecast of Dengue Incidence Using Temperature and Rainfall

  • Yien Ling Hii,

    yienling.hii@epiph.umu.se

    Affiliation: Umeå Centre for Global Health Research, Epidemiology and Global Health, Department of Public Health and Clinical Medicine, Umeå University, Umeå, Sweden

  • Huaiping Zhu,

    Affiliation: Laboratory of Mathematical Parallel Systems, Department of Mathematics & Statistics, York University, Toronto, Ontario, Canada

  • Nawi Ng,

    Affiliation: Umeå Centre for Global Health Research, Epidemiology and Global Health, Department of Public Health and Clinical Medicine, Umeå University, Umeå, Sweden

  • Lee Ching Ng,

    Affiliation: Environmental Health Institute, National Environment Agency, Singapore, Singapore

  • Joacim Rocklöv

    Affiliation: Umeå Centre for Global Health Research, Epidemiology and Global Health, Department of Public Health and Clinical Medicine, Umeå University, Umeå, Sweden

  • Published: November 29, 2012
  • DOI: 10.1371/journal.pntd.0001908

Reader Comments (1)


Would like to see cross-validation

Posted by bcreiner on 11 Dec 2012 at 18:36 GMT

This is very impressive work, and I imagine it will be very useful for the public health workers of Singapore. I would like to have seen a little more cross-validation of the model. Based on the autoregressive term, which uses dengue case data from each of the last 6 weeks to predict the next week, the model appears to be fit to optimize one-week-ahead prediction (not 16-weeks-ahead). I may be reading this wrong, but the results shown in Figure 2 (while impressive) are one-week-ahead predictions, since through the D_{AR} term they require the dengue cases from the previous week to run the model. The only place (again, as I read it; I could be wrong) where the authors predict cases more than one week into the future is in the prediction experiment they run for 2012, where they use predicted case data to iteratively predict the next week of case data, and so on. Since the model uses 6 weeks of old case data, it only truly starts 'predicting' once it goes out beyond 6 weeks. The last part of Figure 3 shows the early results of this test, but since those predictions genuinely extend into the future, the results were not in by publication.
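The iterative scheme described here (feeding each prediction back in as autoregressive input for the next step) can be sketched as below. This is an illustrative pure-AR toy, not the paper's model: the coefficients, the 6-week lag structure as the only input, and the function names are all assumptions, since the actual model also includes temperature and rainfall covariates.

```python
import numpy as np

def iterative_forecast(history, coeffs, horizon=16, lags=6):
    """Roll a simple AR(lags) model forward `horizon` weeks, feeding each
    prediction back in as input for the next step. Purely illustrative:
    the published model also uses weather covariates."""
    cases = list(history[-lags:])
    preds = []
    for _ in range(horizon):
        # one-step prediction from the last `lags` observed/predicted weeks
        y_next = float(np.dot(coeffs, cases[-lags:]))
        preds.append(y_next)
        cases.append(y_next)  # predicted value becomes input for the next step
    return preds
```

Note that only the first `lags` steps use any observed case counts; beyond that the forecast runs entirely on its own output, which is why multi-week skill must be validated separately from one-week skill.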

What I would like to have seen is a variation of drop-one prediction in which they take the data, hold out 16 sequential weeks, and then see how well the model predicts that 16th week using the same approach they used for predicting 2012. By doing this for every 16-week period, they would arrive at a set of predictions that would truly test the 16-week predictive power of the model.
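The blocked hold-out proposed above could be sketched as follows; `fit` and `predict` are hypothetical placeholders standing in for the paper's actual model, and the scoring (raw error at the 16th held-out week) is an assumption for illustration.

```python
def blocked_16wk_validation(series, fit, predict, block=16):
    """For each target week t, train only on data ending 16 weeks earlier,
    roll predictions forward `block` weeks, and record the error at the
    final (16th) held-out week. `fit`/`predict` are model placeholders."""
    errors = []
    for t in range(block, len(series)):
        train = series[:t - block + 1]            # data up to week t - 16
        model = fit(train)
        forecast = predict(model, train, steps=block)  # weeks t-15 .. t
        errors.append(series[t] - forecast[-1])   # error at the 16th week
    return errors
```

Aggregating these errors over every 16-week window would give exactly the out-of-sample measure of 16-week predictive power the comment asks for.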

No competing interests declared.