Improvement of retrospective optimal interpolation by incorporating eigen-decomposition and covariance inflation

Bibliographic Details
Published in: Quarterly Journal of the Royal Meteorological Society, Vol. 138, No. 663, pp. 353-364
Main Authors: Song, Hyo-Jong; Lim, Gyu-Ho
Format: Journal Article
Language: English
Published: Chichester, UK: John Wiley & Sons, Ltd, 01.01.2012

Summary: Retrospective optimal interpolation (ROI) is a method that is used to minimize cost functions with multiple minima without using adjoint models. We address two weaknesses associated with the cost-effective formulation of ROI and offer possible solutions.

The first weakness of the cost-effective ROI formulation is that the error tolerance would become large in practical application due to computation costs. When the error tolerance is large, accuracy-saturated modes do not extract information from new incoming observations even though they are likely to be flawed. To address this problem, we modify the existing 'reduced-resolution' formulation by using eigen-decomposition of the background error covariance at each analysis step. We refer to the modified algorithm as a 'reduced-rank' algorithm. This modification allows us to deal with larger error variances while analysing the same number of control variables, because eigen-decomposition steeply reorders the error variance in descending order. As a result, when the reduced-rank algorithm is applied, the number of analysed control variables becomes smaller than when the reduced-resolution algorithm is used with the same error tolerance.

The second weakness is the underestimation of the trailing mode of background error covariance that is projected onto the future observation space. This originates from errors in control variables extracted from analysis procedures. To prevent the occurrence of filter divergence due to this underestimation, we reduce the weighting of the observational increments in the analysis. By implicitly assuming that the rate of being projected onto the future observation space of the trailing eigenmodes is similar to that of the leading eigenmodes, we develop a method that inflates the observation error covariance and consequently improves analysis quality in the Lorenz 40-variable and the 960-variable model experiments. Copyright © 2011 Royal Meteorological Society
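The two ideas in the abstract — truncating the eigen-decomposition of the background error covariance to the leading modes within an error tolerance, and inflating the observation error covariance before the analysis update — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the synthetic covariance, the observation operator `H`, the tolerance value, and the inflation factor are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 40, 10                      # state dimension (Lorenz 40-variable size), obs count
# synthetic symmetric positive-definite background error covariance (assumption)
A = rng.standard_normal((n, n))
B = A @ A.T / n + 0.1 * np.eye(n)

# --- reduced-rank step: eigen-decompose B, keep leading modes ---
vals, vecs = np.linalg.eigh(B)     # eigh returns eigenvalues in ascending order
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]   # now descending

tol = 0.05                         # error tolerance: fraction of total variance discarded
cum = np.cumsum(vals) / vals.sum()
r = int(np.searchsorted(cum, 1.0 - tol)) + 1   # smallest rank keeping >= 95% variance
E = vecs[:, :r] * np.sqrt(vals[:r])            # square-root factor, so B ~= E @ E.T

# --- analysis update with inflated observation error covariance ---
H = rng.standard_normal((p, n))    # hypothetical linear observation operator
R = np.eye(p)
inflation = 1.2                    # illustrative inflation factor, > 1 down-weights obs
R_infl = inflation * R

xb = np.zeros(n)                   # background state
y = rng.standard_normal(p)         # observations (synthetic)

HE = H @ E                                     # obs operator in reduced space, (p, r)
S = HE @ HE.T + R_infl                         # innovation covariance, (p, p)
K = (E @ HE.T) @ np.linalg.inv(S)              # reduced-rank gain, (n, p)
xa = xb + K @ (y - H @ xb)                     # analysis state
```

Because the eigenvalues are sorted in descending order, the retained rank `r` is typically much smaller than `n` for the same variance tolerance, which is the point of the reduced-rank reformulation; the inflated `R` shrinks the gain `K`, reducing the weight of the observational increments as described for the second weakness.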
Bibliography: ark:/67375/WNG-P2751D3P-S
ArticleID: QJ911
istex: C0789E22D49A3AA4770D77D4632C88FB4FBA93B9
ISSN: 0035-9009, 1477-870X
DOI: 10.1002/qj.911