Model Data Assimilation, overview
Data assimilation is an analysis technique in which the observed information is accumulated into the model state.
Approaches to Data Assimilation
There are three basic approaches to data assimilation:
...
Fig2.5-1: Representation of four basic strategies for data assimilation as a function of time. Observations are made at different times and arrive irregularly. Intermittent assimilation of observations induces step changes in the model analysis (red line), while continuous assimilation of observations gives smoother changes in the model analyses.
Quality Control
Data is quality controlled using a number of constraints:
...
- Data Extraction
  - Thinning (to reduce over-emphasis of values from a small area)
  - Checking for duplicate reports (to avoid giving an observation undue weight)
  - Ship track check (to ensure observations were made at a plausible location)
  - Hydrostatic check
  - Some data is not used, to avoid over-sampling and correlated errors.
  - Departures and flags are still calculated for further assessment.
- Blocklisting
  - Data is excluded because of systematically poor performance or for other reasons (e.g. data assessed as unreliable, inconsistent or misleading).
- Model/4D-Var dependent quality control
  - First guess based rejections (see the sketch after this list)
  - 4D-Var quality control rejections
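As a minimal illustration of the first guess based rejections mentioned above, the sketch below applies a simple rule: reject an observation whose departure from the first guess is large compared with the combined observation and background error standard deviations. The function name, threshold factor and numerical values are assumptions for the example, not the operational IFS settings.

```python
import numpy as np

def first_guess_check(obs, first_guess, sigma_o, sigma_b, k=5.0):
    """Toy first-guess (background) check.

    An observation is rejected when its departure from the first-guess
    forecast exceeds k times the combined observation and background
    error standard deviation. The factor k is illustrative only.
    """
    departure = obs - first_guess
    tolerance = k * np.sqrt(sigma_o**2 + sigma_b**2)
    accepted = np.abs(departure) <= tolerance
    return accepted, departure

# One suspect temperature report among otherwise plausible ones (values in K)
obs = np.array([285.2, 286.0, 299.5, 284.8])
fg = np.array([285.0, 285.5, 285.8, 285.1])
ok, dep = first_guess_check(obs, fg, sigma_o=1.0, sigma_b=0.8)
print(ok)   # -> [ True  True False  True]: the third report fails the check
```

A rejected report is not necessarily discarded outright; as noted above, departures and flags are still calculated so that the data can be assessed further.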
The Analysis
Outline of the analysis process
The analysis process seeks to represent realistically in the model the actual state of the atmosphere. However, inconsistencies in the timing and location of observations mean that this aim can never actually be attained. The best that can be done is to approximate the actual state of the atmosphere as closely as possible while maintaining the numerical stability of the model atmosphere, both horizontally and vertically. The assimilation process is carried out by 4D-Var (see below).

In simple terms, the previous analysis step has used model processes (e.g. dynamics, radiation, etc.) to reach a forecast first guess value at a given location. This will usually differ from an observation at that location and time; the difference between them is the "departure". The analysis scheme then adjusts the value at the location towards the observed value while retaining stability in the model atmosphere. This adjustment is the "Analysis Increment".

If the magnitude of the Analysis Increment is large, it suggests the model is not capturing the state of the atmosphere well (e.g. a jet is displaced, a trough is sharper, or large-scale active convection has not been completely captured). However, a large Analysis Increment may also indicate poor data. Analysis Increment charts are a powerful tool for identifying areas of uncertainty which might propagate downstream.
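As a minimal numerical sketch of these terms (the temperature values and error variances below are illustrative assumptions, not IFS values), the scalar analysis that optimally blends a first guess with an observation weights the departure by the ratio of the error variances:

```python
# Minimal sketch of "departure" and "analysis increment" for a single
# scalar value. The error variances are illustrative assumptions; the
# real analysis solves this weighting problem for a vast number of
# mutually constrained variables under dynamical stability constraints.
first_guess = 287.4   # model first-guess temperature (K) at a location
observation = 288.6   # observed temperature (K) at the same place/time

sigma_b2 = 1.0        # assumed background (first-guess) error variance
sigma_o2 = 0.5        # assumed observation error variance

departure = observation - first_guess        # the "departure"
weight = sigma_b2 / (sigma_b2 + sigma_o2)    # variance-weighted gain
analysis_increment = weight * departure      # how far to move the model
analysis = first_guess + analysis_increment

print(f"departure          = {departure:+.2f} K")           # +1.20 K
print(f"analysis increment = {analysis_increment:+.2f} K")  # +0.80 K
print(f"analysis           = {analysis:.2f} K")             # 288.20 K
```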
...
Fig2.5-2: Schematic of the data assimilation process (from a diagnostic perspective). All the model forecast parameters (dynamics, radiation, etc.) are used in the model forecast to deliver the first guess forecast (red). The observations (grey), however, will normally differ to a greater or lesser extent from the first guess forecast. The analysis increment (yellow) is evaluated to bring the evolution more into line with the observations while maintaining model stability. The resultant value (blue) becomes the first guess for the next analysis.
Model Data Assimilation, 4D-Var
The four-dimensional variational analysis (4D-Var) system uses an optimisation procedure to adjust the initial condition to obtain:
...
Fig2.5-4: Each observation has an error (instrumental, representativeness, etc.) and error within the IFS forecast models is also taken into account. One way to simulate both these effects is to run an Ensemble of Data Assimilations (EDA). These are shown in green.
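To make the idea concrete, here is a toy sketch of the 4D-Var principle: a single-variable "model" is run across an assimilation window, and an optimiser adjusts the initial condition so that the resulting trajectory fits both the background state and observations valid at different times within the window. The model, error variances and observation values are invented for the illustration.

```python
from scipy.optimize import minimize

a = 0.95                      # toy linear model: x[t+1] = a * x[t]
nsteps = 6                    # length of the assimilation window
obs_times = [2, 4, 6]         # model steps at which observations are valid
obs_values = [9.1, 8.4, 7.6]  # the observations themselves (invented)
x_b = 10.5                    # background (first-guess) initial state
sigma_b2, sigma_o2 = 1.0, 0.25

def trajectory(x0):
    """Integrate the toy model forward across the window."""
    xs = [x0]
    for _ in range(nsteps):
        xs.append(a * xs[-1])
    return xs

def cost(x0_vec):
    """4D-Var style cost: background term plus misfit to every observation."""
    x0 = x0_vec[0]            # the optimiser passes a length-1 array
    xs = trajectory(x0)
    j_b = (x0 - x_b) ** 2 / (2.0 * sigma_b2)
    j_o = sum((xs[t] - y) ** 2 / (2.0 * sigma_o2)
              for t, y in zip(obs_times, obs_values))
    return j_b + j_o

result = minimize(cost, x0=[x_b])   # optimise over the initial condition only
print(f"analysed initial condition: {result.x[0]:.3f} (background was {x_b})")
```

Note that only the initial condition is adjusted, while the model dynamics propagate that adjustment to every observation time in the window; this is what makes the method "four-dimensional".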
Analogies to 4D-Var and the EDA
Suppose a forecaster wants to create a sequence of manually analysed hourly synoptic charts for their region which evolve smoothly, continuously and realistically from one time to the next. One approach would be to draw up one chart, using all available data (surface observations, imagery, etc.), then another for a subsequent time in the same way. The forecaster might then go back to the first chart, rub something out and re-draw it in a rather different fashion that still fits the available observations reasonably well but allows the next chart to follow on better. They might then draw up charts for other times and repeat this process many times, rubbing out and re-drawing, probably all of the charts in some way or other, to achieve the final goal of sensible continuity. Each time a chart is readjusted the changes needed become smaller and smaller, until finally the forecaster is happy that they have a full and consistent sequence. This is the forecaster's equivalent of 4D-Var. Of course, 4D-Var has the additional constraint of full vertical consistency, though in the forecaster's world soundings and imagery may contribute in an analogous way.
Meanwhile the Ensemble of Data Assimilations (EDA) can be thought of as the same process run several times, producing slightly different, smooth and continuous, but equally probable sequences. Where there are many observations the sequences might end up almost the same, but where data is sparse there could be much more variability between them. And where there is large dynamic instability (e.g. a developing frontal wave rather than the centre of an anticyclone) this 'spread' might be even greater. This is essentially what happens in the real EDA: more spread is found amongst members in data-sparse areas and when there is larger innate uncertainty in actual atmospheric developments.
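In code, the EDA idea can be sketched along the same lines (all numbers and the sequential update rule below are toy assumptions): perturb the background and the observations in line with their assumed errors, repeat the analysis for each member, and measure the spread of the resulting analyses.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_analysis(background, obs_list, sigma_b2, sigma_o2):
    """One assimilation of several observations of a scalar state.

    Sequentially blends each observation into the state using the
    variance-weighted update from the earlier sketch.
    """
    x, var = background, sigma_b2
    for y in obs_list:
        w = var / (var + sigma_o2)
        x = x + w * (y - x)
        var = (1.0 - w) * var       # analysis error variance shrinks
    return x

def eda_spread(n_members, n_obs, truth=15.0, sigma_b2=1.0, sigma_o2=0.5):
    """Run an ensemble of perturbed assimilations and return its spread."""
    analyses = []
    for _ in range(n_members):
        background = truth + rng.normal(0.0, np.sqrt(sigma_b2))
        obs = truth + rng.normal(0.0, np.sqrt(sigma_o2), size=n_obs)
        analyses.append(toy_analysis(background, obs, sigma_b2, sigma_o2))
    return np.std(analyses)

# Data-rich versus data-sparse region: more observations -> less spread
print(f"spread with 20 obs: {eda_spread(50, 20):.3f}")
print(f"spread with  1 obs: {eda_spread(50, 1):.3f}")
```

With the toy numbers above, the 20-observation case gives a spread several times smaller than the single-observation case, mirroring the behaviour described for data-rich versus data-sparse regions.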
Additional Sources of Information
(Note: In older material there may be references to issues that have subsequently been addressed)
- Read more on 4D-Var.
- Read more on the use of EDA in 4D-Var.
- Read more on data assimilation and interpolation of data.
- Watch a webinar on Data assimilation and 4D-Var (30sec delay).
(FUG Associated with Cy49r1)