...
- there is no need for a comprehensive understanding of physics theory. The relationships are simple to program, but at the expense of a physical understanding of the meteorological processes at work.
- the AI process is effectively a "black box", producing results by a process unknown to the user. It requires a good deal of trust in the method, though initial results show high effectiveness. The user can have difficulty interpreting or explaining forecast results.
- the ability to interpret the results of AI forecasts ("Interpretability") may be built up with experience; the ability to explain the results of AI forecasts ("Explainability") may be more difficult.
- the set of observed and forecast variables is limited (see Table 1).
- post-processing at a given location may require further physical or practical interpretation.
- problems of inter-compatibility between the programming languages of physical and AI models. This may become an issue where hybrid models are employed (e.g. when transferring data from an AI model to a physical model for post-processing).
- input observations at different times and locations have to be assigned to specific grid points (encoding), and the reverse process assigns forecast values from grid points back to specific locations (decoding).
- each forecast variable is produced independently of the others, so the forecast wind may not be consistent with the forecast pressure or height gradient.
(FUG Associated with Cy49r1)