The AI Weather Quest leaderboards page provides a dynamic view of how teams and models are performing throughout the competition.
The AI Weather Quest is a collaborative forecasting challenge hosted by ECMWF, designed to explore the potential of AI and machine learning in medium-range weather prediction. Participating teams are challenged to provide quintile-based probabilistic forecasts for three key variables: near-surface air temperature (tas), mean sea level pressure (mslp), and accumulated precipitation (pr). Forecasts target lead times of Days 19–25 (week 3) and Days 26–32 (week 4).
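To make the forecast format concrete, the sketch below converts an ensemble forecast into probabilities for the five climatological quintile categories. It is a minimal NumPy illustration, not the competition's official submission tooling; the array names and example values are hypothetical.

```python
import numpy as np

def quintile_probabilities(ensemble: np.ndarray, quintile_edges: np.ndarray) -> np.ndarray:
    """Turn an ensemble forecast into quintile-category probabilities.

    ensemble       -- 1-D array of ensemble-member values for one grid point and
                      one forecast window (e.g. the Days 19-25 mean of tas).
    quintile_edges -- the four climatological quintile boundaries (20th, 40th,
                      60th and 80th percentiles) for that grid point and window.

    Returns five probabilities that sum to 1: the fraction of members falling
    in each climatological quintile.
    """
    # Assign each member to one of the five categories (0..4).
    categories = np.digitize(ensemble, quintile_edges)
    # Count members per category and normalise to probabilities.
    counts = np.bincount(categories, minlength=5)
    return counts / counts.sum()

# Hypothetical example: an 11-member temperature ensemble (K) and climatological edges.
ensemble = np.array([271.2, 272.0, 272.4, 272.9, 273.1, 273.3,
                     273.8, 274.0, 274.5, 275.1, 275.6])
quintile_edges = np.array([271.5, 272.8, 273.9, 275.0])
print(quintile_probabilities(ensemble, quintile_edges))  # approx. [0.09 0.18 0.36 0.18 0.18]
```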
Starting 19 September 2025, the Leaderboards page will display Ranked Probability Skill Score (RPSS) tables and evolution graphs for each evaluated forecast period. Note that evaluation results only become available after the evaluation date (day 37 of each competition week) has passed.
This guide explains how to use the leaderboards page effectively and track the evolution of team and model performance over time.
Filters apply to all elements on the page, including RPSS tables and evolution graphs. By default, the page displays the latest evaluated period and week, the first forecast window (Days 19–25), and variable-averaged scores.
To customize your view, use the filter options to select the evaluation period and week, the forecast window (Days 19–25 or Days 26–32), and the variable (tas, mslp, pr, or the variable-averaged score).
All rankings are based on the RPSS of each team's best-performing model. However, individual model ranks and scores are also visualised, allowing users to compare performance across multiple submissions from the same team.
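For reference, the sketch below shows how a Ranked Probability Skill Score can be computed for a single quintile forecast. It follows the standard RPSS definition rather than the competition's official scoring code, and it assumes a climatological reference that assigns a probability of 0.2 to each quintile.

```python
import numpy as np

def rps(forecast_probs: np.ndarray, observed_category: int) -> float:
    """Ranked Probability Score for one quintile forecast.

    forecast_probs    -- probabilities for the five quintile categories (sum to 1).
    observed_category -- index (0..4) of the quintile the observation fell into.
    """
    obs = np.zeros_like(forecast_probs)
    obs[observed_category] = 1.0
    # Sum of squared differences between the cumulative forecast and observed distributions.
    return float(np.sum((np.cumsum(forecast_probs) - np.cumsum(obs)) ** 2))

def rpss(forecast_probs: np.ndarray, observed_category: int) -> float:
    """RPSS relative to a climatological forecast of 0.2 per quintile.

    1 indicates a perfect forecast, 0 no improvement over climatology,
    and negative values a forecast worse than climatology.
    """
    climatology = np.full(5, 0.2)
    return 1.0 - rps(forecast_probs, observed_category) / rps(climatology, observed_category)

# Hypothetical example: a forecast leaning towards the upper quintiles,
# verified against an observation in the top quintile.
print(rpss(np.array([0.05, 0.10, 0.15, 0.30, 0.40]), observed_category=4))  # ~0.60
```

In practice a leaderboard score aggregates many grid points and forecast dates before the skill-score ratio is formed; the sketch above only shows the per-forecast building block.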
Click any RPSS score in the table to view the corresponding forecast in the ECMWF-hosted sub-seasonal AI forecasting portal.
There are two main types of RPSS tables. Each table is accompanied by two evolution graphs: one showing how team rankings evolve week by week, and one tracking the performance trends of individual models across weeks.
Click on any team name in the leaderboard to view the ranks and scores of that team's individual models.