Real-time data is delivered to authorised or licensed users either directly by ECMWF or via the chosen Member or Co-operating State National Meteorological Service. This choice is specified during the ordering process.

Real-time forecast data delivery - technical set-up

Our technical team will set up the dissemination of the data based on the information you have provided in the registration form that you completed when you ordered your real-time data.

We will contact you via the ticket you opened with us to let you know that the set-up is ready and that dissemination can start.

Required information for delivery of real-time forecast data from ECMWF

We deliver real-time products via Google Cloud, Microsoft Azure, Amazon AWS, S3-compatible third-party storage systems and FTP/SFTP. The exact requirements for each method are listed below.

Please note that the speed of transfers can vary according to geographic location, service provider and internet stability/connectivity.


Google Cloud Platform

For Google Cloud Platform, we will need the following information:

  • Bucket name (mandatory)
  • Client email (mandatory)
  • Client ID (mandatory)
  • Prefix (optional)
  • Private key (mandatory)
  • Private key ID (mandatory)
  • Project ID (mandatory)
  • Scheme (e.g. https)

These can be found/extracted from the Google service account credentials file.
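As an illustration, most of these fields map directly onto keys in a standard Google service account key file; the sketch below pulls them out (the labels and function name here are ours, not ECMWF's, and the bucket name and prefix are chosen by you rather than stored in the key file):

```python
import json

# Map the requested fields to the keys used in a standard Google
# service account credentials JSON file.
FIELD_KEYS = {
    "Client email": "client_email",
    "Client ID": "client_id",
    "Private key": "private_key",
    "Private key ID": "private_key_id",
    "Project ID": "project_id",
}

def extract_dissemination_info(credentials_path):
    """Read a service account key file and return the values listed above."""
    with open(credentials_path) as f:
        creds = json.load(f)
    return {label: creds[key] for label, key in FIELD_KEYS.items()}
```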

We need the following permissions: storage.objects.get, storage.objects.list, storage.objects.create, storage.objects.delete, storage.buckets.get.

Why do we need extra permissions?

  • storage.objects.create: to upload new files.
  • storage.objects.get: to read object metadata (e.g. size, checksum) after upload.
  • storage.objects.delete: to remove or replace corrupted files and re-upload them.
  • storage.objects.list: to check if an object already exists in the bucket.
  • storage.buckets.get: to confirm that the target bucket exists and is reachable.

Microsoft Azure Platform

We will need the following information:

  • SAS (Shared Access Signature) URL with access to service, container and objects (the URL should contain the parameter srt=sco)
  • Blob container name

We need the following permissions: blob.upload(), blob.exists(), blob.delete(), blob.download() and blob.getProperties().

Why do we need extra permissions?

  • blob.upload(): to upload new files into the container.
  • blob.exists(): to check if an object is already present.
  • blob.delete(): to remove or replace corrupted files and re-upload them.
  • blob.download() / blob.getProperties(): to read object metadata (e.g. size, checksum) after upload.
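A quick way to sanity-check a SAS URL before sending it to us is to inspect its query string. The sketch below assumes an account SAS whose permission letters (sp) should cover read, write, delete, list and create to match the operations listed above; adjust the required letters to your own policy. It only parses the URL and does not contact Azure:

```python
from urllib.parse import urlparse, parse_qs

def check_sas_url(sas_url):
    """Return a list of problems found in an account SAS URL, or [] if it
    looks usable for dissemination."""
    query = parse_qs(urlparse(sas_url).query)
    problems = []
    # srt must cover service (s), container (c) and object (o) resources
    srt = query.get("srt", [""])[0]
    if not {"s", "c", "o"} <= set(srt):
        problems.append(f"srt={srt!r} does not cover service, container and object")
    # permissions needed for uploads, existence checks, deletes and reads
    sp = query.get("sp", [""])[0]
    for perm in "rwdlc":
        if perm not in sp:
            problems.append(f"permission {perm!r} missing from sp={sp!r}")
    if "sig" not in query:
        problems.append("signature (sig) parameter missing")
    return problems
```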

Amazon AWS Platform

We will need the following information:

  • aws_access_key_id 
  • aws_secret_access_key
  • Bucket name

We need the following permissions: s3:DeleteObject, s3:PutObject, s3:GetObject, s3:ListBucket.

For Amazon S3 transfers, ECMWF can also enable Amazon S3 Transfer Acceleration and Amazon S3 Dual-Stack Endpoints. Please let us know if you wish to use these features.

Why do we need extra permissions?

  • s3:PutObject: to upload new files. Uploads are atomic; objects only appear once the full transfer has succeeded.
  • s3:GetObject: to read object metadata (e.g. size, checksum) after upload.
  • s3:DeleteObject: to remove or replace corrupted files and re-upload them.
  • s3:ListBucket: to check that the target bucket exists.
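As an illustration, an IAM policy granting these permissions might look like the following (YOUR-BUCKET-NAME is a placeholder; note that the object actions apply to the objects inside the bucket, while the bucket-existence check uses s3:ListBucket on the bucket ARN itself):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EcmwfDisseminationObjects",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
    },
    {
      "Sid": "EcmwfDisseminationBucketCheck",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME"
    }
  ]
}
```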

S3-Compatible Third-Party Storage Systems

We will need the following information:

  • aws_access_key_id 
  • aws_secret_access_key
  • Bucket name

We need the following permissions: s3:DeleteObject, s3:PutObject, s3:GetObject, s3:ListBucket.

We can configure S3-compatible third-party storage systems (for example, Google Cloud Storage via the S3 API or Cloudflare R2). Please ensure the provided endpoint and credentials follow the standard S3 format.
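For reference, credentials in the standard S3 format look like the fragment below (all values and endpoints are placeholders; confirm the exact endpoint with your provider's documentation):

```ini
; Standard S3-style credentials (profile name is illustrative)
[ecmwf-dissemination]
aws_access_key_id     = YOUR-ACCESS-KEY-ID
aws_secret_access_key = YOUR-SECRET-ACCESS-KEY

; Example S3-compatible endpoints:
;   Cloudflare R2:        https://<account-id>.r2.cloudflarestorage.com
;   Google Cloud Storage: https://storage.googleapis.com
```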

Why do we need extra permissions?

  • s3:PutObject: to upload new files. Uploads are atomic; objects only appear once the full transfer has succeeded.
  • s3:GetObject: to read object metadata (e.g. size, checksum) after upload.
  • s3:DeleteObject: to remove or replace corrupted files and re-upload them.
  • s3:ListBucket: to check that the target bucket exists.

FTP/SFTP Connection

We will need the following information:

  • host name or IP
  • protocol (ftp/sftp)
  • username and password (or SSH key*) with read-write permission
  • receiving directory

*Alternatively, for SFTP, we support SSH key-based authentication. In this setup, we will provide our public SSH key, which should be added to the appropriate user's list of authorised keys on your server (typically in the .ssh/authorized_keys file within the user's home directory). Once configured, we will initiate connections using the corresponding private key.

Please note that for FTP we support the following ports:

  • 21 (standard)
  • 2121 (non-standard)

For SFTP we support:

  • 22 (standard)
  • 2222 (non-standard)

Why do we need extra permissions?

  • read: to verify uploaded files and check directory contents.
  • write: to upload files.
  • delete/rename: to replace corrupted files and to finalise transfers (temporary .tmp files, where used, are renamed to their final name only after a successful upload).

Our dissemination system is designed to push files directly to your storage. During this process, we do more than simply upload objects. We also check whether a file already exists, and if it is found to be corrupted or incomplete, we delete it and resend the correct version. For this reason, we require permissions not only to upload files but also to delete, rename, and overwrite them in the receiving directory.

For certain methods such as FTP/SFTP, dissemination files may have the extension '.tmp' during transmission. If the transmission is unsuccessful for any reason, it is repeated. The file is renamed to its original name only after successful transmission.
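The temporary-file pattern described above can be sketched in a few lines (a minimal local-filesystem illustration of the idea, not ECMWF's actual implementation):

```python
import os

def deliver(data: bytes, final_path: str) -> None:
    """Write to a '.tmp' file first, then rename it into place.
    Consumers polling the directory never see a partially written
    product: the final name appears only after the write succeeds."""
    tmp_path = final_path + ".tmp"
    with open(tmp_path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # ensure bytes are on disk before renaming
    os.replace(tmp_path, final_path)  # atomic rename on POSIX filesystems
```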

Firewall set up and permissions

Set up your firewall to accept connections from the following ECMWF dissemination hosts:

INTERNET:

  • 136.156.192.0/26
  • 136.156.193.0/26

RMDCN:

  • 136.156.196.0/26
  • 136.156.197.0/26

For servers in the Americas, also allow:

  • 195.190.82.172
  • 195.190.82.140
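To verify on your side that an incoming connection really originates from one of these ranges, you can check the source address against them; a small sketch using Python's standard ipaddress module, with the ranges copied from the lists above:

```python
import ipaddress

# ECMWF dissemination source ranges listed above
ECMWF_SOURCES = [
    ipaddress.ip_network(n)
    for n in (
        "136.156.192.0/26",   # internet
        "136.156.193.0/26",   # internet
        "136.156.196.0/26",   # RMDCN
        "136.156.197.0/26",   # RMDCN
        "195.190.82.172/32",  # servers in the Americas
        "195.190.82.140/32",  # servers in the Americas
    )
]

def is_ecmwf_dissemination_host(ip: str) -> bool:
    """True if the address falls inside one of the published ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ECMWF_SOURCES)
```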

Access to ECPDS

To access ECPDS, you need to enter your primary TOTP in the password field. Therefore, you must first configure a TOTP.

Further details on the setup can be found here:

Using Time-based One-Time Passwords
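For reference, the standard RFC 6238 algorithm behind such codes can be sketched with the Python standard library. This is a generic illustration using the common defaults (HMAC-SHA1, 6 digits, 30-second step); the parameters ECMWF actually uses are described on the linked page, and authenticator apps typically store the shared secret Base32-encoded:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, step=30, digits=6) -> str:
    """RFC 6238 time-based one-time password with HMAC-SHA1."""
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```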


