The motivation for this test case is to assess how NWP systems handle WIGOS identifiers (IDs).
To this end, a Python 3 program has been written to add WIGOS IDs to the SYNOP messages currently received at ECMWF.
The outline of this page is:
1) Problem description
2) Program flow
3) Test data file and caveats
1) Problem description
A WIGOS ID consists of four parts, for example 0-2XXXX-0-YYYYY:
| WIGOS Identifier Series | Issuer of Identifier | Issue Number | Local Identifier |
|---|---|---|---|
| 0 | 2XXXX | 0 | YYYYY |
The OSCAR web REST API (https://oscar.wmo.int/surface/rest/api/search/station?) was used to obtain a list of all the WIGOS IDs available at the time of retrieval.
From this information, only the surface observations of the form 0-20000-0-YYYYY were used.
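A minimal sketch of this retrieval, mirroring the read_oscar_web function in the program below (the final print line is purely illustrative):

import json
import requests

# Query the OSCAR REST API; the response is a JSON list of station dictionaries.
# Stations that carry a WIGOS ID have a "wigosStationIdentifiers" entry.
r = requests.get("https://oscar.wmo.int/surface/rest/api/search/station?")
stations = json.loads(r.text)
print(len(stations), "stations retrieved")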
The last part of the WIGOS ID (the local identifier) matches the current BUFR message identifier (the concatenation of blockNumber and stationNumber) and is used to map the existing stations to their WIGOS IDs.
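For example, taking an illustrative surface station with WIGOS ID 0-20000-0-01001 (block 01, station 001):

# Split a WIGOS ID into its four parts and compare the local identifier with
# the ident built from blockNumber and stationNumber (values are illustrative).
wigosId = "0-20000-0-01001"
series, issuer, issue, localId = wigosId.split("-")

blockNumber, stationNumber = 1, 1                      # as decoded from the BUFR message
ident = "{0:02d}{1:03d}".format(blockNumber, stationNumber)

print(localId[-5:] == ident)                           # True: the local identifier gives the old station id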
2) Program description
'''
Created on 22 Oct 2019
# Copyright 2005-2018 ECMWF.
# This software is licensed under the terms of the Apache Licence Version 2.0
# which can be obtained at http://www.apache.org/licenses/LICENSE-2.0.
# In applying this licence, ECMWF does not waive the privileges and immunities
# granted to it by virtue of its status as an intergovernmental organisation
# nor does it submit to any jurisdiction
This is a test program to encode SYNOP messages with WIGOS identifiers.
Requires:
1) ecCodes version 2.8 or above (available at https://confluence.ecmwf.int/display/ECC/Releases)
2) Python 3.6
To run the program:
    -i <input BUFR file> -m <mode [web|json]> -l <log file> -o <output BUFR file>
Uses the BUFR edition 4 sample and adds the WIGOS identifier sequence 301150.
REQUIRES masterTablesVersionNumber 28 or above.
Author: Roberto Ribas Garcia, ECMWF, 28/10/2019
Modifications
Added copy_header function to keep the header keys from the input message 04/11/2019
'''
from eccodes import *
import argparse
import json
import re
import pandas as pd
import numpy as np
import logging
import requests
import os
def read_cmd_line():
    p=argparse.ArgumentParser()
    p.add_argument("-i","--input",help="input BUFR file")
    p.add_argument("-o","--output",help="output BUFR file with WIGOS ids")
    p.add_argument("-m","--mode",choices=["web","json"],help="WIGOS source [json file or web]")
    p.add_argument("-l","--logfile",help="log file")
    args=p.parse_args()
    return args
def read_oscar_json(jsonFile):
    with open(jsonFile,"r") as f:
        jtext=json.load(f)
    return jtext

def read_oscar_web(oscarURL="https://oscar.wmo.int/surface/rest/api/search/station?"):
    r=requests.get(oscarURL)
    jtext=json.loads(r.text)
    return jtext
def parse_json_into_dataframe(jtext):
    '''
    parses the JSON list of station dictionaries retrieved from OSCAR (jtext)
    filters the stations by the wigosStationIdentifiers key in the dictionaries
    '''
    wigosStations=[]
    nowigosStations=[]
    for d in jtext:
        if "wigosStationIdentifiers" in d.keys():
            wigosStations.append(d)
        else:
            nowigosStations.append(d)
    # uses only the wigos 0-20XXX-0-YYYYY (surface)
    p=re.compile(r"0-20\d{3}-0-\d{5}")
    fwigosStations=[]
    for d in wigosStations:
        wigosInfo=d["wigosStationIdentifiers"]
        for e in wigosInfo:
            if e["primary"]==True:
                wigosId=e["wigosStationIdentifier"]
                if p.match(wigosId):
                    wigosParts=wigosId.split("-")
                    d["wigosIdentifierSeries"]=wigosParts[0]
                    d["wigosIssuerOfIdentifier"]=wigosParts[1]
                    d["wigosIssueNumber"]=wigosParts[2]
                    d["wigosLocalIdentifierCharacter"]=wigosParts[3]
                    d["oldID"]=wigosParts[3][-5:]
                    fwigosStations.append(d)
    df=pd.DataFrame(fwigosStations)
    df=df[["longitude","latitude","name","wigosStationIdentifiers","wigosIdentifierSeries","wigosIssuerOfIdentifier","wigosIssueNumber",
           "wigosLocalIdentifierCharacter","oldID"]]
    return df
def get_ident(bid):
    '''
    gets the ident of the message by combining the blockNumber and stationNumber keys from the input BUFR file
    the ident may be single valued or multivalued (only single valued idents are considered further)
    '''
    ident=None
    if ( codes_is_defined(bid,"blockNumber") and codes_is_defined(bid,"stationNumber") ):
        blockNumber=codes_get_array(bid,"blockNumber")
        stationNumber=codes_get_array(bid,"stationNumber")
        if len(blockNumber)==1 and len(stationNumber)==1:
            ident="{0:02d}{1:03d}".format(int(blockNumber[0]),int(stationNumber[0]))
        elif len(blockNumber)==1 and len(stationNumber)!=1:
            blockNumber=np.repeat(blockNumber,len(stationNumber))
            ident=[str("{0:02d}{1:03d}".format(b,s)) for b,s in zip(blockNumber,stationNumber)
                   if b!=CODES_MISSING_LONG and s!=CODES_MISSING_LONG]
        elif len(blockNumber)!=1 and len(stationNumber)!=1:
            ident=[str("{0:02d}{1:03d}".format(b,s)) for b,s in zip(blockNumber,stationNumber)
                   if b!=CODES_MISSING_LONG and s!=CODES_MISSING_LONG]
    return ident
def copy_header(bid,obid):
    '''
    this function copies the header keys and avoids using the default values on the output message
    '''
    bhc=codes_get(bid,"bufrHeaderCentre")
    codes_set(obid,"bufrHeaderCentre",bhc)
    bhsc=codes_get(bid,"bufrHeaderSubCentre")
    codes_set(obid,"bufrHeaderSubCentre",bhsc)
    usn=codes_get(bid,"updateSequenceNumber")
    codes_set(obid,"updateSequenceNumber",usn)
    dc=codes_get(bid,"dataCategory")
    codes_set(obid,"dataCategory",dc)
    if codes_is_defined(bid,"internationalDataSubCategory"):
        idsc=codes_get(bid,"internationalDataSubCategory")
        codes_set(obid,"internationalDataSubCategory",idsc)
    dsc=codes_get(bid,"dataSubCategory")
    codes_set(obid,"dataSubCategory",dsc)
    year=codes_get(bid,"typicalYear")
    codes_set(obid,"typicalYear",year)
    month=codes_get(bid,"typicalMonth")
    codes_set(obid,"typicalMonth",month)
    day=codes_get(bid,"typicalDay")
    codes_set(obid,"typicalDay",day)
    hour=codes_get(bid,"typicalHour")
    codes_set(obid,"typicalHour",hour)
    tmin=codes_get(bid,"typicalMinute")
    codes_set(obid,"typicalMinute",tmin)
    sec=codes_get(bid,"typicalSecond")
    codes_set(obid,"typicalSecond",sec)
    return
def add_wigos_info(ident,bid,wdf,obid):
    '''
    add the wigos information to the message ident pointed by bid
    wdf is the whole wigos dataframe and obid is the output bid
    '''
    if codes_is_defined(bid,"shortDelayedDescriptorReplicationFactor"):
        shortDelayed=codes_get_array(bid,"shortDelayedDescriptorReplicationFactor")
    else:
        shortDelayed=None
    if codes_is_defined(bid,"delayedDescriptorReplicationFactor"):
        delayedDesc=codes_get_array(bid,"delayedDescriptorReplicationFactor")
    else:
        delayedDesc=None
    nsubsets=codes_get(bid,"numberOfSubsets")
    compressed=codes_get(bid,"compressedData")
    masterTablesVersionNumber=codes_get(bid,"masterTablesVersionNumber")
    if masterTablesVersionNumber<28:
        masterTablesVersionNumber=28
    unexpandedDescriptors=codes_get_array(bid,"unexpandedDescriptors")
    outUD=list(unexpandedDescriptors)
    # prepend the WIGOS identifier sequence 301150 to the descriptor list
    outUD.insert(0,301150)
    # only treat the uncompressed messages with 1 subset
    # for future: add treatment of compressed messages with more than 1 subset
    if compressed==0 and nsubsets==1:
        if shortDelayed is not None:
            codes_set_array(obid,"inputShortDelayedDescriptorReplicationFactor",shortDelayed)
        if delayedDesc is not None:
            codes_set_array(obid,"inputDelayedDescriptorReplicationFactor",delayedDesc)
        copy_header(bid,obid)
        codes_set(obid,"masterTablesVersionNumber",masterTablesVersionNumber)
        codes_set(obid,"numberOfSubsets",nsubsets)
        odf=wdf.query("oldID=='{0}'".format(ident))
        if not odf.empty:
            codes_set_array(obid,"unexpandedDescriptors",outUD)
            # odf is non empty here, take the first matching station
            wis=odf["wigosIdentifierSeries"].values[0]
            codes_set(obid,"wigosIdentifierSeries",int(wis))
            wid=odf["wigosIssuerOfIdentifier"].values[0]
            codes_set(obid,"wigosIssuerOfIdentifier",int(wid))
            win=odf["wigosIssueNumber"].values[0]
            codes_set(obid,"wigosIssueNumber",int(win))
            wlid=odf["wigosLocalIdentifierCharacter"].values
            wlid="{0:5}".format(wlid[0])
            logging.info(" wlid here {0}".format(wlid))
            codes_set(obid,"wigosLocalIdentifierCharacter",str(wlid))
            codes_bufr_copy_data(bid,obid)
        else:
            logging.info(" no wigos id found for ident {0}".format(ident))
    else:
        logging.info(" skipping compressed message id {0} with {1} subsets ".format(ident,nsubsets))
    return obid
def main():
    args=read_cmd_line()
    logfile=args.logfile
    logging.basicConfig(filename=logfile,level=logging.INFO,filemode="w")
    infile=args.input
    outfile=args.output
    mode=args.mode
    if mode=="web":
        jtext=read_oscar_web()
        cdirectory=os.getcwd()
        oscarFile=os.path.join(cdirectory,"oscar.json")
        with open(oscarFile,"w") as f:
            json.dump(jtext,f)
    else:
        cdirectory=os.getcwd()
        oscarFile=os.path.join(cdirectory,"oscar.json")
        with open(oscarFile,"r") as f:
            jtext=json.load(f)
    wigosDf=parse_json_into_dataframe(jtext)
    f=open(infile,"rb")
    nmsg=codes_count_in_file(f)
    fout=open(outfile,"wb")
    for i in range(0,nmsg):
        obid=codes_bufr_new_from_samples("BUFR4")
        bid=codes_bufr_new_from_file(f)
        codes_set(bid,"unpack",1)
        ident=get_ident(bid)
        if ident:
            logging.info(" \t message {0} ident {1} ".format(i+1,ident))
            add_wigos_info(ident,bid,wigosDf,obid)
            codes_write(obid,fout)
        else:
            logging.info("message {0} rejected ".format(i+1))
        codes_release(obid)
        codes_release(bid)
    f.close()
    fout.close()
    print(" finished")
if __name__ == '__main__':
    main()
The program can be called with the following arguments (a sample invocation is given after the list):
-i input BUFR file containing SYNOP messages without WIGOS IDs
-o output BUFR file that will contain the SYNOP messages with WIGOS IDs
-m mode: 'web' makes the program connect to the OSCAR server, 'json' makes it use a local JSON file containing the same information as the OSCAR server (this option was added to speed up development by avoiding repeated downloads of the OSCAR data)
-l log file to which the progress of the conversion is written
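A typical invocation, assuming the program above has been saved as add_wigos_synop.py (the file name is illustrative), is:

python3 add_wigos_synop.py -i synop_in.bufr -m json -l wigos.log -o synop_wigos.bufr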
The program flow is the following:
1) read the command line arguments
2) read the OSCAR information from the web or from a JSON file and store it in a pandas DataFrame that helps with the mapping. The two functions read_oscar_web and read_oscar_json return a JSON list of dictionaries,
which is filtered to retain only the surface observations (issuer of identifier 20000). A pandas DataFrame is then used to store this information and to make querying it easier.
3) open the input BUFR file and read each individual message
4) for each message, build the message identifier (concatenation of blockNumber and stationNumber) and add the WIGOS information to the messages
that are uncompressed (compressedData=0) and single subset (numberOfSubsets=1), provided their ident matches an entry in wigosDf (see the sketch after this list)
5) the copy_header function was added to avoid changing the header of the message; it copies the header keys from bid to obid except typicalDate, which is read only
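A minimal sketch of the lookup performed in step 4, using an invented single-row DataFrame in place of the one built by parse_json_into_dataframe:

import pandas as pd

# One invented OSCAR entry; the real DataFrame holds one row per surface station.
wigosDf = pd.DataFrame([{
    "name": "EXAMPLE STATION",
    "wigosIdentifierSeries": "0",
    "wigosIssuerOfIdentifier": "20000",
    "wigosIssueNumber": "0",
    "wigosLocalIdentifierCharacter": "01001",
    "oldID": "01001",
}])

blockNumber, stationNumber = 1, 1                       # as decoded from the BUFR message
ident = "{0:02d}{1:03d}".format(blockNumber, stationNumber)

odf = wigosDf.query("oldID=='{0}'".format(ident))       # same query as in add_wigos_info
if not odf.empty:
    print(odf["wigosLocalIdentifierCharacter"].values[0])   # -> 01001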
During program execution a log file is generated containing information about the processing.
At this point some caveats are needed:
- Only uncompressed messages (compressedData=0) with a single subset (numberOfSubsets=1) are considered.
- The OSCAR information retrieved from the web server has to be cleaned before this program can use it. This is the purpose of the parse_json_into_dataframe function, which uses a regular expression to select the surface-station WIGOS IDs.
- When setting the WIGOS information it is important to preserve the data types; for example, wigosLocalIdentifierCharacter is a character string, while the other three parts are set as integers (see the sketch after this list).
- The masterTablesVersionNumber must be 28 or above, otherwise the WIGOS IDs cannot be added. This is handled in the add_wigos_info function, which updates the table version number for each message processed.
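To illustrate the data types, a minimal encoding sketch (not the full program; the values are illustrative and only the WIGOS sequence is set):

from eccodes import codes_bufr_new_from_samples, codes_set, codes_release

obid = codes_bufr_new_from_samples("BUFR4")
codes_set(obid, "masterTablesVersionNumber", 28)            # 28 or above so that 301150 is known
codes_set(obid, "unexpandedDescriptors", 301150)            # in the real program it is prepended to the existing descriptors
codes_set(obid, "wigosIdentifierSeries", 0)                 # integer
codes_set(obid, "wigosIssuerOfIdentifier", 20000)           # integer
codes_set(obid, "wigosIssueNumber", 0)                      # integer
codes_set(obid, "wigosLocalIdentifierCharacter", "01001")   # character string
codes_release(obid)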
Results
The output file contains 22724 messages.