Model Evaluation Suite

The model evaluation suite is a set of Python scripts that compare model outputs with observations. This page briefly describes the workflow of the suite. Please note that the suite uses the monthly observation archive residing at vayu:/short/p66/bxh599/Eval/DataStore or dcc:/projects/ua4; it does not currently support user-specified observations. For details of what is available in the current observation archive, please contact [email protected]

You can use this software package by simply copying the following directories and running the commands shown in the step-by-step walkthrough below.

vayu:/short/p66/bxh599/EvalPackage/CMIP5_model_eval/CMIP5_model_eval
vayu:/short/p66/bxh599/EvalPackage/CMIP5_model_eval/um2netcdf-bh
dcc:/projects/ua4/CMIP5_model_eval
dcc:/projects/ua4/um2netcdf-bh
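
For example, on vayu you might copy the package into a working directory of your choice ($somedir below stands for any directory you control):

mkdir -p $somedir
cp -r /short/p66/bxh599/EvalPackage/CMIP5_model_eval/CMIP5_model_eval $somedir
cp -r /short/p66/bxh599/EvalPackage/CMIP5_model_eval/um2netcdf-bh $somedir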

Step 1: convert model output files to netCDF format

$somedir/um2netcdf-bh/exe.py [-d outputdirectory] inf1 inf2 inf3
# You must list every file you wish to convert in the arguments, as the output files are written on a per-variable basis.
# Skip this step if your model files are already in netCDF format. However, you will still need to check that your files satisfy the CMIP5 standards and that they contain one variable per file.
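
For example, to convert a set of UM output files into netCDF (the input file names and output directory here are purely illustrative):

$somedir/um2netcdf-bh/exe.py -d ~/eval_in aiihca.paa1jan aiihca.paa1feb aiihca.paa1mar

If your files are already in netCDF, a minimal Python sketch along the following lines can help check the one-variable-per-file requirement. It assumes the netCDF4 library is installed and simply treats any variable that does not share a name with a dimension as a data variable, so auxiliary variables such as bounds may trigger a spurious warning:

import sys
from netCDF4 import Dataset

for path in sys.argv[1:]:
    nc = Dataset(path)
    # variables that share a name with a dimension are coordinate variables
    data_vars = [v for v in nc.variables if v not in nc.dimensions]
    print('%s: %s' % (path, ', '.join(data_vars)))
    if len(data_vars) > 1:
        print('  warning: more than one data variable in this file')
    nc.close()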

Step 2: conduct the model evaluation

$somedir/CMIP5_model_eval/CMIP5_exe.py [-d dataset1,dataset2,...] [-v var1,var2,...] [-f global|land|sea] [-o outdir] modelid modeldir
# You can specify the datasets you wish to compare by using the -d option; the default is to compare against all datasets.
# Datasets in the archive:
# 20CRanl - 20th century reanalysis
# 20CRfcst - 20th century forecast
# CFSR 
# CMAP
# COREv2
# ERA40
# ERA40c - Corrected ERA40 precipitation
# ERA_INT
# ERSST
# GISS
# GPCP
# HADCRU
# Hadisst
# HOAPS
# ISCCP
# JRA25anl - JRA25 analysis
# JRA25fcst - JRA25 forecast
# Merra
# NCEP - NCEP on Gaussian grid
# NCEP2 - NCEP2 on Gaussian grid
# NCEP2g25 - NCEP2 on 2.5° regular grid
# NCEPg25 - NCEP on 2.5° regular grid
# OAFLUX
# SOC
# SRB
# You can specify the variables you wish to compare by using the -v option, which defaults to all variables.
# Available variables:
# atas hus prw psl ta tas ua uas va vas
# zg alb clh cll clm clt hfls hfss huss pr 
# prc ps rlds rlus rlut rsds rsus rsut prls rldscs
# rluscs rlutcs rsdscs rsdt rsuscs rsutcs wap dps evspsbl rls
# rlt rss rst sic tcw ts emss rldsos rlusos rlutos 
# rsdsos rsusos rsutos
# You can restrict the comparison to global, land-only, or sea-only datasets by using the -f option.
# modelid is the unique id of your model run.
# modeldir is where your model files (in netCDF format) are stored, either from step 1 or from a location of your own.
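
For example, a run comparing surface air temperature and precipitation against ERA40 and GPCP over land only (the model id and paths are illustrative):

$somedir/CMIP5_model_eval/CMIP5_exe.py -d ERA40,GPCP -v tas,pr -f land -o ~/eval_out myrun001 ~/eval_in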

The end results (currently RMS scores of the month-to-month comparison, annual-cycle climate, seasonal climate, and climatology) are stored on NCI's MySQL database server and can be accessed from any of the NCI machines.

The user will require a MySQL login to access the data on the MySQL server. If you would like one, please contact Ben Hu ([email protected]) or Lawrie Rikus ([email protected]). (No account is needed to run the script. However, the user is automatically identified as the owner of his/her specific model id when the script creates the score entries in the database. This mechanism ensures that no one else can modify the data and hence provides the user with some form of data security. The MySQL login created for the user is merely for viewing and extracting the scores; any write/modify privilege is still restricted.)
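
Once a MySQL login has been issued, the scores can be extracted with any MySQL client. The sketch below shows one possible approach in Python using the mysql-connector-python package; the host, database, table, and column names are placeholders (the actual schema is not documented on this page), so substitute the details supplied with your login.

import mysql.connector

# All connection and schema details below are illustrative placeholders;
# replace them with the values supplied with your MySQL login.
conn = mysql.connector.connect(
    host='mysql.example.nci.org.au',  # placeholder host name
    user='your_login',
    password='your_password',
    database='model_eval',            # placeholder database name
)
cur = conn.cursor()
# 'scores' and 'modelid' are assumed names for the score table and its key
cur.execute('SELECT * FROM scores WHERE modelid = %s', ('myrun001',))
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()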

A more comprehensive software solution for data extraction is still under development.