3. Download CMEMS Data

Downloading daily Copernicus (CMEMS) data

This page details how to download daily CMEMS data for each year. As the bulk of the CMEMS data is used for PyNEMO, we download it using one of the local livljobs servers.

1. Create motu environment

If you do not already have one, create an account on Copernicus. CMEMS data can be downloaded using the motu client, for which you need to create a Python environment. Load the latest anaconda module on livljobs (here anaconda/5-2021) and create an environment. Note that the motu client only works with Python 3, so make sure the environment you create uses it.

module load anaconda/5-2021
conda create --name motu_env
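
If you want to be sure the environment uses Python 3, you can pin the version at creation time instead (3.9 here is only an illustrative choice, not a requirement of this wiki):

conda create --name motu_env python=3.9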

Activate your environment and install the motu client:

conda activate motu_env

conda install pip
pip install motuclient

Note that you can activate and deactivate your environment using conda activate motu_env and conda deactivate.

2. Download CMEMS data

For SRIL34, you will need to download:

- temperature
- salinity
- sea surface height
- U velocity component
- V velocity component

from the Global Reanalysis product (GLOBAL_REANALYSIS_PHY_001_030).

An example of how to download an individual day (e.g. 1st Jan 1993) is:

python -m motuclient --motu http://my.cmems-du.eu/motu-web/Motu --service-id GLOBAL_REANALYSIS_PHY_001_030-TDS --product-id global-reanalysis-phy-001-030-daily --longitude-min 50 --longitude-max 115 --latitude-min -10 --latitude-max 30 --date-min "1993-01-01 12:00:00" --date-max "1993-01-01 12:00:00" --depth-min 0.493 --depth-max 5727.918000000001 --variable thetao --variable so --variable zos --out-name "CMEMS_1993_01_01_download.nc" --user USERNAME --pwd PASSWORD

USERNAME and PASSWORD are your individual credentials that you use to log in to your Copernicus account.
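
The U and V velocity components are requested from the same product in a separate download. A sketch of the equivalent request for 1st Jan 1993 is below; the variable names uo and vo follow the standard CMEMS naming for this product, but check them against the product catalogue:

python -m motuclient --motu http://my.cmems-du.eu/motu-web/Motu --service-id GLOBAL_REANALYSIS_PHY_001_030-TDS --product-id global-reanalysis-phy-001-030-daily --longitude-min 50 --longitude-max 115 --latitude-min -10 --latitude-max 30 --date-min "1993-01-01 12:00:00" --date-max "1993-01-01 12:00:00" --depth-min 0.493 --depth-max 5727.918000000001 --variable uo --variable vo --out-name "CMEMS_1993_01_01_UV_download.nc" --user USERNAME --pwd PASSWORD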

To download an entire year, use the script download_CMEMS.sh (for salinity, temperature and SSH) and the script download_CMEMS_UV.sh (for the U and V velocity components). These are modified versions of annkat's SEAsia scripts for downloading CMEMS data. Edit the scripts for the year you want to download, and they will automatically download each day in that year (NOTE: they do not include 29th Feb in leap years, so that day has to be downloaded separately).

At the bottom of each script there are two more daily data requests, for the last day of the previous year and the first day of the following year (e.g. for 1993 there will be extra requests for 31st Dec 1992 and 1st Jan 1994). Manually edit these dates to the ones you want to download.
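
For reference, the yearly loop in these scripts amounts to something like the following. This is an illustrative sketch built around the example request above, not a copy of the repository scripts:

YEAR=1993
DAYS=(31 28 31 30 31 30 31 31 30 31 30 31)   # Feb is left at 28 days, matching the scripts' behaviour
for M in $(seq -w 1 12); do
  for D in $(seq -w 1 ${DAYS[$((10#$M - 1))]}); do
    python -m motuclient --motu http://my.cmems-du.eu/motu-web/Motu \
      --service-id GLOBAL_REANALYSIS_PHY_001_030-TDS \
      --product-id global-reanalysis-phy-001-030-daily \
      --longitude-min 50 --longitude-max 115 --latitude-min -10 --latitude-max 30 \
      --date-min "${YEAR}-${M}-${D} 12:00:00" --date-max "${YEAR}-${M}-${D} 12:00:00" \
      --depth-min 0.493 --depth-max 5727.918000000001 \
      --variable thetao --variable so --variable zos \
      --out-name "CMEMS_${YEAR}_${M}_${D}_download.nc" \
      --user USERNAME --pwd PASSWORD
  done
done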

Create the directories in which you want to store the data, e.g. mkdir CMEMS_data, and run both scripts separately to download the data. You can either run the scripts manually (one at a time, not simultaneously!) or run the script Pynemo_workflow_1.sh:

. ./Pynemo_workflow_1.sh

This runs download_CMEMS.sh and download_CMEMS_UV.sh in turn and stores the output files in your directory. Make sure you are in the directory in which the data are to be stored, e.g. CMEMS_data. The script also generates two text files (Download1.txt and Download2.txt) which capture the terminal messages from each download script; these make it easier to check whether any data are missing, as shown in the next step.
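
For orientation, the wrapper amounts to something like this (an assumed sketch of its structure, not the actual repository script):

#!/bin/bash
# Run the two download scripts one after the other (never simultaneously)
# and capture the terminal messages of each in its own log file.
./download_CMEMS.sh > Download1.txt 2>&1
./download_CMEMS_UV.sh > Download2.txt 2>&1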

ATTENTION: Downloading CMEMS data takes a long time (>5 hrs per year). The Copernicus system can only cope with one data request per user at a time, so do not run both scripts together.

3. Checking for data omissions

There have been recurring issues with files missing from the downloads. To spot which files are missing, you can use the *.txt log files for the CMEMS downloads.

First, navigate to the directory you stored the temp, sal and ssh files in, e.g.:

cd CMEMS_data

Then see how many files have been downloaded using the command:

ls -l . | egrep -c '^-'

There should be 367 files in the directory (368 if it is a leap year): 365 (or 366) daily files, plus the two extra days from the adjacent years. If any files are missing, they will need to be downloaded individually; a sketch of a check that lists the missing dates is given below.
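
One way to list exactly which dates are missing is to test for each expected filename in turn. This sketch assumes the CMEMS_YYYY_MM_DD_download.nc naming used above and GNU date, and only covers the year itself (check the two adjacent-year files by eye):

YEAR=1993
D="${YEAR}-01-01"
while [ "$(date -d "$D" +%Y)" = "$YEAR" ]; do
  # Build the expected filename for this date and report it if absent.
  F="CMEMS_$(date -d "$D" +%Y_%m_%d)_download.nc"
  [ -f "$F" ] || echo "missing: $F"
  D=$(date -d "$D + 1 day" +%F)
done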

The files Download1.txt (CMEMS temp, sal, ssh) and Download2.txt (U, V) contain the log of every download request. Search them for any dates that have an ERROR. Do this for all years and manually download the missing files into the appropriate directory.
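
For example (the exact wording of the log lines depends on the motuclient version, so adjust the pattern if needed):

grep -i -B 2 "ERROR" Download1.txt Download2.txt

The -B 2 prints a couple of lines of context before each match, which usually shows the date of the failed request.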

NOTE: Missing files usually bunch up, so if one file is missing you are likely to find consecutive omissions.
