3b. Run unforced model. Constant stratification - NOC-MSM/SEAsia GitHub Wiki
This is a test of the numerics and grid discretisation. Any discretisation of a continuous system involves approximations and trade-offs. The most basic type of vertical coordinate system is z-levels, which are constructed to be orthogonal to gravity. They are a natural choice for a hydrostatic fluid, where the vertical acceleration is set to zero and the vertical pressure gradients are conveniently separated from the much weaker, but circulation-driving, lateral pressure gradients. This separation is geometrically enforced because the two types of pressure gradient force are applied across vertical and lateral cell faces respectively. However, in these regional configurations we choose variants of terrain-following coordinates in order to better represent shallow-water processes, since they preserve resolution in both shallow and deep environments. A shortcoming of terrain-following coordinates is that the vertical and horizontal pressure gradients are no longer cleanly separated across the cells of the numerical grid. Numerical errors in separating the hydrostatic component of the pressure gradient from the components that can accelerate the flow can drive spurious currents. Similarly, in a z-level coordinate system horizontal diffusion is often used to stabilise the numerics with little effect on the dynamics; with sloping coordinates, however, along-cell diffusion of salt and heat can raise or lower the centre of gravity of a cell, which in turn establishes a horizontal pressure gradient and drives a spurious flow. See \citep{Wise21} for a more detailed study of the effects of different vertical discretisations. In this test we seek to evaluate the significance of these spurious flows, which we attribute to horizontal pressure gradient errors.
A climatological stratification is obtained from the WOCE climatology\footnote{\url{http://icdc.cen.uni-hamburg.de/las/getUI.do?dsid=id-3c81c78c2b&catid=9B3B992E6DA90BD12ED57F84FE8364BC&varid=TEMP-id-3c81c78c2b&plot=XY_zoomable_image&view=xy&auto=true}} for a representative location (e.g. -2N, 95E). Horizontally homogeneous temperature and salinity fields are then generated as the initial conditions by mapping the profiles into hybrid sigma space. This is achieved by hardcoding the climatological profiles into the compiled NEMO code (via the \verb|usrdef_istate.F90| routine), wherein a cubic spline is fitted to the data and used to map the fields onto the target hybrid sigma vertical grid.
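The remapping step can be illustrated with a minimal Python sketch. The actual NEMO routine (\verb|usrdef_istate.F90|) fits a cubic spline in Fortran; plain linear interpolation stands in here, and the profile values and target depths below are invented for illustration only:

```python
# Sketch: map a climatological T profile onto a target (hybrid sigma) depth grid.
# The real usrdef_istate.F90 uses a cubic spline; linear interpolation stands in.

def interp_profile(src_depths, src_values, tgt_depths):
    """Linearly interpolate a profile (depths increasing) onto target depths."""
    out = []
    for z in tgt_depths:
        if z <= src_depths[0]:
            out.append(src_values[0])        # clamp above shallowest datum
        elif z >= src_depths[-1]:
            out.append(src_values[-1])       # clamp below deepest datum
        else:
            # find the bracketing source levels and blend linearly
            for i in range(len(src_depths) - 1):
                z0, z1 = src_depths[i], src_depths[i + 1]
                if z0 <= z <= z1:
                    w = (z - z0) / (z1 - z0)
                    out.append(src_values[i] * (1 - w) + src_values[i + 1] * w)
                    break
    return out

# Invented WOCE-like temperature profile (depth in m, T in degC)
woce_z = [0.0, 100.0, 500.0, 1000.0, 2000.0]
woce_t = [29.0, 22.0, 9.0, 5.0, 2.5]

# Invented hybrid-sigma target depths at one water column
sigma_z = [5.0, 50.0, 300.0, 750.0, 1500.0]
print(interp_profile(woce_z, woce_t, sigma_z))
```

Because the source fields are horizontally homogeneous, the same 1-D mapping is applied at every water column, however the local hybrid sigma depths vary with bathymetry.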

\caption{PLACE HOLDER - FIG NOT CORRECT. Unforced 60 day simulation from rest to assess the magnitude of the spurious currents arising from the vertical discretisation. Upper panel: time series of velocity evolution. Lower panel: map of velocity magnitude. THESE FIGS WERE FROM ORCA12 INITCD NOT CLIM PROFILES. LOWER PANEL SHOWN IS SSH NOT VEL}
In this experiment an unforced ocean is initialised from rest with horizontally uniform stratification. Any velocities that develop are the result of model errors: either i) genuine code bugs (hopefully zero), or ii) numerical errors associated with the model levels following the terrain rather than the density surfaces.
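A simple scalar measure of the spurious flow is the domain-maximum speed. The sketch below computes it from invented stand-in u and v arrays (reading the actual NEMO NetCDF output is not shown):

```python
import math

# Invented 2x3 u and v fields (m/s) standing in for model output at one level
u = [[0.001, 0.002, 0.000],
     [0.004, 0.001, 0.003]]
v = [[0.000, 0.001, 0.002],
     [0.003, 0.000, 0.001]]

# Speed at each point, then the domain maximum: a scalar measure of how
# strongly the pressure gradient errors are accelerating the fluid.
speed = [[math.hypot(uij, vij) for uij, vij in zip(urow, vrow)]
         for urow, vrow in zip(u, v)]
max_speed = max(max(row) for row in speed)
print(f"max spurious speed: {max_speed:.4f} m/s")  # -> 0.0050 m/s here
```

Tracking this number through the 60-day run shows whether the spurious currents saturate at a negligible level or grow.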
The model can be initialised with an idealised stratification either by prescribing it as an initial condition, or by compiling it into the code as an analytic function. Here we do the latter. This is done by editing the CONFIG variable in make_paths.sh and repeating the NEMO build process:
export CONFIG=NEMOhorizTS # NEMO exec with hardwired horizontally uniform T,S
Then compile NEMO. The build will select and compile the prescribed stratification given in $NEMO/cfgs/$CONFIG/MY_SRC/usrdef_istate.F90
cd SCRIPTS
. ./make_nemo.sh
The executable is stored in $NEMO/cfgs/$CONFIG/BLD/bin/nemo.exe
(Previously NEMO was compiled with CONFIG=NEMOconstTS, which hard-wires constant T and S when activated with ln_usr=.true.)
The boundary conditions are turned off.
Run the experiment from the SCRIPTS folder:
cd SCRIPTS
. ./run_unforced_horizTS.sh
NB the run_unforced_horizTS.sh script assumes you can submit jobs to the n01-ACCORD ARCHER2 account. Edit submit.slurm accordingly.
The first time through for a new configuration, don't specify the processor decomposition, as an optimal suggestion can be calculated at runtime. In namelist_cfg:
!-----------------------------------------------------------------------
&nammpp ! Massively Parallel Processing ("key_mpp_mpi")
!-----------------------------------------------------------------------
... ! if T: the largest number of cores tested is defined by max(mppsize, jpni*jpnj)
ln_nnogather = .true. ! activate code to avoid mpi_allgather use at the northfold
jpni = -1 ! jpni number of processors following i (set automatically if < 1)
jpnj = -1 ! jpnj number of processors following j (set automatically if < 1)
Also check that you are only running a few timesteps, as this is otherwise a waste of CPU:
!-----------------------------------------------------------------------
&namrun ! parameters of the run
!-----------------------------------------------------------------------
nn_no = 0 ! Assimilation cycle index
cn_exp = "SEAsia_unforced"
nn_it000 = 1 ! first time step
nn_itend = 10 ! last time step
After running, inspect ocean.output to find the recommended processor decomposition and update the namelist accordingly. For example (as in the repository):
!-----------------------------------------------------------------------
&nammpp ! Massively Parallel Processing ("key_mpp_mpi")
!-----------------------------------------------------------------------
... ! if T: the largest number of cores tested is defined by max(mppsize, jpni*jpnj)
ln_nnogather = .true. ! activate code to avoid mpi_allgather use at the northfold
jpni = 41 ! jpni number of processors following i (set automatically if < 1)
jpnj = 23 ! jpnj number of processors following j (set automatically if < 1)
Now also update the number of time steps to suit your simulation (e.g. 1 month with a timestep, rn_rdt, of 360 s):
nn_itend = 7200 ! (1 month)
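The arithmetic behind nn_itend is simply the run length in seconds divided by the timestep; a quick sketch (taking a "month" as 30 days):

```python
# nn_itend = seconds in the run / rn_rdt.  A "month" here is taken as 30 days.
rn_rdt = 360.0            # model timestep in seconds (from namelist_cfg)
run_days = 30             # desired run length in days
nn_itend = int(run_days * 24 * 3600 / rn_rdt)
print(nn_itend)           # 7200, matching the namelist value above
```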
Having configured namelist_cfg with the appropriate processor decomposition and run length, submit the run script again.