
The ASGS Developers Guide

Jason Fleming [email protected], v2.0, March 2015, Seahorse Coastal Consulting


Introduction

The ADCIRC Surge Guidance System (ASGS) is a portable, geographically agnostic software system for generating storm surge and wave guidance from ADCIRC + SWAN in real time on high resolution grids.

It derives meteorological forcing either from gridded wind fields (e.g., NCEP’s NAM model) or, for tropical cyclones, from a parametric wind / pressure model using storm parameters extracted from the text of the National Hurricane Center's Forecast/Advisories.

For tropical cyclones the system is also configured to run an ensemble of storms comprised of a predefined set of perturbations to the NHC’s consensus forecast. The ASGS can be configured to accept river flow forecasts as input as well.

The system runs successfully on a wide range of high performance computing platforms including those at the Texas Advanced Computing Center (Univ Texas), the US Army Corps of Engineers Engineering Research and Development Center (ERDC), LONI and LSU, and several HPC platforms / clusters at the University of North Carolina at Chapel Hill.

The ASGS has been run on several grids covering the western North Atlantic, Caribbean Sea and Gulf of Mexico with high resolution areas in the northern and western Gulf of Mexico, in North Carolina and in Maryland and Virginia.

This document provides (i) a tour of the source, with brief explanations of the function of each major source code file; (ii) installation requirements and procedure; (iii) details of the vortex meteorological model embedded in ADCIRC; (iv) the guidelines that ADCIRC Surge Guidance System developers should use when developing new code for ASGS and its related utilities; and (v) a short summary of the development workflow employed by ASGS on Github.

The ASGS Shell Environment

To improve portability and ease development efforts directed towards supporting a variety of platforms, the ASGS Shell Environment was introduced. The best way to describe it is as a virtual environment layered on top of one's existing user environment. This is accomplished by providing a shell, called asgsh, that implements all of the necessary environmental adjustments. It is built on top of the existing environment, but it replaces the user's environment for the entirety of the interactive session in asgsh. This is in contrast to something like modules, which attempts to augment the current environment.

The ASGS Cheatsheet (add link) is the primary summary of commands any operator or user of ASGS should study to get an idea of how to set up ASGS and fire up an instance of asgsh. More information on this environment is also available in the ASGS Installation (add link) document, which covers the details of the main installer script, asgs-brew.pl. More information on the installer is contained in the later section on Porting.

A Note About the Software Environment and Programming Language Decisions

ASGS is written primarily as a combination of Bash, Perl, and Fortran. There is a smattering of other languages represented throughout, but anyone who is reading this should be comfortable with this fact before continuing. The decision to use this software stack has served the project very well over nearly two decades and has resulted in the saving of lives, property, and precious time during real life storms where the ability to make decisions quickly and accurately is the priority.

For more information on this aspect of ASGS development, please view the file, $SCRIPTDIR/DESIGN-DESCRIPTION.

The Guided Tour

The following is a list of directories and files that are provided with the ASGS along with a description of their purpose and function.

base directory - $SCRIPTDIR

The root of the ASGS directory structure contains the scripts that provide the main features of the ASGS.

When ASGS is installed, the base directory is recorded and stored in the ASGS Shell Environment (asgsh) in the environmental variable $SCRIPTDIR. This directory is the literal base of operations, not just the install root of ASGS. Operators should become familiar with the shortcut used to cd to this directory from anywhere on the file system while under asgsh; this shortcut is sd.
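For illustration, a typical interactive exchange inside asgsh might look like the following (the directory names are placeholders):

cd /tmp/some/working/directory   # wander off somewhere else on the file system
sd                               # jump straight back to $SCRIPTDIR
pwd                              # prints the value of $SCRIPTDIR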

asgs_main.sh

This file is a Unix shell script, written in bash.

The script that is executed to activate the ASGS is called asgs_main.sh. It is usually not run directly; if $ASGS_CONFIG is defined, the ASGS Shell Environment provides the run command, which is equivalent to ./asgs_main.sh -c $ASGS_CONFIG.

asgs_main.sh's main function is to perform a hindcast if needed, then move to a nowcast/forecast loop. It contains code that controls the preparation of new runs, sets up computational jobs, submits them, and monitors them. It also contains a small section that contains conditionally executed system-specific configuration information.

All of the code that actually does the specialized work, e.g., downloading meteorological data, constructing control files, generating visualizations of results, etc., is contained in an array of external scripts. Some of these scripts are hard-coded into asgs_main.sh, some are defined via environmental variables (e.g., $QSCRIPTGEN), and some are defined as members of a set of tools that are run at well defined hook points in the execution life cycle of the asgs_main.sh script (e.g., $POSTPROCESS). How these scripts and external tools are wired into asgs_main.sh is driven by practical need rather than by any particular development ideology.

The asgs_main.sh script is the most central and most important one in the ASGS. However, it should be noted that the script is focused very narrowly on driving the original purpose of ASGS: the coupling of ADCIRC and SWAN during real-life tropical cyclone events.

The following sections describe scripts that are required by asgs_main.sh, roughly in order of importance; all are required in one way or another.

control_file_gen.pl

This script, as the extension suggests, is written in Perl.

ADCIRC uses a file named fort.15 to define configuration for model runs. SWAN uses a similar file named fort.26. The fort part of these file names comes from the default file naming scheme employed by Fortran programs, which both ADCIRC and SWAN are.

The ADCIRC fort.15 and SWAN fort.26 (run control) files are generated by control_file_gen.pl. This script accepts a wide array of input arguments that describe the sort of run the Operator has configured, including the use of tidal forcing, wave coupling, time step size, desired output file format (ascii vs netcdf), and many others.

Other input arguments to this script provide state information, e.g., whether the model is to be hotstarted, the current simulation time, the advisory number from the latest National Hurricane Center Forecast/Advisory (in the case of hurricane-related runs), etc.

When the script runs, it combines the control file templates with the input arguments, a few external programs (for generating tides, etc.), and the already-generated meteorological forcing files (if any) to produce control files for ADCIRC or ADCIRC+SWAN, along with metadata in the run.properties file.

Variables related to control file generation that are used by asgs_main.sh throughout the execution of ASGS are:

CONTROLTEMPLATE
CONTROLTEMPLATENOROUGH
CONTROLPROPERTIES

These variables are usually defined for each mesh used, most often in the file, config/mesh_defaults.sh. A user defined configuration file almost never needs to define these directly.
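As a minimal sketch of what such a definition might look like in config/mesh_defaults.sh (the file names below are hypothetical placeholders, not templates actually shipped with ASGS):

# hypothetical mesh entry; real names come from config/mesh_defaults.sh
CONTROLTEMPLATE=my_mesh_fort.15.template            # fort.15 template for this mesh
CONTROLTEMPLATENOROUGH=my_mesh_norough_fort.15.template
CONTROLPROPERTIES=my_mesh_fort.15.properties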

get_atcf.pl

ASGS' primary use case is to automatically trigger an ensemble of ADCIRC (+SWAN when WAVES=ON) simulations when the National Hurricane Center issues a new storm advisory. The settings in an $ASGS_CONFIG file that govern this mode are:

TROPICALCYCLONE=YES # tells ASGS to run get_atcf.pl in a loop
STORM=NN            # NHC's storm number, 01 to #storms
YEAR=YYYY           # current year (when used for live events) 
BACKGROUNDMET=NO    # important to turn this off

The get_atcf.pl script is used by the ASGS to monitor the RSS feed from the National Hurricane Center for new Forecast/Advisories and to download the latest advisory the moment it becomes available. It does this by parsing the posted RSS XML file, and when the advisory number has been updated by the NHC, get_atcf.pl follows the link to the HTML file containing the actual advisory information that gets extracted.

After downloading a new Forecast/Advisory from the National Hurricane Center website, the script then downloads the latest hindcast (BEST track) file, a .dat file containing observational records in the ATCF fixed-column format, for the same storm from NHC's anonymous ftp site. To be clear, the BEST track (.dat) file is not a forecast. Both files are required for generating an ADCIRC fort.22 (meteorological forcing) file, and both are used for hindcasting, since ASGS determines the gap between the last hindcast and the present time. Forecasts must start only after the last observational data known to ASGS.

Once an ADCIRC forecast is complete, and the next advisory has not yet been posted by the NHC, the script waits 60 seconds before checking the RSS site again. It continues checking every 60 seconds until the next advisory is posted by the NHC.

The script is called get_atcf.pl because the so-called BEST track file format (the format of the files that the script downloads from the anonymous ftp site) was designed by the Automated Tropical Cyclone Forecasting system (ATCF). Please see the section below that discusses the ADCIRC parametric vortex models for more details on this format, including a link to the online documentation.

More information on ATCF may be read at https://ftp.nhc.noaa.gov/atcf/docs/NRL_doc_ATCFdatabase.html

This script is written in Perl.

nhc_advisory_bot.pl

The nhc_advisory_bot.pl script takes the HTML file created by get_atcf.pl script and parses the forecast block for all the information required to write an ATCF formatted forecast file ("OFCL"). It then generates the ATCF formatted file with the forecast parameters; this file could be used directly by one of ADCIRC's internal parametric vortex models, but ASGS uses the official OFCL file for an additional step below, to generate the track permutations required by the specification in the $ASGS_CONFIG file.

This script is written in Perl.

storm_track_gen.pl

The storm_track_gen.pl script takes the ATCF-formatted file generated by nhc_advisory_bot.pl (representing the forecast), the ATCF BEST track file downloaded by get_atcf.pl (representing the hindcast), and the current simulation state (via the hotstart file produced by previous model runs), and constructs an ADCIRC fort.22 (meteorological forcing) file that contains the latest observational storm information, augmented with the forecast data provided in the latest NHC advisory.

This script accepts optional arguments that can be used to vary both the intensity and the veering of the track (e.g., veerRight or veerLeft), expressed as a percentage of the distance between the consensus track and the edge of the cone of uncertainty. Other variations include the radius to maximum winds (expressed as a percentage of the radius to maximum winds determined from the input), the overland speed of the storm (to speed it up or slow it down), and the maximum wind speed. These variations are configured in $ASGS_CONFIG and can be used to explore various "what-if" scenarios stemming from a single NHC forecast advisory, as they are issued during tropical cyclone events.

This script is written in Perl.

get_nam.pl

This script is used to trigger ASGS during non-tropical events. This operational mode is generally called daily mode, and is used to provide ADCIRC surge forecasts in a mode suitable for reporting on the evening news. The $ASGS_CONFIG settings that govern this mode are:

TROPICALCYCLONE=NO          # tells ASGS not to watch the NHC for advisories
STORM=NN                    # ignored 
YEAR=YYYY                   # ignored 
BACKGROUNDMET=YES           # important to turn this on
FORECASTCYCLE="00,06,12,18" # in UTC (Z), tells ASGS the NAM forecasts to watch

The get_nam.pl script is used by the ASGS to monitor NCEP's anonymous ftp site for newly released output from their NAM (North American Mesoscale) atmospheric model. The WRF-NMM (Weather Research and Forecasting, Non-hydrostatic Mesoscale Model) is used to produce NAM results.

The script accepts the current time of the ADCIRC simulation as input, and downloads the latest available NAM data, provided that the latest available data are later than the current ADCIRC simulation time. If there are no NAM data available that postdate the current ADCIRC simulation time, the script prints an informational message and exits. The calling routine (in asgs_main.sh) then retries the download every 60 seconds until new data have been discovered based on the trigger forecasts set via FORECASTCYCLE; once detected, they are downloaded and the ADCIRC(+SWAN) forecast is prepared and run (or submitted).

It should be noted that ASGS does not permute the NAM forecasts the way it does NHC storm tracks; it provides the surge forecast that complements the provided NAM forecasts.

It is also notable that for weak or early tropical systems, ASGS is often run in this mode. Once the storm has become better defined and the NHC has started issuing advisories, operations are generally switched over to the NHC tracks with the internal wind models contained within ADCIRC.

The get_nam.pl script relies on the "wgrib2" executable to perform basic sanity checks on the grib2 files that it downloads. This executable is built and provided within the ASGS Shell Environment during installation, driven by asgs-brew.pl, which itself is often run via init-asgs.sh. More on this process will be provided further down in this document.

This script is written in Perl.

NAMtoOWI.pl

The NAM output files in grib2 format that were downloaded by get_nam.pl are converted into a form that ADCIRC (or ADCIRC+SWAN) can use by the NAMtoOWI.pl script. This script uses the wgrib2 utility to extract the sea level u, v, and p values from the NAM output files (in grib2 format) and produce ascii data files. The ascii data are then reprojected from the NAM coordinate system (lambert conformal) to the ADCIRC coordinate system (geographic) using the awip_lambert_interp executable. A list of points must be provided where the reprojected and interpolated data should be calculated. Finally, the interpolated data are output in OWI (Ocean Weather Inc) format for use in ADCIRC or ADCIRC+SWAN (NWS=12).
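As a rough illustration of the kind of wgrib2 invocations involved (a hand-run sketch, not the exact commands issued by get_nam.pl or NAMtoOWI.pl; the file name is a placeholder):

# quick sanity check: list the records in a downloaded NAM grib2 file
wgrib2 nam_example.grib2 -s

# extract only the fields ASGS needs (u and v at 10 m, sea level pressure) as text
wgrib2 nam_example.grib2 -match 'UGRD:10 m above ground|VGRD:10 m above ground|PRMSL' -text uvp.txt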

This script is written in Perl.

get_flux.pl

If time-varying river flux boundary condition data (ADCIRC's fort.20 file) are available, and the ASGS has been configured to include river flux forcing, the get_flux.pl script retrieves the boundary condition files and formats them for use in ADCIRC. It must read the ADCIRC mesh file to determine the number of river flux boundary nodes so that it can interpret the data in the incoming flux files, since the file format is not self describing. The script also takes into account the current time in the ADCIRC simulation and the files that are currently available on the remote ftp site when constructing the flux boundary condition file that ADCIRC will actually use.

In $ASGS_CONFIG, the setting used to turn this on or off is,

VARFLUX

There are additional settings contained in config/mesh_defaults.sh that control how asgs_main.sh handles this portion of configuring ADCIRC, preparing input data, and handling resulting output files:

RIVERINIT
RIVERFLUX
HINDCASTRIVERFLUX

This script is written in Perl.

queue script generators

The ASGS supports many HPC platforms, and it can be assumed that each one is different from the others and has its own idiosyncrasies. As a result, ASGS employs a level of abstraction for generating queue scripts that involves templates and a Perl script.

It is sufficient for this section to simply point out that one may change both the template used and the userland executable that is used to generate the queue script:

QSCRIPTGEN=qscript.pl             # assumes it is in $SCRIPTDIR
QSCRIPTTEMPLATE=qscript.template  # also assumes this template is located in $SCRIPTDIR

Often the most expedient way to deal with a local queuing or batch submission system is to copy qscript.template, modify it to one's needs, and explicitly set QSCRIPTTEMPLATE to the new template inside of the configuration file, $ASGS_CONFIG.
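For example, a local copy of the template can be wired in with something like the following (the copy's name is a placeholder chosen for illustration):

# make a local copy of the stock template and adjust it for the local queueing system
cp $SCRIPTDIR/qscript.template $SCRIPTDIR/qscript.mycluster.template
# ... edit qscript.mycluster.template as needed ...

# then point ASGS at it from the configuration file ($ASGS_CONFIG)
QSCRIPTGEN=qscript.pl
QSCRIPTTEMPLATE=qscript.mycluster.template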

The default script is written in Perl.

Additional User Defined Files

  • $HOME/.ssh/config

ASGS uses ssh, scp, and tools built on them (like rsync) quite a bit. They are also very useful to anyone creating their own scripts to use with an ASGS run. ASGS therefore relies heavily on ssh's ability to use $HOME/.ssh/config to create host aliases that define any aspect of a connection that could otherwise be provided on the command line. It is critical to understand the important role a well defined $HOME/.ssh/config file plays; this file comes up in any discussion of defining the remote hosts that ASGS uses. For example, consider how this remote server is registered with ASGS for use as a TDS server for sending post-run files to a remote host:

#!/usr/bin/env bash

THREDDSHOST=chg-1.oden.tacc.utexas.edu # WWW hostname for emailed links
OPENDAPHOST=tacc_tds3                  # alias in $HOME/.ssh/config
OPENDAPPORT=":80"                      # ':80' can be an empty string, but for clarity it's here
OPENDAPPROTOCOL="http"
DOWNLOADPREFIX=/asgs
CATALOGPREFIX=/asgs
OPENDAPBASEDIR=/hurricane

(the entire file can be seen at https://github.com/StormSurgeLive/asgs/blob/master/ssh-servers/tacc_tds3.sh)

Implied above is that an alias has been created in $HOME/.ssh/config. Doing so greatly simplifies ASGS' ability to deal with remote hosts.
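A minimal sketch of the corresponding entry in $HOME/.ssh/config might look like the following (the HostName, User, and IdentityFile values are placeholders for illustration only):

Host tacc_tds3
    HostName some.tds.host.example.edu
    User myusername
    IdentityFile ~/.ssh/id_rsa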

A sample .ssh/config file is provided in the main repository.

More information on .ssh/config set up may be read in the READMEs directory of the ASGS repository.

  • $HOME/asgs-global.conf

This file is positioned to be used for a variety of settings, but it is most handy for defining credentials for any external services that ASGS uses.

For example, ASGS comes with its own sendmail program used to send email, since the mail environment on host machines can't be assumed. bin/asgs-sendmail, for example, reads SMTP host information from $HOME/asgs-global.conf.

A sample asgs-global.conf file is provided in the main repository.

More information on email set up may be read in the READMEs directory of the ASGS repository.

  • $HOME/.asgsh_profile

This file acts the way $HOME/.profile or $HOME/.bashrc is intended to act, but for asgsh, the command used to enter the ASGS Shell Environment. It is not required, but it is often used to define things that will be common across all asgsh sessions; for example, variables used to manage access to batch processing queues (a sketch follows the list below):

ACCOUNT
RESERVATION
QUEUENAME
SERQUEUENAME
GROUP
QOS
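A minimal sketch of such a $HOME/.asgsh_profile (the values shown are placeholders; use whatever your site's allocation and queues require):

# site-wide defaults picked up by every asgsh session
export ACCOUNT=MY-ALLOCATION-123     # placeholder allocation/account name
export QUEUENAME=normal              # placeholder parallel queue name
export SERQUEUENAME=serial           # placeholder serial queue name
export GROUP=G-000000                # placeholder Unix group (e.g., on TACC)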

config

The "config" subdirectory contains default values for a variety of essential capabilities (e.g., I/O) and resources (e.g., ADCIRC meshes) required by ASGS.

This directory once contained an archive of $ASGS_CONFIG files, but these have since been moved to their own repository:

https://github.com/StormSurgeLive/asgs-configs

doc

The contents of this directory have since been moved to the Wiki hosted at Github,

https://github.com/StormSurgeLive/asgs/wiki

input

The input subdirectory contains the input files (mesh files, nodal attributes files, etc) and templates for dynamically generated input files as well as dynamically generated queue scripts. It also contains the ptFile_gen.pl script for generating a set of points for reprojecting NAM data into geographic projection.

output

The output subdirectory contains all the routines that can be used when dealing with output and post processing. The code in this subdirectory can be categorized as follows:

  • Notification via email that new results are available, e.g., cera_notify.sh

  • File format transformations and conversions: asgsConvertR3ToNETCDF.pl, asgsConvertToNETCDF.pl, station_transpose.pl.

  • Controlling the generation of line plots, contour plots, images, etc.

  • Posting results files to other locations over the network: opendap_post2.sh

  • Archiving of results

output/PartTrack

This subdirectory contains infrastructure for performing particle tracking post processing.

output/POSTPROC_KMZGIS

This subdirectory contains code for generating Google Earth (kmz) images and GIS shape files. Much of the underlying technology was developed by RENCI.

output/TRACKING_FILES

This subdirectory contains infrastructure for performing particle tracking post processing.

PERL

ASGS installs its own Perl environment via perlbrew, an open source tool that is meant for creating and maintaining fully isolated Perl installations (including the familiar utilities as well as installed Perl modules).

The Perl modules ASGS requires that are available on CPAN are installed when ASGS is installed; other modules that are needed later can be installed via the cpanm utility, which is also installed when ASGS installs its own perl.

The ./PERL directory in the main ASGS repository is meant for including any Perl modules that are not available on CPAN. The number of custom Perl modules distributed in this way is limited, but there are some.

For the Perl literate, the $SCRIPTDIR/PERL directory is made available to the Perl environment installed under asgsh via the PERL5LIB environmental variable. This means that scripts written under the asgsh environment can directly use any module that is distributed with ASGS without any need to manipulate @INC, either via use lib or explicitly.
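One quick way to confirm this from inside asgsh is to inspect PERL5LIB and Perl's @INC directly (a sketch; the output will vary by installation):

# inside asgsh
echo $PERL5LIB              # should include $SCRIPTDIR/PERL
perl -le 'print for @INC'   # $SCRIPTDIR/PERL should appear in this list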

PERL-MODULE files

The following files manage what Perl modules are installed during the installation process started by init-asgs.sh and managed by asgs-brew.pl:

  • PERL-MODULES

These modules are installed without any qualifying options passed to cpanm.

  • PERL-MODULES.wget

These modules are installed by first downloading the .tgz file using wget then installing the tarball directly using cpanm (which is a supported way of installation).

  • PERL-MODULES.notest

These modules are installed by cpanm, but testing is skipped via the --notest option. This is a last resort and is added only after it is clear that failing tests are unavoidable and of no consequence for these modules. This also avoids using the --force flag. (A usage sketch follows below.)
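Roughly speaking, the three files correspond to the following styles of cpanm invocation (the module names and URL are hypothetical; asgs-brew.pl drives the real installation):

# PERL-MODULES: plain installation, tests and all
cpanm Some::Module

# PERL-MODULES.wget: fetch the tarball first, then hand it to cpanm
wget https://example.com/Some-Module-1.23.tar.gz
cpanm Some-Module-1.23.tar.gz

# PERL-MODULES.notest: install but skip the module's test suite
cpanm --notest Some::Other::Module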

platforms

ASGS has a well developed method for adding support for new platforms. The platforms distributed with ASGS are considered officially supported. New platforms are usually added when a major site used operationally gets a new HPC machine (e.g., TACC or LSU/LONI). Porting ASGS to new machines is a fairly straightforward process, but it does benefit from experience.

This directory contains all platforms supported via the new method. Some legacy platforms are still maintained in the platforms.sh file, but any new ones should be added in the ./platforms directory. This directory also contains a README that describes the process of adding a platform.

There is also information on how one may define their own set of platforms locally, which is done when installing ASGS for the first time using init-asgs.sh's -p option.

ssh-servers

ASGS also has a well developed method for adding new remote machines that are meant to be targets for uploading the results of model runs. The ssh-servers directory contains a description of this process in the README file. Unlike the platforms directory, there is no support for pointing to local remote server definitions; but if this is ever requested we will likely be happy to add it.

Please note, it is critical that the $HOME/.ssh/config file also contain an alias for any machines that are used by ASGS. Using ssh's native capability to manage a variety of hosts and their settings (including actual IP or host addresses and usernames) greatly reduces the code complexity in the scripts that deal with getting input data and posting output data.

tides

The tides subdirectory contains code and data for dynamically providing nodal factors and equilibrium arguments for simulations that include tidal forcing. It also has infrastructure for setting tidal boundary conditions.

patches

Where various patch sets are stored. Currently it contains a number of them for ADCIRC, but presumably this is also where to keep patch sets for any other third party codes. For example, for a brief time this directory contained a patch that was required to install ImageMagick properly with the ASGS-provided Perl environment.

ASGS Created Directories During Installation

  • $SCRIPTDIR/opt

The main root directory where everything that is built from upstream sources is placed. This includes perl, netCDF, and HDF5. Binaries are built and left in place here. See the value of PATH inside of asgsh to see how it differs from the login environment.

  • $SCRIPTDIR/opt/models/ADCIRCs

Where ADCIRC is built and installed. Referenced by metadata stored in the $SCRIPTDIR/.adcirc-meta directory.

  • $SCRIPTDIR/.adcirc-meta

This directory contains the metadata used by the list adcirc command.

Installation

For a supported platform, it is strongly recommended that the ./init-asgs.sh script be used. Efforts have been made to support the most common HPC installations known for hosting ASGS. We will add support for a platform by request, provided access to the platform is available for testing. ASGS also provides a way for sites to maintain their own set of supported platforms via init-asgs.sh's -p option; see the section on the platforms directory above for more information. Efforts have also been made to support Ubuntu and Debian based Linux environments; this is typically considered to be desktop or docker mode. It is not perfect, mainly because most known users are on the supported platforms, but we will address any issues related to Ubuntu or Debian relatively quickly.

The rest of this section is retained more for the quality and depth of information, even though its references to platforms are rather dated. It is still a good overview of the considerations involved in platform support.

The list of requirements for deployment of the ASGS at a new high performance computing facility is as follows:

Hardware Requirements

  • ASGS can complete a single tropical cyclone forecast on the North Carolina grid (version 6b, 295328 nodes) in 1 hour 10 minutes on 384 2-year old Nehalem cores. A 5 member ensemble on this machine and using this grid would therefore require 1920 cores.

  • The ASGS produced 15 GB of data per advisory with a single ensemble member on the NC v6b grid during Irene. Running one complete storm (estimated at 30 advisories) with 5 ensemble members would therefore require 15 x 5 x 30 = 2250 GB = 2.25 TB of uncompressed ascii (includes hourly output from all full domain files, also includes all SWAN output). We are presently completing the conversion of ASGS output to NetCDF which will provide significant data storage savings.

  • The OCPR Louisiana mesh has 1,088,315 nodes, so the use of this mesh to produce results for Louisiana would require 3.7x more resources than described for the NC v6b mesh, all other things being equal.

  • The preferred mechanism for distributing output data files is an opendap server, preferably one that shares a file system with the ASGS. A shared filesystem would allow results to be streamed to the OpenDAP system as they are produced, allowing visualization postprocessing to proceed in parallel with the generation of results.

  • Incoming connectivity requirements include http and ftp for downloading data. Outgoing connectivity requirements include mail as well as the ability to expose results to an opendap server.

Software Requirements

Note 1: The following is maintained in documentation for historical purposes, but all software requirements are handled by the ASGS Shell Environment during installation except standard libraries and utilities that are expected to be provided for by the system, including: compilers, automake tools (make, etc), version control software (git), etc.

Note 2: Computing environments managed by programs like modules (very common in the HPC realm) tend to cause some confusion. When ASGS is built, the environment one wishes to present inside of the ASGS Shell Environment (asgsh) should already be present, and it should already be your stable, familiar login environment. We strongly recommend against using programs like modules, or loading software with them, inside of ASGS itself. Doing so leads to a lot of instability in the operational environment, and has therefore been very strongly discouraged over the years. If you feel like this is unavoidable, talk to us.

  • The 2011stable version of the ASGS is compatible with the latest stable release version of ADCIRC v50. The latest trunk version of the ASGS always seems to require the latest trunk version of ADCIRC.

  • Running the ASGS requires minimal external libraries and executables, because the System has been designed with portability in mind. These libraries and executables are as follows: a Fortran compiler including mpif90, a C compiler, MPI, NetCDF, GNU make, svn client, bash, perl, tar, gzip, screen, and mail.

  • If graphical post processing is required, the list of requirements becomes much longer, including GMT, ImageMagick, ghostscript, gnuplot and RenciGETools and all their dependencies.

Installation Step-by-Step

Note: The following is more for historical and background information. It is now strongly recommended that one use ./init-asgs.sh to install ASGS. For a more modern treatment of the steps involved in installing ASGS, please inspect the internal documentation and code present in the main installer script called by ./init-asgs.sh, cloud/general/asgs-brew.pl.

Historical Steps

  • Determine if your platform already meets the requirements listed in the previous two sections. If not, the unfulfilled requirements must be resolved.

  • If the ASGS is to be installed on an HPC system, determine if the target HPC system is already supported by the ASGS. This determination can be made by checking the 'env_dispatch' subroutine in the 'asgs_main.sh' source code.

    • If the target HPC system is not found in the list in 'env_dispatch', then it is not already supported by the ASGS. Adding support for a new HPC system to the ASGS is not that difficult; please see the section entitled Pioneering below for details.

    • If the ASGS is to be installed on a personal linux machine (i.e., a desktop or laptop without a queueing system where you just run parallel jobs directly via mpiexec), then use 'desktop' as the environment.

    • If NAM data are to be used as input, compile 'input/awip_lambert_interp.F' to an executable called 'awip_lambert_interp.x' and place the executable in the ASGS base directory. Example instructions for compiling this program are listed in the comments at the top of the source file. This code is used with meteorological input from the NAM (North American Mesoscale) model to convert the data from a Lambert Conformal projection to geographic.

    • If NAM data are to be used as input, download and compile the wgrib2 package from NCEP's Climate Prediction Center (CPC): http://www.cpc.ncep.noaa.gov/products/wesley/wgrib2/ ... instructions for compiling this code are available from that site. The resulting executable should be named 'wgrib2' and placed in the ASGS base directory.

    • Compile the program 'tides/tide_fac.f' according to the instructions in the source code and name the resulting executable 'tide_fac.x', leaving it in the 'tides' subdirectory.

    • Go to the directory that contains the ADCIRC source code that will be used by the ASGS and compile 'adcprep', 'padcirc' and 'hstime'. If tropical cyclone forcing will be used, also compile 'aswip'. If wave coupling will be activated, also compile 'padcswan'. Leave the executable files in place.

    • Collect up the ADCIRC input files that would normally be required for the mesh that will be used in real-time operation. At a minimum, this includes a fort.14 (mesh) and fort.15 (control) file. It may also include a fort.13
      (nodal attributes), fort.26 (swan control), and swaninit files. Place these files in the 'input' subdirectory. The actual file names are arbitrary; the ASGS does not require them to be named fort.14 etc.

    • Make a copy of the fort.15, swaninit, and fort.26 files; convert the copies into template files by removing key parameters and replacing them with a special string that the ASGS will use to fill in the parameters during operation (analogous to filling out a form). The template versions of the control files have the string '.template' appended to their file names by convention, but the names are arbitrary. Have a look at the files 'input/FEMA_R3_fort.15.template' and 'input/FEMA_R3_fort.26.template' to get an idea of how to turn a control file into a template. It's not complicated, but may be slightly tedious. It only has to be done once.

    • If there was a list of elevation or meteorological recording stations in the fort.15 file, cut and paste the lists of stations into separate files with the same format as required by the fort.15 ... the files 'input/FEMA_R3_elev_stations.txt' and 'input/FEMA_R3_met_stations.txt' are examples. The station lists should not appear in the fort.15 template file.

Once the above steps are complete, the ASGS should be ready to run and produce results. The installation of programs required for post processing are not included in the above procedure, and will be covered in a later iteration of this document.

Instructions for configuring an instance of the ASGS and starting it up are found in the ASGS Operators Guide (in 'doc/ASGSOperatorsGuide.html').

Pioneering

Note: This section is also preserved for historical interest; there are valuable insights to be gained by reading it. Pioneering has given way to Porting, which can be done fairly efficiently these days by someone who is experienced with the ASGS code base. Anyone may recommend a platform for support; however, we require machine access, at least for a short while, to do the work. ASGS also supports privately maintained platform support via ./init-asgs.sh's -p option.

The term 'Pioneering' is used to refer to the process of installing and running the ASGS on an HPC system where it has never run before. Because the ASGS is not specific to any particular type of HPC system, it is not too difficult to do this, particularly for an ASGS Developer that already has experience with the target HPC system and knows its idiosyncrasies.

The differences between HPC platforms that are relevant to ADCIRC and ASGS generally fall into the following categories:

  • c, f90, and mpif90 compilers, or different versions of those compilers
  • netcdf libraries, versions of those libraries, and procedures for compiling programs that use netcdf
  • differences in the type of queueing system that different platforms use for job submission (e.g., ASGS currently supports PBS, SGE, LoadLeveler, LSF and mpiexec)
  • setting the PATH, LD_LIBRARY_PATH, and/or loaded "modules" interactively and in compute jobs so that programs can find their libraries when they run
  • technical requirements and IT policies for submitting MPI jobs, including
    • differences in the way that the number of cores is actually specified when submitting a job
    • method for handling reservations and special high priority queue submission
    • method of transitioning from a lower priority to a higher priority queue
  • technical requirements and IT policies for submitting single processor jobs (i.e., adcprep), including:
    • whether they can be run on a login node or must be processed through a queue
    • how different the submission of single processor jobs is from regular MPI jobs
      • different queue name?
      • different account number?
      • special characters on queue script submission line?
    • whether single processor jobs that require high memory (as adcprep often does on large meshes) have other special requirements for submission
  • method of transmitting numerical and/or graphical results to end users
  • optional: libraries available for graphical post processing on the target HPC system
  • optional: available facilities and methods for archiving data

These differences will be described more fully in each of the sections below.

Compilers

Note: ASGS officially supports two compilers: gfortran (< 10.0) and the Intel Compiler Suite. Other sets of compilers may be supported, but on all of the platforms where ASGS runs, these seem to be the two options available. Compiler support is something that takes work, but is not impossible. If needed, create an issue on the Github issue tracker, and the developers will consider it, especially if monetary or in-kind support is offered. Access to the machine environment will definitely be required for a time.

The issue of compilers and compiler versions generally comes up with ADCIRC, rather than the ASGS itself. The first step in moving ASGS to a new system is to put ADCIRC on that system and compile 'adcprep', 'adcirc', 'padcirc', 'padcswan', 'aswip' and 'hstime'. ADCIRC supports a wide variety of compilers, but different HPC platforms have different compilers, and different compilers balk at different things. In contrast, the Fortran utilities required by the ASGS are generally easy to compile with just about any Fortran compiler.

NetCDF

Note: ASGS installs its own NetCDF and HDF5 libraries; this is required to properly manage the fundamental requirements ADCIRC (and SWAN) place on these libraries. The following information is relevant, but historical.

The NetCDF libraries that are installed on HPC systems vary widely; some systems don't have them at all, others have them installed in a directory that you have to have in your PATH to get things to compile and run, while still others use a "module" system that requires the right modules to be loaded to compile and/or run programs with NetCDF.

If the PATH, LD_LIBRARY_PATH, or module state has to be set in a particular way to get NetCDF programs to run interactively, these settings will probably have to be duplicated in the queue scripts generated by the ASGS, which leads us to the next section.

Job Submission

Note: This detailed section contains historical references, but is relevant to how ASGS has traditionally approached handling the variety of batch user systems present on large institutional HPC resources.

Configuration variables that are important to interfacing with the system's queuing system are usually defined per platform (for legacy platforms in ./platforms.sh; for current platforms individually in the ./platforms directory) as follows:

QUEUESYS           # e.g., "PBS"
PPN                # number of (P)rocessors available (P)er compute (N)ode
QCHECKCMD          # a command
QSUMMARYCMD        # a command
QUOTACHECKCMD      # a command
ALLOCCHECKCMD      # a command
QUEUENAME          # parallel queue name
SERQUEUE           # serial queue name
SUBMITSTRING       # a command template with placeholders
QSCRIPTTEMPLATE    # noted above, defaults to "qscript.template"
QSCRIPTGEN         # noted above, defaults to "qscript.pl"
JOBLAUNCHER        # a command
REMOVALCMD         # a command
ACCOUNT            # typically a resource "service unit" (SU)  allocation group
GROUP              # Unix user group associated with disk quotas (e.g., on TACC)

All of the above variables may be overridden in either $ASGS_CONFIG or the ~/.asgsh_profile file. For example, some operators will very often set site-wide variables like ACCOUNT, GROUP, and PPN in their ~/.asgsh_profile, which results in much cleaner and more portable $ASGS_CONFIG files.

Historical Description

The main difference between different HPC platforms that is relevant to the ASGS is the machine-specific way in which jobs are submitted for execution. There are many subtle differences in the ways that different machines handle job submission that each require different information to be supplied in different ways. Consider a few simple examples:

  • Blueridge, a Dell Linux cluster at RENCI, uses PBS.

    • Job submission requires the number of "processors per node" to be specified in the queue script, as well as the total number of nodes that are requested (as opposed to requesting a certain number of processors) as well the name of the queue itself. If the number of CPUs is not evenly divisible by the number of processors per node, the ASGS must round the requested number of nodes up.

    • When running in the dedicated (high priority) queue on blueridge, the queue name must be specified as "armycore", whereas the normal priority queue is specified as "batch".

    • Furthermore, the number of processors per node is different for these two queues (PPN=12 if the queue name is "armycore" and PPN=8 if the queue name is "batch").

    • So, for example, for the ASGS to switch from a lower priority to higher priority queue, it has to be able to dynamically change the queue name, the number of processors per node, and recalculate the number of nodes, although the number of processors has not changed.

  • Diamond, a Cray cluster at ERDC also uses PBS and has the following requirements for job submission:

    • the queue name for single processor jobs is different from the queue name for MPI jobs

    • there is a "trick" in the submission process for single processor jobs that need a lot of RAM; it requires the ASGS to actually request 0 CPUs for the serial job

    • there are different queue names for differently sized jobs (small MPI jobs are not allowed in the queue for large jobs, and vice versa) ... this must be taken into account when testing ASGS on small meshes, then scaling up

    • if a dedicated reservation has been awarded (for high priority runs), the ASGS must start using special, one-time-only queue names for serial and parallel jobs

    • when running on a dedicated reserved queue, a different account number must be used with the new queue names; this account number cannot be used with the regular queues

The examples above illustrate that (a) different HPC platforms have different requirements for job submission, even if they use the exact same queueing system (PBS for example); and (b) a set of job submission rules and requirements that are simple enough to accommodate when manually running individual jobs on a single machine can require a surprising amount of attention to generalize and automate reliably for the full range of anticipated use cases.

The ASGS deals with all these issues using a template approach. That is, an ASGS Developer starts with some manually developed queue scripts for various types of jobs on the target platform, and abstracts them to a template (or a group of templates in some cases). There are several templates available for currently supported platforms (e.g., 'input/erdc.adcprep.template.pbs' and 'input/ranger.template.sge', etc) to use as examples.

The other mechanism that the ASGS uses to manage these requirements is the use of pluggable template filler scripts. The ASGS Developer must write a script or scripts to properly fill in the queue script template(s) described above. There are several existing template filler scripts (e.g., 'erdc.pbs.pl', 'loadleveler.pl', 'queenbee.pbs.pl', 'ranger.serial.pl', 'ranger.sge.pl', and 'tezpur.pbs.pl') for existing platforms that can be used as examples.

Finally, the ASGS Developer must update the 'env_batch()' subroutine in 'asgs_main.sh' to specify---among other things---the names of the template and template filler that will be used for that platform. When first starting the ASGS, the Operator simply selects the name of the computing environment on the command line, e.g., something like 'garnet' or 'blueridge' or 'desktop'. The ASGS checks to see if the specified machine is supported using the 'env_dispatch()' subroutine, which then sets the names of the queue script templates and queue script filler scripts that are appropriate for that platform.

Here is an illustration of this template-based configurability in action:

  • In order to run 'adcprep', the ASGS calls the 'prepFile()' subroutine, which checks to see if the queueing system was specified as either Sun Grid Engine (SGE, which is used on Ranger at TACC), or Portable Batch System (PBS, which is used on many other machines).

  • if the queue system is PBS

    a. the ASGS calls a queue script generation program, whose name is configurable
    b. it feeds the queue script generation program a queue script template, whose name is also configurable
    c. it provides a variety of command line options, such as the number of processors per compute node, the number of compute processors the job should run on, the account number for the job, and the name of the queue
    d. the resulting queue script is then submitted with 'qsub'
    e. the ASGS monitors for completion

  • if the queue system is SGE

    a. the ASGS performs a similar set of actions as for the PBS case above, but with a shorter set of command line options to the queue script generator
    b. the ASGS could also perform a "resubmit" step, since SGE on Ranger at TACC had a habit of intermittently rejecting valid compute jobs

  • if the queue system is neither PBS nor SGE, the ASGS executes 'adcprep' directly, that is, on the login node if running on an HPC machine, or (more likely) on the command line, if running on a desktop or laptop

Post Processing Steps

The following set of configuration variables governs what is available for post processing, which includes email notifications and the running of scripts. Please note that POSTPROCESS is a hook point, and the list of files in the Bash array is run in order. The implied base directory is $SCRIPTDIR/output, so the scripts listed there should be available from this reference point.

EMAILNOTIFY=yes                                            # required for email sending
INTENDEDAUDIENCE=general                                   # "general" | "developers-only" | "professional"
OPENDAPPOST=opendap_post2.sh                               # current standard way to transfer files to a TDS server
POSTPROCESS=( createMaxCSV.sh includeWind10m.sh createOPeNDAPFileList.sh $OPENDAPPOST )
OPENDAPNOTIFY="[email protected], [email protected]"       # comma/space delimited list of email addresses to notify
NOTIFY_SCRIPT=cera_notify.sh                               # notification script
TDS=( tacc_tds3 )                                          # list of TDS servers to `scp` files to via opendap_post2.sh

The ASGS already supports a great deal of built-in post processing options that have been required over the years by different sites. These include generation of jpgs of contour plots using FigureGen, generation of GIS shape files of results and generation of Google Earth visualizations of results using RenciGETools, generation of line plots of elevation and wind speed using gnuplot, particle tracking algorithms, conversion of ascii adcirc output to netcdf, and posting of results to external opendap servers. Many of these output types have their own dependencies, which may or may not be satisfied by the target HPC environment. Please see the 'output' subdirectory for samples of various post processing scripts that are already available.

Archiving

The final consideration in platform-, machine-, and site-specific customization is the use of data archiving. Some environments have a dedicated archival storage system, while others have no long term storage facilities at all. The ASGS handles site-specific archiving with the same approach as it does with post processing: it allows the ASGS Developer to supply the file name of a script that performs whatever actions will be necessary to archive the results. The ASGS executes the script once all ensemble members in a forecast have been completed.

Because of the nature of archiving (data compression, data copying or transmission which may take a long time), the archive script is executed with an ampersand, that is, it is run in the background. This allows the ASGS to get on with looking for the next cycle or advisory while the archive script packages up the previous cycle or advisory.

The configuration variables that are used in this step of ASGS' post processing are:

ARCHIVE
ARCHIVEBASE
ARCHIVEDIR

See more about these variables in the Operators Guide.

As with other variables in this section, these are commonly set per platform, but they are also commonly set in $ASGS_CONFIG for clarity or portability reasons.
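A sketch of what these might look like in a configuration file (the script name and paths are placeholders chosen for illustration; consult the Operators Guide and existing configs for real values):

ARCHIVE=my_archive_script.sh               # placeholder archive script (samples live in $SCRIPTDIR/output)
ARCHIVEBASE=/scratch/myuser/asgs-archive   # placeholder base directory for archived results
ARCHIVEDIR=2024-archives                   # placeholder subdirectory beneath ARCHIVEBASE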

Please see the 'output' subdirectory for samples of various existing archive scripts for hints and ideas about generating new archiving scripts.

Copying Final Results

The transmission of results to end users is considered part of the post processing in the ASGS. Different ASGS installations will vary widely on the type of post processing that they want to do in-situ, that is, right there in the HPC environment where the ASGS is running and the results are being generated.

This diversity is supported in the ASGS by allowing the ASGS Developer to simply supply the name of an executable (generally a shell script) that the ASGS should run at the end of each forecast. This allows the ASGS Developer to insert any type of post processing at all, and maintains the modular structure of the overall system.

The most basic post processing is simply transmission of numerical results to an external server. In this case, the post processing script would contain an 'scp' command to copy the result files. In order to avoid an interactive password prompt, the ASGS Operator's private key should be copied to the proper user account on the receiving server, and key authentication should be enabled for ssh.
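A bare-bones post processing script of that kind might look like the following sketch (the host alias, paths, and argument handling are placeholders; real scripts in $SCRIPTDIR/output are considerably more involved):

#!/usr/bin/env bash
# minimal example: push key result files for a completed forecast to a remote server
RUNDIR=$1                 # directory containing the completed forecast results
REMOTE=my_results_host    # placeholder alias defined in $HOME/.ssh/config
scp $RUNDIR/maxele.63.nc $RUNDIR/run.properties $REMOTE:/data/asgs/incoming/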

This area of the code is always under active development to make it easier, and the most state of the art upload script is output/opendap_post2.sh. This script works in tandem with the defined servers that are set in the configuration file using the TDS variable.

Meteorological Forcing

The ASGS is capable of using gridded met fields from NCEP's NAM model, or two different types of vortex forcing with ADCIRC's asymmetric wind models (NWS9 and NWS19). An outline of the procedure used by ASGS for obtaining and processing these input data is provided below.

NAM Forcing

The National Centers for Environmental Prediction (NCEP, a division of the National Oceanic and Atmospheric Administration, NOAA) runs the North American Mesoscale model (NAM) four times per day, producing a nowcast and a 3.5 day forecast of the atmosphere over North America.

The NAM data are produced on a Lambert Conformal grid with 12km spacing and written in a binary file format called grib2. The ASGS uses the following procedure to prepare these data for use in ADCIRC.

  • The 'get_nam.pl' program is used to connect to the NCEP ftp site once per minute and check the dates and times of the most recent NAM output files. If it finds files that have dates and times that are more recent than the most recent ADCIRC hotstart file, a new cycle is deemed to have started, and the new files are downloaded in preparation for a nowcast run.

  • Once the data have been downloaded, the ASGS runs the 'NAMtoOWI.pl' script to perform the following procedures:

    • extract the data that are required (u, v, p at 10m or MSL ... the grib2 files also have a lot of data that is not needed by the ASGS)

    • call 'awip_lambert_interp.x' on each grib2 file to reproject the data from Lambert Conformal to geographic projection and interpolate the data onto a grid that is regular in geographic coordinates (which is required for the OWI file format) using an external list of lat,lon points (e.g., 'input/ptFile.txt')

    • write the data in OWI format to fort.221 and fort.222 files, and generate a companion fort.22

  • The ASGS then uses 'control_file_gen.pl' and the meteorological files that were generated by 'NAMtoOWI.pl' to properly formulate the ADCIRC fort.15 control file (and SWAN fort.26 file, if any).

  • Once the nowcast job has been submitted and completed, the ASGS calls 'get_nam.pl' again to download the NAM forecast data for the same cycle, using the steps listed above.

The forecast data are not downloaded at the same time as the nowcast data, because NCEP releases the NAM data files as they are generated, which can take more than an hour. As a result, it makes more sense to download and run the nowcast while NCEP continues to post forecast data, then come back when the nowcast run is complete and download the forecast.

The ASGS may be able to finish the nowcast before NCEP finishes posting forecast data, depending on the size of the mesh, number and speed of the processors the ASGS is using, and queue congestion, if any. If that happens, 'get_nam.pl' just downloads the forecast data as they become available, and when they have all been downloaded, the ASGS goes on to set up and run the forecast as described above.

Parametric Vortex Forcing

The ASGS can also be configured to use data from the National Hurricane Center (NHC) to generate parameter files for use in ADCIRC's internal configurable asymmetric vortex model (ADCIRC NWS=19).

Background

The asymmetric vortex wind model input differs from most other types of wind input in ADCIRC in that it consists of storm parameters, rather than the meteorological data itself. ADCIRC then uses these parameters to generate the actual meteorological data internally at each node at every time step during the simulation. The result is that the fort.22 files are very small (e.g. 20kB), in comparison with fort.22 files that contain actual meteorological data.

The format of the fort.22 file for the asymmetric wind model is the ATCF (a.k.a. "BEST track") format. This format was developed by the U.S. Navy, and ATCF stands for Automated Tropical Cyclone Forecast. Historical tracks, real-time hindcast tracks and real-time forecast tracks may be found in this format. The format is documented in detail at the following web site:

http://www.nrlmry.navy.mil/atcf_web/docs/database/new/abrdeck.html

One important thing to remember about this format is that it looks like CSV data, but it is actually fixed column width data. Never change the width of the columns when you edit this data!

It is assumed by the asymmetric wind code within ADCIRC that the first entry in the fort.22 file corresponds to the cold start time (ADCIRC time=0) if the simulation is cold started, or to the hotstart time if the simulation is hotstarted. Therefore, the forecast period (column #6) needs to be edited by the workflow scripts to reflect the time of the forecast/nowcast for each track location (each line) in hours from the start of the simulation (0, 6, 12, 18, etc). The original data in that column depends on what type of best track format data is being used. The original data might have 0 or other numbers in that column.

The behavior of the configurable asymmetric meteorological model (NWS=19) is similar to its ancestor, the original asymmetric meteorological model (NWS=9). However, the configurable asymmetric vortex wind model was developed for several reasons: (1) to allow more visibility into the parameters, such as Rmax, that control a storm's size and shape and are calculated within the wind model code; (2) to allow the user to control parameters such as Rmax so that they may be adjusted by the user; and (3) to allow the user to deterministically compensate for input data that are missing or nonexistent (such as wind radii in various quadrants for a particular isotach).

The mechanism for achieving the goals described above is a preprocessing program called the asymmetric wind input preprocessor, or 'aswip', that takes the ATCF formatted input data that would normally be used for the NWS8 or NWS9 and adds columns to it that describe the following things:

  1. The Rmax from the hindcast (i.e., BEST lines) has been persisted into the Rmax column (described as MRD in the ATCF documentation) of the forecast (i.e., OFCL lines).

  2. The storm direction DIR and speed SPEED in the ATCF file have been replaced with the calculated direction and speed to be used by ADCIRC. The values are provided in the same format as the ATCF file to provide compatibility between methods. The speed is given in knots and the direction is given in compass coordinates, with zero degrees indicating North and values increasing clockwise.

  3. In the 2nd column after the storm name, the cycle number is provided. A 'cycle' is an entry or set of entries in the file that all have the same storm time or forecast period. For cycles that have more than one isotach, this value will be repeated for each isotach (starting from 1 for the first cycle in the file).

  4. The 3rd column after the storm name contains the number of isotachs that are reported for that particular cycle. This value is also repeated on each line for each isotach that is reported per cycle. For example, if the cycle has a 34kt and a 50kt isotach entry then this column will contain a '2' for both entries in that cycle.

  5. The following 4 columns contain the flags that tell the ADCIRC NWS 19 code whether or not to use a particular wind radius from the isotach under consideration. There is a flag for each quadrant. A 0 indicates that the wind radius for that isotach and quadrant will not be used. A 1 indicates that the wind radius for that isotach and quadrant will be used. For example: if only the 34kt isotach is provided, then all four wind radii must be used, and the columns will all be set to 1.

  6. In the next 4 columns, the calculated Rmax for each quadrant is listed in the following order: NE SE SW NW.

  7. The next column contains the overall Holland B value.

Another example: if 3 isotachs are provided then the columns may look like the following:

34 ... 3 0 0 0 0 ... 50 ... 3 0 0 1 1 ... 64 ... 3 1 1 0 0 ...

this indicates -

  • use NO radii from the 34 kt isotach
  • use the 3 & 4 quadrant radii from the 50 kt isotach
  • use the 1 & 2 quadrant radii from the 64 kt isotach

Users could potentially modify these flags in the input file to manually select which radii to use for each cycle.

Finally, one valuable aspect of this file format is that it can be used by NWS8, NWS9, or NWS19, since the original data have not been modified. The extra columns are used as input by NWS19, and as a result, they provide both metadata and control over these parameters.

Procedure

The steps that the ASGS uses to prepare the Forecast/Advisory data for use in ADCIRC are described below.

  • The ASGS uses the 'get_atcf.pl' script to check the NHC RSS feed (which is just an xml text file they keep on their web site). The RSS feed contains the current advisory number and a hyperlink to the html-formatted file containing the forecast/advisory text.

  • The 'get_atcf.pl' starts by downloading the latest ATCF formatted hindcast file from the NHC anonymous ftp site. The current name of the storm is parsed out of the hindcast file for use in 'get_atcf.pl' during parsing of the RSS feed; e.g., the storm name can change from TD TWO to TD BERTHA from one advisory to the next. The hindcast data are also used later when ASGS calls 'storm_track_gen.pl'.

  • The 'get_atcf.pl' script then downloads the NHC forecast/advisory as follows

    • if the ASGS has just started, it follows the hyperlink in the RSS feed and downloads the current forecast/advisory

    • if the ASGS has already run through a complete advisory cycle, it polls the NHC RSS feed once per minute ... when it detects that the advisory number has been updated in that file, it follows the hyperlink in the RSS xml file and downloads the current forecast/advisory, returning the new advisory number to the ASGS

  • The ASGS calls the 'nhc_advisory_bot.pl' script to parse the forecast/advisory text out of the html and parse the relevant parameters out of the forecast advisory text to produce an ATCF-formatted forecast file.

  • After 'get_atcf.pl' and 'nhc_advisory_bot.pl' have run, the ASGS has an ATCF-formatted hindcast file (containing the past and present states of the tropical cyclone) and an ATCF-formatted forecast file containing the present and predicted future states of the tropical cyclone.

  • The ASGS then calls 'storm_track_gen.pl', providing it with the current hotstart time in ADCIRC; the 'storm_track_gen.pl' script melds the hindcast and forecast data together such that it

    • starts at the current ADCIRC hotstart time

    • ends at the end of the nowcast period, which is the same as the end of the hindcast data, if the run is a nowcast run

    • ends at the end of the forecast period, if the run is a forecast run

    • has the specified track perturbations applied, if any perturbations were specified, and if the run is a forecast run

  • The 'storm_track_gen.pl' script writes out the melded data to an ATCF-formatted fort.22 file.

  • The ASGS then calls 'aswip' to process the fort.22 file produced by 'storm_track_gen.pl'; 'aswip' calculates the Rmax in each quadrant and writes out an NWS_19_fort.22 for use in ADCIRC.

After the NWS_19_fort.22 file has been produced, the ASGS feeds it to 'control_file_gen.pl', which generates a fort.15 file that will cover the time period specified by the parameters listed in the NWS_19_fort.22 file.

Development Strategy

For a full and up-to-date treatment of the development process, please view the file:

https://github.com/StormSurgeLive/asgs/blob/master/CONTRIBUTING

Appendix A: This Document

This document was prepared from the text file ASGSDevGuide.txt using software called asciidoc (http://www.methods.co.nz/asciidoc/). The document can be formatted as an html page with the command

asciidoc --backend html5 -a toc2 ASGSDevGuide.txt

or formatted as a pdf with the command

a2x --format=pdf -a toc2 ASGSDevGuide.txt