# Wavefield decomposition
Experimental method based on the approach of Tao, which constrains physical parameters by minimizing the upward-propagating S-wave flux at the top of the mantle. The method is unproven, or needs further development (improved constraints), for models with more than 2 layers.
Scripts are available under `seismic/inversion/wavefield_decomp`.
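As a conceptual illustration of the idea, the sketch below minimizes a stand-in "upgoing S-wave energy" as a function of the layer parameters (thickness H and Vp/Vs ratio). The objective function here is a synthetic placeholder, not the project's actual flux computation, which lives in the scripts above.

```python
# Minimal, self-contained sketch of the idea (NOT the project's actual
# objective): treat the upgoing S-wave energy as a function of the layer
# parameters and minimize it. A synthetic quadratic stands in for the real
# flux computation, purely to show the shape of the optimization.
import numpy as np
from scipy.optimize import minimize

TRUE_MODEL = np.array([35.0, 1.75])  # hypothetical (H in km, Vp/Vs) "truth"

def upgoing_s_energy(params):
    # Placeholder for the real calculation, which would propagate the
    # observed wavefield through the trial crustal model and measure the
    # energy of the S-wave transmitted upward at the top of the mantle.
    return float(np.sum((params - TRUE_MODEL) ** 2))

x0 = np.array([30.0, 1.70])  # starting guess
result = minimize(upgoing_s_energy, x0, method="Nelder-Mead")
print(result.x)  # recovers the synthetic minimum
```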
There are two ways of running `runners.py`: either as a single-station job (job type `single-job`) or as a batch of stations parallelized using MPI (job type `batch-job`). Up-to-date documentation for these run modes is maintained in the submodule README file.
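The sketch below shows, schematically, how a batch run might distribute stations across MPI ranks. This is not the actual `runners.py` API; the function and station names are placeholders for illustration only.

```python
# Schematic of MPI batch dispatch (hypothetical, simplified). Requires
# mpi4py; run with e.g. `mpiexec -n 4 python this_script.py`.
from mpi4py import MPI

def process_station(station_code):
    # Placeholder for the per-station inversion.
    return {"station": station_code, "status": "done"}

def run_batch(stations):
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    # Round-robin assignment of stations to ranks.
    my_stations = stations[rank::size]
    return [process_station(s) for s in my_stations]

results = run_batch(["AU.ARMA", "AU.CMSA", "AU.EIDS"])
```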
The resultant report can be interpreted to obtain estimates of the thickness and velocity ratio of each layer in the model.
Note that the `runners.py` script allows the same output HDF5 file name to be used for multiple jobs. This is achieved by generating a timestamp for each job and storing that job's results in the HDF5 tree under a root node named with the timestamp. Traceability data is thereby intrinsic to the storage format, which avoids the pitfalls of embedding such metadata in file names and of managing an explosion of output files. The `runners.py` script also stores the input job settings dictionary for each station, so that this job information is not lost.
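A minimal sketch of the timestamp-keyed layout described above, using h5py. The file, group, and attribute names here are assumptions for illustration, not the exact schema that `runners.py` writes.

```python
# Illustrative only: one timestamped root node per job, with the job
# settings stored alongside the per-station results.
import json
from datetime import datetime, timezone
import h5py

timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
job_settings = {"network": "AU", "solver": "example"}  # hypothetical settings

with h5py.File("wavefield_decomp_solutions.h5", "a") as f:  # "a": keep prior jobs
    job_root = f.create_group(timestamp)               # one root node per job
    job_root.attrs["job_settings"] = json.dumps(job_settings)
    sta = job_root.create_group("AU.ARMA")             # one subgroup per station
    sta.create_dataset("solution", data=[38.2, 1.74])  # e.g. (H, Vp/Vs)
```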
Consequently, when the `plot_nd_batch.py` script is run to generate a report, if it finds more than one solution stored in the file, the user will be prompted at the command line to select which solution to plot. The information presented to the user includes both the timestamp of the job and the job name (if run via PBS). Furthermore, when the solution is plotted to PDF, the solution's input settings are printed for each station.
It is important to note that re-using the output file does NOT make use of parallel HDF5 output. Even when run in MPI batch mode, all solutions are gathered to MPI rank 0 before being written, so file output occurs from a single process. Output file re-use is intended for repeat jobs run over time for a given network; it is not for running many jobs in parallel that all write concurrently to the same file, which may result in a runtime error. If you run multiple jobs that write to the same output file, make sure they will complete at very different times so that file output from two processes cannot overlap.
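A minimal sketch of this gather-to-rank-0, single-writer pattern, assuming mpi4py and h5py (the dataset names and values are illustrative):

```python
# Every rank computes its share, results are gathered to rank 0, and only
# rank 0 touches the HDF5 file, so no parallel-HDF5 support is needed.
import h5py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local_result = {"rank": rank, "solution": [38.0 + rank, 1.75]}
all_results = comm.gather(local_result, root=0)  # list on rank 0, None elsewhere

if rank == 0:
    with h5py.File("wavefield_decomp_solutions.h5", "a") as f:
        for r in all_results:
            f.create_dataset(f"rank_{r['rank']}/solution", data=r["solution"])
```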