1- Verification of inputs
These final verifications should be performed on the generated data that are 'inputs' to the Cybershake simulations (e.g. SRF, VM, simulation config files).
Rupture:
- Magnitude of ruptures
Test 1a): Extract Mw for all ruptures from SRF.info, plot them against rupture area (i.e. Mw vs Area), and compare them with scaling relationships (e.g. the Leonard (2011) relationship, Table 6). Data should be colored based on tectonic type (which indicates which scaling relationship should be used).
Pass criterion: If all the data sit on the scaling-relationship lines, the magnitudes of the SRFs are correct. Eventually this can be automated, but visual examination will be sufficient at present.
Note: Make sure the stable continental region (SCR) equations are NOT used. Only use the DS and SS equations from Table 6 of Leonard (2011).
Note: SRF.info contains this information (File Formats Used On GM)
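As a sketch, the Test 1a comparison could be automated along these lines. The interplate coefficients below (DS: Mw = log10(A) + 4.00, SS: Mw = log10(A) + 3.99) are assumed values and should be verified against Table 6 of Leonard (2011) before use:

```python
import math

# Interplate coefficients for Mw = log10(A) + C, with A in km^2.
# ASSUMED values -- confirm against Table 6 of Leonard (2011).
# Do NOT include the stable continental region (SCR) coefficients.
LEONARD_C = {"DS": 4.00, "SS": 3.99}

def leonard_mw(area_km2, mech):
    """Expected Mw for a rupture of the given area and mechanism (DS or SS)."""
    return math.log10(area_km2) + LEONARD_C[mech]

def check_mw(mw_from_srf_info, area_km2, mech, tol=0.05):
    """True if the SRF.info magnitude sits on the scaling line (within tol)."""
    return abs(mw_from_srf_info - leonard_mw(area_km2, mech)) <= tol
```

Looping this over all SRF.info files and asserting `check_mw` for each would turn the visual examination into an automated pass/fail.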
Test 1b): Plot Mw of the SRF files (i.e., read it from the SRF file) versus the Leonard Mw.
Pass criterion: If the data are on the 1-1 line, the results are consistent.
- Number of rupture realizations per source
Test: Plot the number of SRF files that exist in the corresponding directory for a given fault as a function of the source Mw (i.e. number of files vs. source Mw). This should be compared with the parametric model that is described (num rup vs. source Mw).
Pass criterion: When rounded to an integer, the values should be in line with the parametric model. Eventually this can be automated, but visual examination will be sufficient at present.
- Lower seismogenic depth
Test: Plot the lower seismogenic depth of a given fault from the national hazard model (Stirling et al. 2012) versus that from SRF.info (i.e. dbottom).
Pass criterion: There should be two clusters of results on the plot. Some results should be on the one-to-one line (i.e., the ruptures that have seismogenic depth shallower than 12 km); the others should have dbottom values in SRF.info that are 3 km above the corresponding values from the national hazard model.
Note: (Up until the 18p6 version of Cybershake) the 12 km and 2 km values are hard-coded in the SRF generation code.
- Spatial distribution of sources across NZ
Test: Plot (on a map) one realization of the SRFs generated for all the faults considered in the Cybershake runs. If faults are not included in a Cybershake run, plot their geometry in a different color.
Pass criterion: A researcher will look at the plot and search for anomalies in the fault geometries. The researcher should also see the faults that are not included in the Cybershake runs.
- Spatial distribution of sources across NZ based on tectonic type
Test: Plot (on a map) SRFs colored based on their tectonic type
Pass criterion: A researcher will look at the plot and search for anomalies in the tectonic-type assignment. The researcher should also see the faults that are not included in the Cybershake runs.
- Statistical properties of hypocentre locations (comment from BB, for KT to clarify specifics)
Test: For every rupture realization, extract the shypo and dhypo values and normalize by the strike length and dip width. Plot the empirical distributions vs. the parametric models.
Pass criterion: The empirical distributions should be consistent with the parametric models.
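A minimal sketch of the hypocentre check. The parametric models are not specified here, so only the normalization and the empirical CDF (for plotting against whichever model the generator is meant to use) are shown; note that depending on the SRF convention, shypo may be measured from the fault centre rather than the end, in which case it should be shifted by half the strike length first:

```python
import numpy as np

def normalized_hypocentres(shypo, dhypo, strike_length, dip_width):
    """Normalize hypocentre coordinates to [0, 1] along strike and down dip.

    shypo/dhypo: hypocentre positions (km) for the realizations of a fault;
    strike_length/dip_width: fault dimensions (km). Values outside [0, 1]
    indicate a problem in the SRF generation.
    """
    s = np.asarray(shypo, dtype=float) / strike_length
    d = np.asarray(dhypo, dtype=float) / dip_width
    return s, d

def empirical_cdf(x):
    """Sorted sample and its empirical CDF, ready to plot against the
    parametric model's CDF."""
    x = np.sort(np.asarray(x, dtype=float))
    return x, np.arange(1, len(x) + 1) / len(x)
```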
Velocity model:
- Velocity model domains viewed spatially
Test: Plot all the VM domain boxes on a map view.
Pass criterion: A researcher will look at the plot and check for anomalies (e.g., very large VM domains, or VMs oriented in the wrong direction).
- Core-hour estimate as a function of simulation magnitude
Test: Plot the core-hour estimates calculated by reading nx, ny, nz, dt, and total_duration from the velocity model params.py files and running the core-hour calculation. The core hours should be plotted as a function of source magnitude.
Pass criterion: The plot should be looked at to find strange outliers in the calculations.
- Simulation duration vs magnitude
Test: Plot the duration of the simulations from the velocity model params.py files versus the equation for the duration ( ???). Since the duration is a function of domain size (which depends strongly on magnitude), this can be a plot of duration vs. source magnitude.
Pass criterion: The results should be compared with predictive models (need to define this more clearly).
- Velocity model binary file size vs magnitude (comment from BB, for KT to clarify specifics)
Test: Compare binary file sizes.
Pass criterion: Used simply to identify outliers.
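Several of the checks above read nx, ny, nz, dt, and total_duration from the params.py files. A minimal sketch of loading those values and flagging file-size outliers, assuming each VM component file is a flat binary of single-precision floats with one value per grid point (an assumption to verify against the actual VM writer):

```python
import os
import runpy

def load_vm_params(params_path):
    """Execute a velocity-model params.py and return its variables as a dict."""
    return runpy.run_path(params_path)

def expected_vm_bytes(nx, ny, nz, bytes_per_value=4):
    """Expected size of one VM component file, assuming a flat binary of
    4-byte floats, one per grid point (assumption -- verify against the
    VM writer before relying on this)."""
    return nx * ny * nz * bytes_per_value

def file_size_outlier(path, nx, ny, nz):
    """True if the on-disk size differs from the expected size."""
    return os.path.getsize(path) != expected_vm_bytes(nx, ny, nz)
```

Since magnitude drives domain size, plotting `os.path.getsize(path)` against source Mw gives the outlier view described above.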
Others (ideas that are not yet ready to test, but worth noting)
- Test: Compare name and magnitude of the list of sources used for the automated Cybershake Run on HPC with the list of existing SRFs.
Pass criterion: There should be no differences identified.
- Test: Plot the dt from the velocity model params.py files against the hard-coded value; dt should also satisfy the equation dt < 0.495 * hh / V_max from Graves (1996, BSSA).
Pass criterion: The plot should show a single point. (so action this test once there are non-trivial values for this)
Note: hh is the grid spacing of the VM; V_max is the maximum velocity in the VM.
Note: If we have a varying discretization and V_max for specific sub-set runs, those values should show on the plot.
- Test: Plot f0 (the transition frequency) of the simulations from the velocity model params.py files against the equation f0 <= Vs_min / (5 * hh).
Pass criterion: The results should show a single point on the plot (or values lower than f0 if a varying transition frequency is used for different simulations – finer discretization for a subset of faults). (Action this test once there are non-trivial values for this.)
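The two criteria above can be checked directly. A minimal sketch using the equations as stated (dt < 0.495 * hh / V_max from Graves 1996; f0 <= Vs_min / (5 * hh)):

```python
def dt_is_stable(dt, hh, v_max):
    """Graves (1996) stability criterion: dt < 0.495 * hh / V_max,
    with hh the grid spacing (km) and V_max the maximum velocity (km/s)."""
    return dt < 0.495 * hh / v_max

def f0_is_resolved(f0, vs_min, hh):
    """Transition-frequency criterion: f0 <= Vs_min / (5 * hh), i.e. at
    least five grid points per minimum wavelength at f0."""
    return f0 <= vs_min / (5.0 * hh)
```

Running both functions over every params.py would automate these checks once non-trivial values exist.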
Some sort of SRF slip-realization verification would be good, e.g. looking at the random seeds, or at the mean or max slip distribution.
2- Outputs from simulation
Here are the checks on the results obtained from the simulations:
IMs
- Test: Plot the ratio of simulated to empirical IMs on a map view for the FIRST realization of a given fault (once the simulation is finished). Then compare the mean bias across all locations with a user-specified threshold.
Pass criterion: If the mean bias is smaller than the threshold, continue with the rest of the simulations for that fault.
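A sketch of this gate, assuming "bias" means the natural log of the simulated-to-empirical IM ratio at each location (a common convention – adjust if the project defines bias differently):

```python
import numpy as np

def mean_bias(sim_ims, emp_ims):
    """Mean log residual ln(sim / empirical) across locations (assumed
    bias definition -- adjust if the project uses a different one)."""
    sim = np.asarray(sim_ims, dtype=float)
    emp = np.asarray(emp_ims, dtype=float)
    return float(np.mean(np.log(sim / emp)))

def continue_fault(sim_ims, emp_ims, threshold):
    """Gate for the remaining realizations of a fault: proceed only if the
    absolute mean bias of the first realization is below the threshold."""
    return abs(mean_bias(sim_ims, emp_ims)) < threshold
```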