
1- Verification of inputs

These final verifications should be performed on the generated data that are 'inputs' to the Cybershake simulations (e.g. SRF, VM, simulation config files).

Rupture: 

  1. Magnitude of ruptures

    Test 1a): Extract Mw for all ruptures from SRF.info, plot them versus area (i.e. Mw vs Area), and compare them with scaling relationships (e.g. the Leonard (2011) relationship, Table 6). Data should be colored by tectonic type (which indicates which scaling relationship should be used).
    Pass criterion: If all the data sit on the scaling-relationship lines, the magnitudes of the SRFs are correct. Eventually this can be automated, but visual examination is sufficient at present.

    Note: Make sure the stable continental region (SCR) equations are NOT used. Only use the DS and SS equations from Table 6 of Leonard (2011).
    Note: SRF.info contains this information (File Formats Used On GM)

    Test 1b): Plot the Mw of the SRF files (i.e., read it from the SRF file) versus the Leonard Mw.
    Pass criterion: If the data are on the 1-1 line, the results are consistent.
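Tests 1a and 1b can be scripted once the Mw–area pairs are extracted from SRF.info. A minimal sketch, assuming the commonly quoted interplate constants from Leonard (2011) Table 6 — verify these against the paper before relying on the check:

```python
import math

# Leonard (2011), Table 6: Mw = log10(A) + C, with A in km^2.
# The interplate constants below are the commonly quoted values;
# confirm them against the paper (and do NOT use the SCR column).
LEONARD_C = {"SS": 3.99, "DS": 4.00}

def leonard_mw(area_km2, tectonic_type):
    """Mw predicted from rupture area for the given mechanism."""
    return math.log10(area_km2) + LEONARD_C[tectonic_type]

def mw_on_scaling_line(srf_mw, area_km2, tectonic_type, tol=0.05):
    """True if the SRF magnitude sits on the scaling line within tol."""
    return abs(srf_mw - leonard_mw(area_km2, tectonic_type)) <= tol
```

Plotting `srf_mw` against `leonard_mw` for every rupture gives the 1-1 comparison of Test 1b directly.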

  2. Number of rupture realizations per source
    Test: Plot the number of SRF files that exist in the corresponding directory for a given fault as a function of the source Mw (i.e. number of files vs. source Mw). This should be compared with the parametric model described elsewhere (num rup vs. source Mw).
    Pass criterion: When rounded to an integer, the values should be in line with the parametric model. Eventually this can be automated, but visual examination is sufficient at present.
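A sketch of the count check; the glob pattern is an assumption about the directory layout, and `model` stands for the parametric num-realizations-vs-Mw function (supplied by the user, not defined here):

```python
import glob
import os

def count_realisations(srf_dir, fault_name):
    """Number of SRF files present for a fault (one file per realization).
    Adjust the glob pattern to the directory layout actually in use."""
    return len(glob.glob(os.path.join(srf_dir, fault_name, "*.srf")))

def count_matches_model(n_files, source_mw, model):
    """Compare the file count with the parametric model (a callable
    mapping source Mw to an expected, possibly fractional, count)."""
    return n_files == round(model(source_mw))
```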

  3. Lower Seismogenic depth
    Test: Plot the lower seismogenic depth of a given fault from the national hazard model (Stirling et al. 2012) versus that from SRF.info (i.e. dbottom).
    Pass criterion: There should be two clusters of results on the plot. Some results should be on the one-to-one line (i.e., for the ruptures with seismogenic depth shallower than 12 km); the others should have dbottom values in SRF.info that are 3 km above the corresponding values from the national hazard model.

    Note: (Up until the 18p6 version of Cybershake) the 12 km and 2 km values are hard-coded in the SRF generation code.
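The two-cluster pass criterion can be automated with a small classifier. The offset is left as a parameter and should be set to the value actually hard-coded in the SRF generation code:

```python
def dbottom_cluster(nhm_depth_km, srf_dbottom_km, offset_km=3.0, tol_km=0.1):
    """Classify one (NHM depth, SRF.info dbottom) pair into the two
    expected clusters.  Depths are positive down, so 'offset_km above'
    means a smaller dbottom.  Returns "one_to_one", "offset", or
    "anomalous" (the last of which warrants manual inspection)."""
    if abs(srf_dbottom_km - nhm_depth_km) <= tol_km:
        return "one_to_one"
    if abs((nhm_depth_km - srf_dbottom_km) - offset_km) <= tol_km:
        return "offset"
    return "anomalous"
```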

  4. Spatial distribution of sources across NZ
    Test: Plot (on a map) one realization of the SRFs generated for all the faults considered in the Cybershake runs. If faults are not included in a Cybershake run, plot their geometry in a different color.
    Pass criterion: A researcher will look at the plot and search for anomalies in the fault geometries. The researcher should also see the faults that are not included in the Cybershake runs.

  5. Spatial distribution of sources across NZ based on tectonic type
    Test: Plot (on a map) the SRFs colored by their tectonic type.
    Pass criterion: A researcher will look at the plot and search for anomalies in the tectonic-type assignment. The researcher should also see the faults that are not included in the Cybershake runs.

 

6. Statistical properties of hypocentre locations

Test 6a) For a given fault, plot the normalized s_hype (i.e., s_hype / rupture_length) empirical distribution of the realizations versus the theoretical distribution used.
Note: for CS18p6, the hypocentre's normalized location along the strike is based on a normal distribution with shyp_mu = 0.5 and shyp_sigma = 0.25 (from Mai et al. 2005 BSSA).


Test 6b) For a given fault, plot the normalized d_hype (i.e., d_hype / rupture_width) empirical distribution of the realizations versus the theoretical distribution used.
Note: for CS18p6, the hypocentre's normalized location along the dip is based on a Weibull distribution with dhyp_scale = 0.612 and dhyp_shape = 3.353 (from Mai et al. 2005 BSSA).

Pass criterion for both tests: The empirical and theoretical distributions should be consistent (visually).

We can also use Kolmogorov–Smirnov test bounds to assess the consistency mathematically.
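The KS comparison needs only the empirical samples and the theoretical CDF. A dependency-free sketch using the CS18p6 along-strike distribution (normal, mu = 0.5, sigma = 0.25; truncation to [0, 1], if applied by the generator, is ignored here):

```python
import math

def norm_cdf(x, mu=0.5, sigma=0.25):
    """CDF of the theoretical along-strike distribution quoted for CS18p6."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(samples, cdf):
    """One-sample Kolmogorov-Smirnov statistic: the largest distance
    between the empirical CDF of the samples and the theoretical CDF."""
    xs = sorted(samples)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        c = cdf(x)
        # check the gap on both sides of the empirical-CDF step at x
        d = max(d, abs((i + 1) / n - c), abs(i / n - c))
    return d
```

For n realizations, a statistic above roughly 1.36 / sqrt(n) rejects consistency at the 5% level; the same function works for Test 6b with a Weibull CDF in place of `norm_cdf`.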


Velocity Model:

 

1. Velocity model domains viewed spatially
Test: Plot all the VM domain boxes on a map view.
Pass criterion: A researcher will look at the plot and see if there are any anomalies (e.g., very large VM domains, or VMs oriented in the wrong direction).

2. Core hour estimate as a function of magnitude of simulation
Test: Plot the core-hour estimates calculated by reading nx, ny, nz, dt, and total_duration from the velocity model params.py files and running the core-hour calculation. The core hours should be plotted as a function of the source magnitude.
Pass criterion: The plot should be examined for strange outliers in the calculations.
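A sketch of the estimate, assuming core hours scale with total grid-point updates (nx · ny · nz times the number of timesteps); the throughput constant is a placeholder to be replaced by the calibrated value from the actual core-hour calculation:

```python
def core_hour_estimate(nx, ny, nz, dt, total_duration,
                       gridpoint_steps_per_core_hour=1.0e9):
    """Rough core-hour estimate: total grid-point updates divided by a
    throughput constant.  The default throughput is a placeholder;
    substitute the calibrated constant from the real calculation."""
    n_timesteps = total_duration / dt
    return nx * ny * nz * n_timesteps / gridpoint_steps_per_core_hour
```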

3. Simulation duration vs magnitude
Test: Plot the duration of the simulations from the velocity model params.py files versus the equation for the duration ( ???). Since the duration is a function of domain size (which depends strongly on magnitude), this can be a plot of duration vs. source magnitude.
Pass criterion: The results should be compared with predictive models (this needs to be defined more clearly).

4. Velocity model binary file size vs magnitude
Test: Plot the velocity model binary file size versus rupture magnitude.
Pass criterion: Examine the plot for outliers.
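The expected size can also be checked directly, assuming each VM component file (vp/vs/rho) stores one 4-byte float per grid point; adjust if the file format differs:

```python
import os

def expected_vm_bytes(nx, ny, nz, bytes_per_value=4):
    """Expected size of one VM component file, assuming one 4-byte
    float per grid point (adjust if the format differs)."""
    return nx * ny * nz * bytes_per_value

def vm_file_size_ok(path, nx, ny, nz):
    """True if the file on disk matches the expected size exactly."""
    return os.path.getsize(path) == expected_vm_bytes(nx, ny, nz)
```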

Others (ideas that are not yet ready to test, but worth noting)

  1. Test: Compare the name and magnitude of the list of sources used for the automated Cybershake run on the HPC with the list of existing SRFs.
    Pass criterion: There should not be any differences identified.

  2. Test: Plot the dt from the velocity model params.py files against the hard-coded value; dt should also satisfy the equation ( dt < 0.495 * hh / V_max ) from Graves 1996 BSSA.
    Pass criterion: The plot should show a single point. (So action this test once there are non-trivial values for this.)
    Note: hh is the grid size of the VM; V_max is the maximum velocity in the VM.
    Note: if we have varying discretization and V_max for specific subsets of runs, those values should show on the plot.
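The Graves (1996) stability criterion quoted above is straightforward to check per simulation:

```python
def dt_is_stable(dt, hh, v_max):
    """Graves (1996, BSSA) stability criterion: dt < 0.495 * hh / V_max,
    with hh the grid spacing (km) and v_max the maximum velocity (km/s)."""
    return dt < 0.495 * hh / v_max
```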

  3. Test: Plot the f0 (transition frequency) of the simulations from the velocity model params.py files against the equation ( f0 <= Vs_min / (5 * hh) ).
    Pass criterion: The results should show a single point on the plot (or values lower than f0 if a varying transition frequency is used for different simulations – finer discretization for a subset of faults). (So action this test once there are non-trivial values for this.)
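The transition-frequency constraint from the equation above can likewise be checked per simulation:

```python
def f0_is_valid(f0, vs_min, hh):
    """Transition-frequency constraint f0 <= Vs_min / (5 * hh),
    with vs_min in km/s and hh (grid spacing) in km."""
    return f0 <= vs_min / (5.0 * hh)
```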

 

Some sort of SRF slip-realization verification would also be good, e.g. looking at the random seeds, or at the mean or maximum slip distribution.

2- Outputs from simulation 

Here are the checks on the results obtained from the simulations:

IMs

  1. Test: Plot the ratio of simulated to empirical IMs on a map view for the FIRST realization of a given fault (once that simulation is finished). Then compare the mean bias over all locations with a user-specified threshold.
    Pass criterion: If the mean bias is smaller than the threshold, continue with the rest of the simulations for that fault.
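A sketch of the bias gate, assuming the bias is defined as the mean of ln(sim/empirical) over stations (the exact bias definition used in practice should be confirmed):

```python
import math

def mean_log_bias(sim, empirical):
    """Mean over stations of ln(sim / empirical) for one IM."""
    ratios = [math.log(s / e) for s, e in zip(sim, empirical)]
    return sum(ratios) / len(ratios)

def continue_with_fault(sim, empirical, threshold):
    """True if the first realization's bias is within the user-specified
    threshold, i.e. the remaining realizations should be run."""
    return abs(mean_log_bias(sim, empirical)) < threshold
```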

 

 


 

 
