These final verifications should be performed on the generated data that serve as inputs to the Cybershake simulations (e.g. SRF files, VMs, and simulation configuration files).

Here are the details about the code and how to run it:

Cybershake input verification code

Here is example output of the code for CS18.6:

 Cybershake v18.6 input verification

Note: the italic items are not implemented yet (17/10/2018), but will be added progressively.

Rupture: 

  1. Magnitude of ruptures

    Test 1a): Extract Mw for all ruptures from SRF.info, plot it against rupture area (i.e. Mw vs Area), and compare with scaling relationships (e.g. the Leonard (2011) relationship, Table 6). Data should be colored based on the tectonic type (which indicates which scaling relationship should be used).
    Pass criterion: if all the data sit on the scaling relationship lines, the magnitudes of the SRFs are correct. Eventually this can be automated, but visual examination is sufficient at present.

    Note: Make sure the stable continental region (SCR) equations are NOT used. Only use the DS and SS equations from Table 6 of Leonard (2011).
    Note: SRF.info contains this information (File Formats Used On GM)

    Test 1b) Plot the Mw of the SRF files (i.e., read it from the SRF file) versus the Leonard Mw.
    Pass criterion: If the data lie on the one-to-one line, the results are consistent. (A plotting sketch for Tests 1a and 1b is given after this list.)

  2. Number of rupture realizations per source
    Test: Plot the number of SRF files that exist in the corresponding directory for a given fault as a function of the source Mw (i.e. number of files vs. source Mw). This should be compared with the parametric model described elsewhere (number of ruptures vs. source Mw).
    Pass criterion: When rounded to an integer, the values should be in line with the parametric model. Eventually this can be automated, but visual examination is sufficient at present.

  3. Lower Seismogenic depth
    Test: Plot the lower seismogenic depth of a given fault from the national hazard model (Stirling et al. 2012) versus that from SRF.info (i.e. dbottom).
    Pass criterion: There should be two clusters of results on the plot. Some results should be on the one-to-one line (i.e., ruptures with seismogenic depth less than 12 km); the others should have dbottom values in SRF.info that are 3 km greater than the corresponding national hazard model values.

    Note: (Up until the 18p6 version of Cybershake) the 12 km and 3 km values are hard-coded in the SRF generation code.

  4. Spatial distribution of sources across NZ
    Test: Plot (on a map) one realization of the SRFs generated for all faults considered in the Cybershake runs. For faults that are not included in a Cybershake run, plot their geometry in a different color.
    Pass criterion: A researcher will look at the plot and search for anomalies in the fault geometries. The researcher should also be able to see which faults are not included in the Cybershake runs.

  5. Spatial distribution of sources across NZ based on tectonic type
    Test: Plot (on a map) the SRFs colored based on their tectonic type.
    Pass criterion: A researcher will look at the plot and search for anomalies in the tectonic-type assignment. The researcher should also be able to see which faults are not included in the Cybershake runs.
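
A minimal plotting sketch for Tests 1a and 1b, assuming Mw, rupture area, and tectonic type have already been extracted from the SRF.info files. The Leonard coefficients below are placeholders and should be replaced with the DS and SS values from Table 6 of Leonard (2011):

    # Sketch for Tests 1a/1b: compare SRF magnitudes against a Leonard-type
    # area scaling relationship.
    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder coefficients for Mw = a * log10(A) + b; replace with the
    # DS and SS values from Table 6 of Leonard (2011).
    LEONARD_COEFFS = {"SS": (1.0, 3.99), "DS": (1.0, 4.00)}

    def leonard_mw(area_km2, style):
        a, b = LEONARD_COEFFS[style]
        return a * np.log10(area_km2) + b

    def plot_mw_vs_area(mw, area_km2, style, out_png="mw_vs_area.png"):
        """Test 1a: Mw vs area, colored by tectonic type, with scaling lines."""
        mw, area_km2 = np.asarray(mw), np.asarray(area_km2)
        fig, ax = plt.subplots()
        for s in sorted(set(style)):
            mask = np.array([x == s for x in style])
            ax.semilogx(area_km2[mask], mw[mask], ".", label=s)
        a_line = np.logspace(1, 5, 100)
        for s, (a, b) in LEONARD_COEFFS.items():
            ax.semilogx(a_line, a * np.log10(a_line) + b, "--", label="Leonard " + s)
        ax.set_xlabel("Rupture area (km^2)")
        ax.set_ylabel("Mw (SRF.info)")
        ax.legend()
        fig.savefig(out_png)

    def plot_mw_vs_leonard(mw, area_km2, style, out_png="mw_vs_leonard.png"):
        """Test 1b: SRF Mw vs Leonard Mw; points should sit on the one-to-one line."""
        mw = np.asarray(mw)
        mw_leonard = np.array([leonard_mw(a, s) for a, s in zip(area_km2, style)])
        fig, ax = plt.subplots()
        ax.plot(mw_leonard, mw, ".")
        lims = [min(mw.min(), mw_leonard.min()), max(mw.max(), mw_leonard.max())]
        ax.plot(lims, lims, "k--", label="1:1")
        ax.set_xlabel("Leonard Mw")
        ax.set_ylabel("SRF Mw")
        ax.legend()
        fig.savefig(out_png)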

6. Statistical properties of hypocentre locations

Test 6a) For a given fault, plot the empirical distribution of the normalized s_hyp (i.e., s_hyp / rupture_length) across realizations versus the theoretical distribution used.
Note: for CS18p6, the normalized hypocentre location along the strike is based on a normal distribution with shyp_mu = 0.5 and shyp_sigma = 0.25 (from Mai et al. 2005 BSSA).

Test 6b) For a given fault, plot the empirical distribution of the normalized d_hyp (i.e., d_hyp / rupture_width) across realizations versus the theoretical distribution used.
Note: for CS18p6, the normalized hypocentre location along the dip is based on a Weibull distribution with dhyp_scale = 0.612 and dhyp_shape = 3.353 (from Mai et al. 2005 BSSA).

Pass criterion for both tests: The empirical and theoretical distributions should be consistent (visually).

Kolmogorov-Smirnov test bounds can also be used to assess the consistency mathematically; a sketch is given below.
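
A minimal sketch for Tests 6a/6b, assuming the normalized hypocentre locations have already been extracted from the realizations for a given fault; the distribution parameters are the CS18p6 values quoted above:

    # Sketch for Tests 6a/6b: compare the empirical hypocentre-location
    # distributions against the theoretical ones used for CS18p6.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    SHYP_DIST = stats.norm(loc=0.5, scale=0.25)          # along strike (Mai et al. 2005)
    DHYP_DIST = stats.weibull_min(c=3.353, scale=0.612)  # along dip (Mai et al. 2005)

    def check_hypocentre_distribution(samples, dist, label, out_png):
        """Overlay the empirical CDF on the theoretical CDF and run a KS test."""
        samples = np.sort(np.asarray(samples))
        ecdf = np.arange(1, len(samples) + 1) / len(samples)
        fig, ax = plt.subplots()
        ax.step(samples, ecdf, where="post", label="empirical")
        x = np.linspace(0, 1, 200)
        ax.plot(x, dist.cdf(x), "--", label="theoretical")
        ax.set_xlabel(label)
        ax.set_ylabel("CDF")
        ax.legend()
        fig.savefig(out_png)
        # Kolmogorov-Smirnov test for a quantitative consistency check
        return stats.kstest(samples, dist.cdf)

    # usage (hypothetical arrays of normalized locations for one fault):
    # check_hypocentre_distribution(shyp_values, SHYP_DIST, "s_hyp / rupture_length", "shyp_cdf.png")
    # check_hypocentre_distribution(dhyp_values, DHYP_DIST, "d_hyp / rupture_width", "dhyp_cdf.png")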


7. The difference between (NSTRK * subfault_size) and LEN, and between (NDIP * subfault_size) and WID

Test: For all faults, compute and plot the following two values from the SRF files:

(NSTRK * subfault_size - LEN) / LEN

(NDIP * subfault_size - WID) / WID

Pass criterion: Both values should be close to zero.
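
A minimal sketch of the Test 7 check, assuming NSTRK, NDIP, the subfault size, LEN, and WID have already been parsed from each SRF plane header:

    # Sketch for Test 7: relative difference between the discretised plane
    # dimensions (NSTRK/NDIP x subfault size) and the LEN/WID header values.
    def plane_size_errors(nstrk, ndip, subfault_size, length, width):
        """Return the two relative differences; both should be close to zero."""
        len_err = (nstrk * subfault_size - length) / length
        wid_err = (ndip * subfault_size - width) / width
        return len_err, wid_err

    # example with hypothetical header values
    print(plane_size_errors(nstrk=200, ndip=120, subfault_size=0.1, length=20.0, width=12.0))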

 

8. Create a text file containing the names of SRFs that should have been generated, but are not present in the directory (missing SRFs).
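
A minimal sketch for item 8, assuming a list of expected SRF names is available (e.g. from the fault/realization list used to generate them); the directory layout and file naming are assumptions:

    # Sketch for item 8: write a text file listing expected SRFs that are not
    # present in the SRF directory. Adapt the naming/layout to the actual run.
    from pathlib import Path

    def write_missing_srfs(expected_names, srf_dir, out_file="missing_srfs.txt"):
        present = {p.stem for p in Path(srf_dir).rglob("*.srf")}
        missing = sorted(set(expected_names) - present)
        Path(out_file).write_text("\n".join(missing) + "\n")
        return missing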

9. Plot the SRF and Stoch subfault sizes for all generated SRF and Stoch files.


Velocity Model:

 

1. Velocity model domains viewed spatially
Test: Plot all the VM domain boxes on a map view.
Pass criterion: A researcher will look at the plot and check for anomalies (e.g., very large VM domains, or VMs oriented in the wrong direction).

2. Core hour estimate as a function of magnitude of simulation
Test: Plot the core-hour estimates obtained by reading nx, ny, nz, dt, and total duration from the velocity model params.py files and running the core-hour calculation. Core hours should be plotted as a function of source magnitude.
Pass criterion: Inspect the plot for strange outliers in the calculations.
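
A minimal sketch for this check, assuming nx, ny, nz, dt, and the simulation duration have been read from each params.py; the rate constant is a placeholder for the project's actual core-hour formula:

    # Sketch for VM check 2: crude core-hour estimate vs source magnitude.
    import matplotlib.pyplot as plt

    RATE = 1.0e-8  # placeholder: core-seconds per (grid point x time step)

    def estimate_core_hours(nx, ny, nz, dt, sim_duration):
        nt = sim_duration / dt
        return nx * ny * nz * nt * RATE / 3600.0

    def plot_core_hours(mags, params_list, out_png="core_hours_vs_mw.png"):
        """params_list: one dict per simulation with nx, ny, nz, dt, sim_duration."""
        hours = [estimate_core_hours(**p) for p in params_list]
        fig, ax = plt.subplots()
        ax.semilogy(mags, hours, ".")
        ax.set_xlabel("Source Mw")
        ax.set_ylabel("Estimated core hours")
        fig.savefig(out_png)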

3. Simulation duration vs magnitude
Test: Plot the duration of the simulations from the velocity model params.py files versus the duration equation ( ??? ). Since the duration is a function of domain size (which depends strongly on magnitude), this can be a plot of duration vs. source magnitude.
Pass criterion: The results should be compared with predictive models (this needs to be defined more clearly).

4. Velocity model binary file size vs magnitude
Test: Plot the velocity model binary file size versus rupture magnitude.
Pass criterion: Inspect the plot for outliers.
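
A minimal sketch for this check; the binary file name (vs3dfile.s) is an assumption and should be replaced with whichever VM binaries are being checked:

    # Sketch for VM check 4: velocity model binary size vs rupture magnitude.
    from pathlib import Path
    import matplotlib.pyplot as plt

    def plot_vm_sizes(vm_dirs, mags, out_png="vm_size_vs_mw.png"):
        sizes = [Path(d, "vs3dfile.s").stat().st_size / 2**30 for d in vm_dirs]  # GB
        fig, ax = plt.subplots()
        ax.plot(mags, sizes, ".")
        ax.set_xlabel("Rupture Mw")
        ax.set_ylabel("VM binary size (GB)")
        fig.savefig(out_png)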

 

5. Create a text file containing the names of VMs that should have been generated, but are not present in the directory.


Ground motion simulation:

1. Transition frequency

Test: Plot the f0 (transition frequency) of the simulations from the velocity model params.py files and check it against the constraint f0 <= Vs_min / (5 * hh).

Pass criterion: The results should show a single point on the plot, or values below the f0 limit if a varying transition frequency is used for different simulations (finer discretization for a subset of faults). Action this test once there are non-trivial values for this.
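
A minimal sketch of this check; the parameter names (flo, min_vs, hh) are assumptions and should match whatever the params.py files actually expose:

    # Sketch for GM check 1: verify f0 <= Vs_min / (5 * hh) for each simulation.
    def check_transition_frequency(flo, min_vs, hh):
        f_max = min_vs / (5.0 * hh)
        return flo <= f_max, f_max

    # example with hypothetical values: 0.5 km/s minimum Vs, 0.4 km grid spacing
    print(check_transition_frequency(flo=0.25, min_vs=0.5, hh=0.4))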

2. Time discretization

Test: Plot the dt from the velocity model params.py files against the hard-coded value, and check that it satisfies dt < 0.495 * hh / V_max (Graves 1996 BSSA).
Pass criterion: The plot should show a single point. Action this test once there are non-trivial values for this.
Note: hh is the grid spacing of the VM; V_max is the maximum velocity in the VM.
Note: if a varying discretization and V_max are used for a specific subset of runs, those values should show on the plot.
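
A minimal sketch of this check; parameter names are assumptions:

    # Sketch for GM check 2: verify dt against the Graves (1996) stability
    # criterion dt < 0.495 * hh / V_max.
    def check_dt(dt, hh, v_max):
        dt_max = 0.495 * hh / v_max
        return dt < dt_max, dt_max

    # example with hypothetical values: 0.4 km grid spacing, 8.5 km/s maximum velocity
    print(check_dt(dt=0.02, hh=0.4, v_max=8.5))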

Tentative items for future verification tasks:

  1. Slip realization: random seed; mean slip; max slip
  2. Magnitude realizations of a given fault.
  3. Stress drop for realizations of a given fault
  4. Site-specific kappa value
  5. Vs, Vp, and rho min, mean, and max values for realizations of the velocity model
  6. Rupture velocity
  7. Rise time
  8. Rake
  9. Path duration for HF simulation 
  10. Time window length for HF simulation
  11. Campbell and Bozorgnia site amplification values (i.e., site_fmin, site_fmidbot)
  12. ...

 

 
