
1- Verification of inputs

Here are the checks that should be conducted before submitting simulation jobs to HPC:

Rupture: 

  1. Test: Extract Mw for all ruptures from SRF.info and plot them against the Leonard (2011) relationship (Table 6). A minimal sketch of this check is given after this list.
    Pass criterion: if all the data sit on the one-to-one line, the magnitudes of the SRFs are correct.

    Note: Make sure the stable continental region (SCR) relationship is NOT used. Only use the DS and SS equations from Table 6 of Leonard (2011).
    Note: SRF.info contains this information (see File Formats Used On GM)

  2. Test: Plot the number of realizations expected for a given fault, based on its magnitude, versus the number of SRF files that exist in the corresponding directory for that fault.
    Pass criterion: if the values sit on the one-to-one line, the number of realizations is correct.

  3. Test: Plot the lower seismogenic depth of a given fault from the national seismic hazard model (Stirling et al. 2012) versus the value from SRF.info (i.e. dbottom).
    Pass criterion: There should be two clusters of results on the plot. Some results should sit on the one-to-one line (the ruptures with a seismogenic depth of less than 12 km); the others should have dbottom values in SRF.info that are 3 km above the corresponding values from the national hazard model.

    Note: The 12 km and 2 km values are hard-coded in the SRF generation code.

  4. Test: Plot (on a map) one realization of the SRFs generated for all the faults considered in the Cybershake runs. For faults that are not included in a Cybershake run, plot their geometry in a different color.
    Pass criterion: A researcher inspects the plot for anomalies in the fault geometries, and confirms that the faults not included in the Cybershake runs are visible.
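As a rough illustration of check 1, the sketch below compares the SRF magnitudes against the Leonard area scaling and flags ruptures that fall off the one-to-one line. The rake-based DS/SS classification, the Leonard coefficients (verify them against Table 6 before relying on this), and the example input arrays are assumptions for illustration; in practice the magnitudes, areas and rakes are extracted from the SRF.info files.

```python
import numpy as np
import matplotlib.pyplot as plt


def leonard_mw(area_km2, rake):
    """Mw from rupture area, Leonard Table 6 style (interplate only, no SCR).

    Assumed coefficients: 3.99 (SS) and 4.00 (DS); assumed classification:
    rakes within 45 degrees of pure strike-slip (0/180) are treated as SS.
    """
    area = np.asarray(area_km2, dtype=float)
    rake = np.asarray(rake, dtype=float) % 360.0
    dist_ss = np.minimum.reduce([rake, np.abs(rake - 180.0), np.abs(rake - 360.0)])
    a = np.where(dist_ss <= 45.0, 3.99, 4.00)
    return a + np.log10(area)


def one_to_one_plot(mw_srf, mw_leonard, tol=0.05, out="mw_vs_leonard.png"):
    """Scatter SRF.info magnitudes against Leonard magnitudes; return failing indices."""
    mw_srf, mw_leonard = np.asarray(mw_srf), np.asarray(mw_leonard)

    fig, ax = plt.subplots()
    ax.scatter(mw_leonard, mw_srf, s=12, label="ruptures")
    lims = [min(mw_leonard.min(), mw_srf.min()), max(mw_leonard.max(), mw_srf.max())]
    ax.plot(lims, lims, "k--", label="one-to-one")
    ax.set_xlabel("Mw, Leonard (Table 6)")
    ax.set_ylabel("Mw, SRF.info")
    ax.legend()
    fig.savefig(out, dpi=200)

    return np.flatnonzero(np.abs(mw_srf - mw_leonard) > tol)


if __name__ == "__main__":
    # Hypothetical values standing in for data parsed from the SRF.info files.
    areas = [120.0, 450.0, 900.0]   # rupture areas, km^2
    rakes = [0.0, 90.0, -90.0]      # rake angles, degrees
    mw_srf = [6.07, 6.65, 6.95]     # magnitudes read from SRF.info
    bad = one_to_one_plot(mw_srf, leonard_mw(areas, rakes))
    print("ruptures off the one-to-one line:", bad)
```

The same one-to-one comparison can be reused for checks 2 and 3 by swapping the axes (realization counts, or dbottom versus the national hazard model values).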

Velocity model:

  1. Test: Plot all the VM domain boxes in a map view.
    Pass criterion: A researcher inspects the plot for anomalies (e.g., very large VM domains, or VM domains oriented in the wrong direction).

  2. Test: Plot the core-hour estimates calculated by reading nx, ny, nz, dt, and total_duration from the velocity model params.py files and running the core-hour calculation.
    Pass criterion: A researcher inspects the plot for outliers in the estimates.

  3. Test: Plot the duration of the simulations from the velocity model params.py files versus the values given by the duration equation.
    Pass criterion: The results should sit on the one-to-one line.

  4. Test: Plot the dt values from the velocity model params.py files against the hard-coded value, and check that they satisfy the stability criterion dt < 0.495 * hh / V_max from Graves (1996, BSSA). A minimal sketch of this check (and of check 5) is given after this list.
    Pass criterion: The plot should show a single point.

    Note: hh is the grid spacing of the VM; V_max is the maximum velocity in the VM.
    Note: if a different discretization and V_max are used for specific subsets of runs, those values should also show on the plot.

  5. Test: Plot f0 (the transition frequency) of the simulations from the velocity model params.py files and check that it satisfies f0 <= Vs_min / (5 * hh).
    Pass criterion: The results should show a single point on the plot.
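The sketch below illustrates checks 4 and 5 in code. It assumes each velocity model directory contains a params.py that defines hh, dt and flo under those names, and that V_max and Vs_min for the domain are known; the directory layout and example velocity values are hypothetical, so adjust them to the actual workflow.

```python
"""Sketch of VM checks 4 and 5: dt stability (Graves 1996) and f0 limit."""
from pathlib import Path
import runpy


def check_vm_params(params_path, v_max, vs_min):
    """Return (dt_ok, f0_ok) for one velocity model params.py file."""
    p = runpy.run_path(str(params_path))  # executes params.py, returns its globals
    hh, dt, f0 = p["hh"], p["dt"], p["flo"]  # assumed variable names

    dt_limit = 0.495 * hh / v_max   # Graves (1996) stability criterion
    f0_limit = vs_min / (5.0 * hh)  # transition-frequency limit

    return dt < dt_limit, f0 <= f0_limit


if __name__ == "__main__":
    # Hypothetical layout: one params.py per fault under Data/VMs/<fault>/,
    # and example V_max / Vs_min values (km/s) for the domain.
    for params_path in sorted(Path("Data/VMs").glob("*/params.py")):
        dt_ok, f0_ok = check_vm_params(params_path, v_max=8.0, vs_min=0.5)
        status = "OK" if (dt_ok and f0_ok) else "CHECK"
        print(f"{params_path.parent.name:20s} dt:{dt_ok} f0:{f0_ok} -> {status}")
```

The printed pass/fail flags complement the plots described above: all runs that share the hard-coded dt and f0 should collapse onto a single point, and any run flagged CHECK should appear as an outlier.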

2- Outputs from simulation 

Here are the checks on the results obtained from the simulations:

Intensity measures of a given scenario simulation:


  1. Test: Plot
    Pass criterion:
