Plan to generate the 95th percentile of the simulation data and compare it with the averaged empirical data.
Use this value for each IM as an indicator of whether a simulation is out of line and something is most likely wrong.

How the 95th percentile was calculated

simulation_95 = all realisations for all faults combined together per site, with the 95th percentile calculated over that set for each site/IM
residual = log(simulation_95 / empirical_average)
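The threshold calculation above can be sketched as follows. This is a minimal illustration, not the actual pipeline code: the column names (site, IM, value, emp_avg) and the example numbers are assumptions made up for the sketch.

```python
import numpy as np
import pandas as pd

# Hypothetical simulation data: one row per (site, IM, realisation) value,
# with realisations for all faults already combined per site.
sim = pd.DataFrame({
    "site":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "IM":    ["PGA"] * 8,
    "value": [0.10, 0.12, 0.15, 0.40, 0.20, 0.22, 0.25, 0.60],
})

# Hypothetical averaged empirical data per site/IM.
emp = pd.DataFrame({
    "site":    ["A", "B"],
    "IM":      ["PGA", "PGA"],
    "emp_avg": [0.18, 0.30],
})

# 95th percentile over the combined set of realisations, per site/IM.
sim_95 = (
    sim.groupby(["site", "IM"])["value"]
       .quantile(0.95)
       .rename("sim_95")
       .reset_index()
)

# residual = log(simulation_95 / empirical_average)
merged = sim_95.merge(emp, on=["site", "IM"])
merged["residual"] = np.log(merged["sim_95"] / merged["emp_avg"])
print(merged)
```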

How to raise a flag for a given realisation when comparing

Given an IM realisation CSV, the residual is calculated between the empirical and simulation data.
Each site/IM pair is then tested against the 95th percentile threshold.
If any of the IMs for that site exceed the threshold, the site is flagged as failed.
If 50% of the sites within a CSV fail, the realisation fails the validation check and is flagged for closer manual inspection.
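The flagging logic above can be sketched as a small function. This is an assumption-laden sketch, not the real validation code: the frame layouts (columns site, IM, residual, threshold), the function name, and treating "50% of sites fail" as a >= comparison are all choices made for illustration.

```python
import pandas as pd

def flag_realisation(residuals: pd.DataFrame,
                     thresholds: pd.DataFrame,
                     site_fail_fraction: float = 0.5) -> bool:
    """Return True if the realisation should be flagged for manual inspection.

    residuals:  hypothetical frame [site, IM, residual] for one realisation CSV,
                where residual = log(simulation / empirical_average).
    thresholds: hypothetical frame [site, IM, threshold] holding the
                precomputed 95th percentile thresholds.
    """
    merged = residuals.merge(thresholds, on=["site", "IM"])
    # A site fails if ANY of its IMs exceeds the threshold.
    site_failed = (
        (merged["residual"] > merged["threshold"])
        .groupby(merged["site"])
        .any()
    )
    # The realisation fails when the failing fraction of sites reaches 50%.
    return bool(site_failed.mean() >= site_fail_fraction)
```

A realisation where one IM at half the sites exceeds its threshold would be flagged, while one with all residuals below threshold would pass.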


Plots for threshold values per IM (Sample)
