...

  • ssh your_login@dev01-quakecore.canterbury.ac.nz (if you don't have a login, contact Melody or Daniel)
  • cd /var/www/seisfinder2/CLI
  • You will find a number of scripts there; enter your values in hazard_search_config.ini. A sample config for v17p9 is shown below:
Code Block
[v17p9]
TMPLOCATION_TO_CREATE_HAZARD_FILES = /rcc/home/projects/quakecore/cybershake/v17p9/Hazard/
EMPIRICAL_FILES_LOCATION = /rcc/home/projects/quakecore/cybershake/v17p9/Empiricals
FAULT_LIST_CSV_FILE_LOCATION = /rcc/home/projects/quakecore/cybershake/v17p9/cyber_shake_file_v17p9.csv

  • The script you need to run is hazard_search.py.
    • python hazard_search.py -h shows the help for the script.
    • python hazard_search.py -l returns all the simulation groups we currently have (this option is disabled, as we only have v17p9).
    • python hazard_search.py -s cybershake_version latitude longitude IM (or --single cybershake_version latitude longitude IM) creates a temporary directory for you and copies (if it exists) the empirical file for the closest station we have to the given latitude and longitude. It then prints instructions on how to calculate hazard and deaggregation.
    • python hazard_search.py -m cybershake_version csv_file IM (or --multi cybershake_version csv_file IM) takes as input a CSV file containing latitude,longitude lines for several locations of interest. As before, it copies all the relevant empirical files to a temporary folder. The printed instructions are shorter, but you are provided with bash scripts to automate the execution (see the examples after this list).
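
A hedged example of both modes, assuming v17p9 as the simulation group, SA_5p0 as the IM, and made-up coordinates and CSV contents purely for illustration:

Code Block
# Single location (hypothetical coordinates)
python hazard_search.py -s v17p9 -43.5321 172.6362 SA_5p0

# Multiple locations: sites.csv is a hypothetical file with one latitude,longitude pair per line, e.g.
#   -43.5321,172.6362
#   -41.2889,174.7772
python hazard_search.py -m v17p9 sites.csv SA_5p0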

 

Running hazard search for the entire Cybershake on Mahuika

 

  1. Go to /nesi/project/nesi00213/deploy/seisfinder2/CLI and edit the config (hazard_search_config.ini); a sketch of a possible v18p6 section is shown below.
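
A minimal sketch of what the [v18p6] section of hazard_search_config.ini could look like on Mahuika; the keys mirror the v17p9 sample above, but the paths here are placeholders, not the actual deployment values:

Code Block
[v18p6]
# Placeholder paths - replace with the real Mahuika locations for your deployment
TMPLOCATION_TO_CREATE_HAZARD_FILES = /nesi/project/nesi00213/deploy/cybershake/v18p6/Hazard/
EMPIRICAL_FILES_LOCATION = /nesi/project/nesi00213/deploy/cybershake/v18p6/Empiricals
FAULT_LIST_CSV_FILE_LOCATION = /nesi/project/nesi00213/deploy/cybershake/v18p6/cyber_shake_file_v18p6.csv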

...

Code Block
cp /nesi/project/nesi00213/deploy/seisfinder2/CLI/hazard/mahuika/execute_hazard_search.sl  /nesi/project/nesi00213/deploy/seisfinder2/CLI

3. Edit execute_hazard_search.sl

Code Block
#!/bin/bash
# script version: slurm
# Please modify this file as needed, this is just a sample
#SBATCH --job-name=hazard_search_multi
#SBATCH --account=nesi00213
#SBATCH --partition=prepost
#SBATCH --ntasks=1
####SBATCH --cpus-per-task=36
#SBATCH --time=00:59:00
#SBATCH --output hazard_search_multi-%j.out
#SBATCH --error hazard_search_multi-%j.err
###SBATCH --mail-type=all
###SBATCH --mail-user=test@test.com
###SBATCH --mem-per-cpu=16G
###SBATCH -C avx
#OpenMP+Hyperthreading works well for VM
###SBATCH --hint=nomultithread
## END HEADER
source machine_env.sh
date
srun python hazard_search.py -m v18p6 /nesi/project/nesi00213/deploy/cybershake/v18p6/non_uniform_whole_nz_with_real_stations-hh400_v18p6_land_lat_long.csv SA_5p0
date

...

Code Block
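# Convert an .ll station list to the latitude,longitude CSV expected by -m
# (assumes the .ll file lists longitude then latitude per line)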
cat XXX.ll |awk '{ print $2", "$1}' > XXX.csv

 

4. Submit the job with sbatch:

 

Code Block
sbatch execute_hazard_search.sl


5. When the job has completed, open the .out log file. The job creates a temporary directory under the TMPLOCATION_TO_CREATE_HAZARD_FILES you specified and places empirical files (symbolic links) and bash scripts containing Python commands there.
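
A minimal example of inspecting the results; the .out file name follows the --output pattern in execute_hazard_search.sl, and <jobid> and <TMPLOCATION_TO_CREATE_HAZARD_FILES> are placeholders for your own values:

Code Block
# Check the log for the reported temporary directory and further instructions
less hazard_search_multi-<jobid>.out
# List the generated empirical file links and bash scripts
ls <TMPLOCATION_TO_CREATE_HAZARD_FILES>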

...

Code Block
cp *.sl /nesi/project/nesi00213/deploy/seisfinder2/CLI/hazard
cd /nesi/project/nesi00213/deploy/seisfinder2/CLI/hazard

...

 

12. Submit the jobs: sbatch each .sl file individually, or use the loop below, which submits all the jobs at once.

 

Code Block
baes@mahuika02: /nesi/project/nesi00213/deploy/seisfinder2/CLI/hazard$ for sl in `ls parallel_hazard*.sl`; do sbatch $sl; done
Submitted batch job 69390
Submitted batch job 69391
Submitted batch job 69392
Submitted batch job 69393
Submitted batch job 69394
Submitted batch job 69395
Submitted batch job 69396
Submitted batch job 69397
Submitted batch job 69398
Submitted batch job 69399
Submitted batch job 69400
Submitted batch job 69401
Submitted batch job 69402
Submitted batch job 69403
Submitted batch job 69404
Submitted batch job 69405
Submitted batch job 69406
Submitted batch job 69407
Submitted batch job 69408
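
To monitor the submitted jobs, the standard Slurm commands apply, for example:

Code Block
# List your queued and running jobs
squeue -u $USER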

 

...