
This document explains how to use hazard_search.py. Note that it currently works only for the CyberShake v17p9 run.

  • ssh your_login@dev01-quakecore.canterbury.ac.nz (if you don't have a login contact Melody or Daniel)
  • cd /var/www/seisfinder2/CLI
  • This directory contains many scripts. Enter the required values in hazard_search_config.ini; a sample config for v17p9 is shown below:
    • [v17p9]
      TMPLOCATION_TO_CREATE_HAZARD_FILES = /rcc/home/projects/quakecore/cybershake/v17p9/Hazard/
      EMPIRICAL_FILES_LOCATION = /rcc/home/projects/quakecore/cybershake/v17p9/Empiricals
      FAULT_LIST_CSV_FILE_LOCATION = /rcc/home/projects/quakecore/cybershake/v17p9/cyber_shake_file_v17p9.csv

       

  • The one that you need to run is hazard_search.py.
    • python hazard_search.py -h will show the help for the script.
    • python hazard_search.py -l returns all the simulation groups we currently have (currently disabled; only v17p9 is available).
    • python hazard_search.py -s cybershake_version latitude longitude IM (or --single cybershake_version latitude longitude IM) creates a temporary directory for you and copies (if they exist) the empirical files for the closest station we have to the given latitude and longitude. It also prints instructions on how to calculate hazard and deaggregation.
    • python hazard_search.py -m cybershake_version csv_file IM (or --multi cybershake_version csv_file IM) takes as input a CSV file containing latitude,longitude lines for several locations of interest. As before, it copies all the relevant empirical files to a temporary folder. The printed instructions are shorter, but you are provided with bash scripts to automate the execution. Example invocations are sketched below.
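
For illustration only, a single-location query and a multi-location query might look like the following (the coordinates, CSV filename and IM name are placeholders; check python hazard_search.py -h for the exact usage):

# single location (latitude longitude), hypothetical coordinates near Christchurch
python hazard_search.py -s v17p9 -43.53 172.63 SA_5p0

# multiple locations from a CSV of latitude,longitude lines (my_sites.csv is a placeholder)
python hazard_search.py -m v17p9 my_sites.csv SA_5p0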

 

Running hazard search for the entire CyberShake on Mahuika

 

  1. Go to /nesi/project/nesi00213/deploy/seisfinder2/CLI and edit the config file:

[database]
SQLITE_DB_PATH = /home/baes/seisfinderdb.db

[v18p6]
TMPLOCATION_TO_CREATE_HAZARD_FILES = /home/baes/cybershake/v18p6/Hazard/
EMPIRICAL_FILES_LOCATION = /nesi/project/nesi00213/deploy/empiricals/v18p5
FAULT_LIST_CSV_FILE_LOCATION = /nesi/project/nesi00213/deploy/cybershake/v18p6/fault_list.csv
LL_FILE = /nesi/project/nesi00213/deploy/cybershake/v18p6/non_uniform_whole_nz_with_real_stations-hh400_v18p6_land.ll
IM_PATH= /nesi/project/nesi00213/deploy/cybershake/v18p6/IMs

2. Copy the Slurm script template to the CLI directory:

cp /nesi/project/nesi00213/deploy/seisfinder2/CLI/hazard/mahuika/execute_hazard_search.sl  /nesi/project/nesi00213/deploy/seisfinder2/CLI

3. Edit execute_hazard_search.sl:

#!/bin/bash
# script version: slurm
# Please modify this file as needed, this is just a sample
#SBATCH --job-name=hazard_search_multi
#SBATCH --account=nesi00213
#SBATCH --partition=prepost
#SBATCH --ntasks=1
####SBATCH --cpus-per-task=36
#SBATCH --time=00:59:00
#SBATCH --output hazard_search_multi-%j.out
#SBATCH --error hazard_search_multi-%j.err
###SBATCH --mail-type=all
###SBATCH --mail-user=test@test.com
###SBATCH --mem-per-cpu=16G
###SBATCH -C avx
#OpenMP+Hyperthreading works well for VM
###SBATCH --hint=nomultithread
## END HEADER
source machine_env.sh
date
srun python hazard_search.py -m v18p6 /nesi/project/nesi00213/deploy/cybershake/v18p6/non_uniform_whole_nz_with_real_stations-hh400_v18p6_land_lat_long.csv SA_5p0
date
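
If you want to run a different simulation group, locations file or intensity measure, only the srun line above needs changing. For example (the CSV path and IM name below are placeholders; the IM must be one available in your IM files, such as the SA_5p0 used above):

srun python hazard_search.py -m v18p6 /path/to/my_locations.csv SA_5p0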

To generate a locations .csv file from a .ll file, you can use the following command:

cat XXX.ll |awk '{ print $2", "$1}' > XXX.csv
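
This assumes each .ll line is of the form longitude latitude [station name]; the awk swaps the first two columns so that each output line becomes latitude, longitude. For example (values illustrative):

echo "172.63 -43.53 CCCC" | awk '{ print $2", "$1}'
# prints: -43.53, 172.63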

 

4. Submit the job:

sbatch execute_hazard_search.sl


5. When the job has completed, open the .out log file. The job creates a temp directory under the TMPLOCATION_TO_CREATE_HAZARD_FILES you specified, and places symbolic links to the empirical files there, along with bash scripts that contain Python commands.

6. In this example, the temp location is /home/baes/cybershake/v18p6/Hazard/hazard_search_5F73LY_baes

7. In the temp dir, find hazard_calcs.sh. For the entire CyberShake this file can be very long. We will run its commands in parallel as Slurm job steps, but the file first needs to be split to avoid Slurm complaining about too many lines.
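
To get a sense of how much work is queued before splitting, you can count the command lines (illustrative):

wc -l hazard_calcs.sh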

8. Copy an .sl template to the temp dir and navigate to the directory.

cp /nesi/project/nesi00213/deploy/seisfinder2/CLI/hazard/mahuika/parallel_hazard_sl.template ~/cybershake/v18p6/Hazard/hazard_search_5F73LY_baes
cd ~/cybershake/v18p6/Hazard/hazard_search_5F73LY_baes

9. Edit the template file; in particular, set HAZARD_CALC_SH to point at your own temp directory. (This step could be automated; see the note after the template.)

#!/bin/bash
# script version: slurm
# Please modify this file as needed, this is just a sample
#SBATCH --job-name=parallel_hazard_%NUM%
#SBATCH --account=nesi00213
#SBATCH --partition=large
#SBATCH --nodes=1
###SBATCH --ntasks=36
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=1
#SBATCH --time=03:59:00
#SBATCH --output parallel_hazard-%NUM%.%j.out
#SBATCH --error parallel_hazard-%NUM%.%j.err
###SBATCH --mail-type=all
###SBATCH --mail-user=test@test.com
###SBATCH --mem-per-cpu=16G
###SBATCH -C avx
#OpenMP+Hyperthreading works well for VM
#SBATCH --hint=nomultithread
## END HEADER
source machine_env.sh
HAZARD_CALC_SH=/home/baes/cybershake/v18p6/Hazard/hazard_search_5F73LY_baes/hazard_calcs_uniq_%NUM%.sh
count=0
# read HAZARD_CALC_SH line by line, collecting the commands and skipping comment lines
while IFS='' read -r line || [[ -n "$line" ]]; do
    if [[ ${line:0:1} == '#' ]]
    then
        echo "comment:$line"
    else
        commands[count]=${line}
        count=$(( $count + 1 ))
    fi
done < $HAZARD_CALC_SH
# launch each command as its own job step; the allocation above limits how many run concurrently
for cmnd in "${commands[@]}"
do
    srun -n1 --exclusive $cmnd &
done
wait
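
One way to set HAZARD_CALC_SH without editing the file by hand (an illustrative one-liner, run from inside your temp directory, replacing the example path shown in the template above):

sed -i "s|/home/baes/cybershake/v18p6/Hazard/hazard_search_5F73LY_baes|$PWD|" parallel_hazard_sl.template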

10. Run split_sh.sh to split the sh file into many smaller ones.

bash /nesi/project/nesi00213/deploy/seisfinder2/CLI/hazard/mahuika/split_sh.sh hazard_calcs.sh

 

This does a number of things: it filters out all the duplicate lines in the sh file and splits the result into chunks of 1000 lines each, producing hazard_calcs_uniq_0.sh, hazard_calcs_uniq_1000.sh, ..., hazard_calcs_uniq_xxxxx.sh and matching .sl files. A rough sketch of the equivalent manual steps is shown below.
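
Conceptually, the deduplication and chunking are similar to the following (illustrative only; the real split_sh.sh also names the chunks by starting line and generates the matching .sl files from parallel_hazard_sl.template):

# drop duplicate command lines, then split into 1000-line chunks
sort -u hazard_calcs.sh > hazard_calcs_uniq.sh
split -l 1000 -d hazard_calcs_uniq.sh hazard_calcs_uniq_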

11. Copy the .sl files to /nesi/project/nesi00213/deploy/seisfinder2/CLI/hazard/ and navigate there. (This could be automated.)

cp *.sl /nesi/project/nesi00213/deploy/seisfinder2/CLI/hazard
cd /nesi/project/nesi00213/deploy/seisfinder2/CLI/hazard

 

12. Submit the jobs: sbatch each .sl individually, or use the loop below, which submits all of them at once.

 

baes@mahuika02: /nesi/project/nesi00213/deploy/seisfinder2/CLI/hazard$ for sl in `ls parallel_hazard*.sl`; do sbatch $sl; done
Submitted batch job 69390
Submitted batch job 69391
Submitted batch job 69392
Submitted batch job 69393
Submitted batch job 69394
Submitted batch job 69395
Submitted batch job 69396
Submitted batch job 69397
Submitted batch job 69398
Submitted batch job 69399
Submitted batch job 69400
Submitted batch job 69401
Submitted batch job 69402
Submitted batch job 69403
Submitted batch job 69404
Submitted batch job 69405
Submitted batch job 69406
Submitted batch job 69407
Submitted batch job 69408
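
To monitor progress, you can watch the queue until all jobs finish, e.g.:

squeue -u $USER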

 

13. Hazard map. When everything has completed, you can plot the hazard map. For details, see https://github.com/ucgmsim/seisfinder2/tree/master/CLI/hazard

 

 

To do

  1. hazard_search.py should generate python commands with the full path to hazard_calc.py and with PYTHONPATH sorted out. This would remove the need to copy .sl files back to the code directory, enabling us to separate the code and the working environment.
  2. HAZARD_CALC_SH in parallel_hazard_sl.template can be automatically set by split_sh.sh

  3. hazard_search.py could also copy the .sl template, machine_env.sh, etc. to the temp dir.

 

 

