
21 July 2025: This instance at RAL is read-only. Please do not try submitting new workflows for now.

Workflow 7851, Stage 1

Priority                 50
Processors               1
Wall seconds             80000
Image                    /cvmfs/singularity.opensciencegrid.org/fermilab/fnal-wn-sl7:latest
RSS bytes                4193255424 (3999 MiB)
Max distance for inputs  100.0
Enabled input RSEs       CERN_PDUNE_EOS, DUNE_CA_SFU, DUNE_CERN_EOS, DUNE_ES_PIC, DUNE_FR_CCIN2P3_DISK, DUNE_IN_TIFR, DUNE_IT_INFN_CNAF, DUNE_UK_GLASGOW, DUNE_UK_LANCASTER_CEPH, DUNE_UK_MANCHESTER_CEPH, DUNE_US_BNL_SDCC, DUNE_US_FNAL_DISK_STAGE, FNAL_DCACHE, FNAL_DCACHE_STAGING, FNAL_DCACHE_TEST, MONTECARLO, NIKHEF, PRAGUE, QMUL, RAL-PP, RAL_ECHO, SURFSARA, T3_US_NERSC
Enabled output RSEs      CERN_PDUNE_EOS, DUNE_CA_SFU, DUNE_CERN_EOS, DUNE_ES_PIC, DUNE_FR_CCIN2P3_DISK, DUNE_IN_TIFR, DUNE_IT_INFN_CNAF, DUNE_UK_GLASGOW, DUNE_UK_LANCASTER_CEPH, DUNE_UK_MANCHESTER_CEPH, DUNE_US_BNL_SDCC, DUNE_US_FNAL_DISK_STAGE, FNAL_DCACHE, FNAL_DCACHE_STAGING, FNAL_DCACHE_TEST, NIKHEF, PRAGUE, QMUL, RAL-PP, RAL_ECHO, SURFSARA, T3_US_NERSC
Enabled sites            BR_CBPF, CA_Victoria, CERN, CH_UNIBE-LHEP, CZ_FZU, ES_CIEMAT, ES_PIC, FR_CCIN2P3, IN_TIFR, IT_CNAF, NL_NIKHEF, NL_SURFsara, UK_Bristol, UK_Brunel, UK_Durham, UK_Edinburgh, UK_Lancaster, UK_Manchester, UK_Oxford, UK_QMUL, UK_RAL-PPD, UK_RAL-Tier1, UK_Sheffield, US_Caltech, US_Colorado, US_FNAL-FermiGrid, US_FNAL-T1, US_Michigan, US_MIT, US_Nebraska, US_NotreDame, US_PuertoRico, US_SU-ITS, US_Swan, US_UChicago, US_UConn-HPC, US_UCSD, US_Wisconsin
Scope                    ehn1-beam-np04
Events for this stage

Output patterns

   Destination                                          Pattern  Lifetime  For next stage  RSE expression
1  Rucio ehn1-beam-np04:calcuttj_g4beamline-w7851s1p1   *root    7776000   False
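(A lifetime of 7776000 seconds works out to 7776000 / 86400 = 90 days.)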

Environment variables

Name        Value
G4BL_DIR    /cvmfs/fifeuser2.opensciencegrid.org/sw/dune/023b964308029fd1755f5b4a2ae2fc05d5107859/
G4DATA_DIR  /cvmfs/fifeuser2.opensciencegrid.org/sw/dune/754ab2932db4c5a30549905327154f5c7a9f083c/
INPUT_DIR   /cvmfs/fifeuser1.opensciencegrid.org/sw/dune/40a9891e2a7c1f62d618824f8f8588fe2ebec5fe/
NPART       100000
PACK_DIR    /cvmfs/fifeuser1.opensciencegrid.org/sw/dune/3851f3036b8fdf366c8f0dd3fe8fea81c1d30f87/
POLARITY    -
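justIN exports these variables into the job's environment; the jobscript below reads them with ordinary shell parameter expansion, falling back to built-in defaults when a variable is unset. A minimal sketch of that pattern (illustrative only, not part of the workflow):

  # Values come from the table above when justIN runs the job; the :- fallbacks
  # only apply if a variable is missing from the environment.
  echo "Simulating ${NPART:-100} particles with polarity ${POLARITY:-+}"
  echo "g4bl installation: ${G4BL_DIR}"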

File states

Total files  Finding  Unallocated  Allocated  Outputting  Processed  Not found  Failed
10000        0        0            0          0           10000      0          0

Job states

Total  Submitted  Started  Processing  Outputting  Finished  Notused  Aborted  Stalled  Jobscript error  Outputting failed  None processed
18558  0          0        0           0           12089     0        744      559      6                0                  5970
[Plot: files processed per time bin (bin start times from Jun-23 16:00 to Jun-26 01:00), broken down by site: US_Colorado, ES_PIC, UK_Oxford, UK_Lancaster, US_FNAL-FermiGrid, UK_Durham, UK_RAL-PPD, CERN, US_UChicago, CZ_FZU, NL_SURFsara, UK_RAL-Tier1, US_FNAL-T1, UK_QMUL, US_Wisconsin, UK_Edinburgh, UK_Manchester, NL_NIKHEF, UK_Brunel, BR_CBPF]

RSEs used

Name                     Inputs  Outputs
MONTECARLO               14845   0
DUNE_UK_GLASGOW          0       1777
DUNE_US_FNAL_DISK_STAGE  0       1556
RAL_ECHO                 0       1384
PRAGUE                   0       989
DUNE_UK_MANCHESTER_CEPH  0       925
DUNE_UK_LANCASTER_CEPH   0       833
RAL-PP                   0       809
QMUL                     0       773
SURFSARA                 0       711
DUNE_CERN_EOS            0       122
NIKHEF                   0       89
DUNE_US_BNL_SDCC         0       23
DUNE_CA_SFU              0       1

Stats of processed input files are available as CSV or JSON, and stats of uploaded output files as CSV or JSON (up to 10000 files included).

File reset events, by site

Site               Allocated  Outputting
ES_PIC             576        43
UK_RAL-PPD         521        20
CZ_FZU             442        34
UK_RAL-Tier1       417        51
UK_Manchester      395        274
UK_QMUL            381        30
UK_Lancaster       323        56
US_FNAL-FermiGrid  268        54
US_NotreDame       228        0
NL_SURFsara        139        31
US_Colorado        125        9
US_UChicago        99         8
UK_Oxford          96         8
UK_Durham          86         14
CERN               46         11
UK_Edinburgh       11         2
US_Wisconsin       8          2
US_FNAL-T1         8          6
BR_CBPF            5          1
CA_Victoria        5          0
UK_Brunel          5          0
UK_Bristol         4          0
NL_NIKHEF          2          0
US_PuertoRico      1          0
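These reset counts (4191 Allocated plus 654 Outputting, 4845 in total) are consistent with the RSE table above: MONTECARLO supplied 14845 input allocations for the 10000 files ultimately processed, so 4845 allocations were reset and reissued.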

Jobscript

#!/bin/bash

SECONDS=0

## Set the polarity (the workflow environment above supplies POLARITY=-; default to "+" if unset)
export POLARITY="${POLARITY:-+}"
if [ "$POLARITY" != "+" ] && [ "$POLARITY" != "-" ]; then
  echo "ERROR: POLARITY must be + or -"
  exit 1
fi

##Set the upstream momentum
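## (the sign comes from POLARITY; the fixed magnitude 80000 is passed to g4bl below as pMomentum)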
export PMOMENTUM="${POLARITY:-+}80000"
echo "PMOMENTUM: ${PMOMENTUM}"

export BEAMLINE="${BEAMLINE:-H4}"
export INFILE=${BEAMLINE}.in
export CENTRALP="${CENTRALP:-1}" #Set the momentum going into protodune
export MOMENTUMVLE="${POLARITY}${CENTRALP}" #"3"
echo "MOMENTUMVLE: ${MOMENTUMVLE}"

##Number of POT to run
export PARTPERJOB=${NPART:-100}
export ADDPARAM="momentumVLE=$MOMENTUMVLE pMomentum=$PMOMENTUM"
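# These name=value pairs are appended to the g4bl command line so the .in file can pick them up as parameters.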
export ADDFILES=""


if [ -z "${JUSTIN_PROCESSORS}" ]; then
  JUSTIN_PROCESSORS=1
fi

echo "Justin processors: ${JUSTIN_PROCESSORS}"

export TF_NUM_THREADS=${JUSTIN_PROCESSORS}   
export OPENBLAS_NUM_THREADS=${JUSTIN_PROCESSORS} 
export JULIA_NUM_THREADS=${JUSTIN_PROCESSORS} 
export MKL_NUM_THREADS=${JUSTIN_PROCESSORS} 
export NUMEXPR_NUM_THREADS=${JUSTIN_PROCESSORS} 
export OMP_NUM_THREADS=${JUSTIN_PROCESSORS}  


## Get the next input (MC counter) file from justIN; its number is used for bookkeeping
DID_PFN_RSE=`$JUSTIN_PATH/justin-get-file`
pfn_exit=$?
if [ $pfn_exit -ne 0 ]; then
  echo "Error in justin-get-file. Exiting safely"
  exit 0
fi
echo "did_pfn_rse $DID_PFN_RSE"
pfn=`echo $DID_PFN_RSE | cut -f2 -d' '`
JOBID=$pfn
echo "JOBID: ${JOBID}"

echo $INPUT_DIR
ls $INPUT_DIR

echo $G4DATA_DIR
ls $G4DATA_DIR
#cp -rs $G4DATA_DIR ./Geant4Data/
ln -s $G4DATA_DIR/Geant4Data ./Geant4Data

echo $G4BL_DIR
ls $G4BL_DIR
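# cp -rs recreates the g4bl tree locally as symlinks, giving a writable copy (a .data pointer is written into it below) without duplicating the installation.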
cp -rs $G4BL_DIR/g4bl ./g4bl
export G4BL_DIR=$PWD/g4bl

echo $PACK_DIR
ls $PACK_DIR
for i in $PACK_DIR/*; do
  ln -s $i .
done

#Unpack all the tars -- TODO: put on cvmfs as a single tar
#echo "Unpacking g4bl"
#tar -xzf $INPUT_DIR/g4bl.tar.gz --checkpoint=1000
#if [ $? -ne 0 ]
#then
#  echo "Exiting with error"
#  exit 1
#fi
#
#echo "Unpacking Geant4Data"
#tar -xzf $INPUT_DIR/Geant4Data.tar.gz --checkpoint=1000
#if [ $? -ne 0 ]
#then
#  echo "Exiting with error"
#  exit 1
#fi
CURDIR=$(pwd)
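# Record the location of Geant4Data in g4bl's .data file, presumably so g4bl can find its datasets without trying to download them.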
echo $CURDIR/Geant4Data > g4bl/.data


#echo "Unpacking Inputfiles Pack"
#tar -xzf $INPUT_DIR/pack.tar.gz --checkpoint=1000
#if [ $? -ne 0 ]
#then
#  echo "Exiting with error"
#  exit 1
#fi

#Run
echo "running"
#$CURDIR
./g4bl/bin/g4bl $INFILE jobID=$JOBID totNumEv=$PARTPERJOB $ADDPARAM 2>&1 | tee g4bloutput.txt
g4bl_res=${PIPESTATUS[0]}  # exit status of g4bl itself, not of tee
if [ $g4bl_res -ne 0 ]
then
  echo "Failed running g4bl"
  exit $g4bl_res
fi
echo "ran"

#Clean up
unlink Geant4Data
rm -rf g4bl
for i in *.in *.map; do
  unlink ${i}
done



#Add timestamp to the output
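# (the timestamp plus the input PFN makes each .root name unique while still matching the *root output pattern)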
now=$(date -u +"%Y%m%dT%H%M%SZ")
oldname=`ls ${BEAMLINE}*.root`
newname=`echo ${oldname} | sed -e "s/.root/_${now}_${pfn}.root/"`
mv ${oldname} ${newname}
if [ $? -ne 0 ]
then
  echo "Failed renaming ${oldname} ${newname}"
  exit 1
fi


if [ $POLARITY != "+" ]; then
  polar_str="neg"
else
  polar_str="pos"
fi

subrun=`echo $JUSTIN_JOBSUB_ID  | cut -f1 -d@ | cut -f2 -d.`
run=`echo $JUSTIN_JOBSUB_ID  | cut -f1 -d@ | cut -f1 -d.`
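# (JUSTIN_JOBSUB_ID has the form <cluster>.<process>@<schedd>, so run/subrun above are the batch cluster and process numbers)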

python $INPUT_DIR/make_g4bl_metadata.py \
  --bl "${BEAMLINE}" --polarity $polar_str --momentum $CENTRALP \
  --run $run --subrun $subrun --name $newname \
  --namespace ${JUSTIN_SCOPE:-dummy}

if [ $? -ne 0 ]
then
  echo "Exiting with error"
  exit 1
else
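  # Listing the PFN here tells justIN that this input file was processed successfully; otherwise it would be reset and retried.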
  echo "$pfn" > justin-processed-pfns.txt
fi

#errorsSaving=$((`cat g4bloutput.txt | grep "Error in <T" | wc -l`))
#if [ $errorsSaving -ne 0 ]
#then
#  echo "Exiting with error"
#  exit 1
#fi
#
#echo "RUNTIME: $SECONDS seconds elapsed."
justIN time: 2025-08-14 16:31:54 UTC       justIN version: 01.03.02