

Workflow 7585, Stage 1

Priority: 50
Processors: 1
Wall seconds: 50000
Image: /cvmfs/singularity.opensciencegrid.org/fermilab/fnal-wn-sl7:latest
RSS bytes: 2096103424 (1999 MiB)
Max distance for inputs: 100.0
Enabled input RSEs: CERN_PDUNE_EOS, DUNE_CA_SFU, DUNE_CERN_EOS, DUNE_ES_PIC, DUNE_FR_CCIN2P3_DISK, DUNE_IN_TIFR, DUNE_IT_INFN_CNAF, DUNE_UK_GLASGOW, DUNE_UK_LANCASTER_CEPH, DUNE_UK_MANCHESTER_CEPH, DUNE_US_BNL_SDCC, DUNE_US_FNAL_DISK_STAGE, FNAL_DCACHE, FNAL_DCACHE_STAGING, FNAL_DCACHE_TEST, MONTECARLO, NIKHEF, PRAGUE, QMUL, RAL-PP, RAL_ECHO, SURFSARA, T3_US_NERSC
Enabled output RSEs: CERN_PDUNE_EOS, DUNE_CA_SFU, DUNE_CERN_EOS, DUNE_ES_PIC, DUNE_FR_CCIN2P3_DISK, DUNE_IN_TIFR, DUNE_IT_INFN_CNAF, DUNE_UK_GLASGOW, DUNE_UK_LANCASTER_CEPH, DUNE_UK_MANCHESTER_CEPH, DUNE_US_BNL_SDCC, DUNE_US_FNAL_DISK_STAGE, FNAL_DCACHE, FNAL_DCACHE_STAGING, FNAL_DCACHE_TEST, NIKHEF, PRAGUE, QMUL, RAL-PP, RAL_ECHO, SURFSARA, T3_US_NERSC
Enabled sites: BR_CBPF, CA_SFU, CA_Victoria, CERN, CH_UNIBE-LHEP, ES_CIEMAT, ES_PIC, FR_CCIN2P3, IN_TIFR, IT_CNAF, NL_SURFsara, UK_Bristol, UK_Brunel, UK_Durham, UK_Edinburgh, UK_Lancaster, UK_Manchester, UK_Oxford, UK_QMUL, UK_RAL-PPD, UK_RAL-Tier1, UK_Sheffield, US_Caltech, US_Colorado, US_FNAL-FermiGrid, US_FNAL-T1, US_Michigan, US_MIT, US_Nebraska, US_NotreDame, US_PuertoRico, US_SU-ITS, US_Swan, US_UChicago, US_UConn-HPC, US_UCSD, US_Wisconsin
Scope: ehn1-beam-np04
Events for this stage

Output patterns

   Destination                                         Pattern  Lifetime           For next stage  RSE expression
 1 Rucio ehn1-beam-np04:avizcaya_g4bl_mom5-w7585s1p1   *root    2592000 (30 days)  False

Environment variables

Name        Value
CENTRALP    5
G4BL_DIR    /cvmfs/fifeuser4.opensciencegrid.org/sw/dune/023b964308029fd1755f5b4a2ae2fc05d5107859/
G4DATA_DIR  /cvmfs/fifeuser4.opensciencegrid.org/sw/dune/754ab2932db4c5a30549905327154f5c7a9f083c/
INPUT_DIR   /cvmfs/fifeuser2.opensciencegrid.org/sw/dune/40a9891e2a7c1f62d618824f8f8588fe2ebec5fe/
NPART       100000
PACK_DIR    /cvmfs/fifeuser2.opensciencegrid.org/sw/dune/3851f3036b8fdf366c8f0dd3fe8fea81c1d30f87/

File states

Total files  Finding  Unallocated  Allocated  Outputting  Processed  Not found  Failed
10000        0        0            0          0           5455       0          4545

Job states

Total  Submitted  Started  Processing  Outputting  Finished  Notused  Aborted  Stalled  Jobscript error  Outputting failed  None processed
52886  0          0        0           0           12805     0        23782    4021     0                0                  155
[Plot: Files processed, number per bin vs. bin start time (Jun-12 18:00 to Jun-15 03:00), broken down by site: US_FNAL-FermiGrid, ES_PIC, US_Colorado, FR_CCIN2P3, US_Swan, NL_SURFsara, UK_RAL-Tier1, UK_RAL-PPD, IT_CNAF, CERN, US_UCSD, UK_QMUL, UK_Durham, US_UChicago, US_Wisconsin, UK_Oxford, UK_Manchester, UK_Sheffield, UK_Lancaster, BR_CBPF, CH_UNIBE-LHEP, UK_Edinburgh, UK_Bristol, ES_CIEMAT, CA_SFU, US_SU-ITS]

RSEs used

Name                     Inputs  Outputs
MONTECARLO               37065   0
RAL_ECHO                 0       1759
DUNE_UK_GLASGOW          0       1024
DUNE_US_FNAL_DISK_STAGE  0       867
SURFSARA                 0       653
DUNE_CERN_EOS            0       284
DUNE_FR_CCIN2P3_DISK     0       282
DUNE_UK_MANCHESTER_CEPH  0       199
RAL-PP                   0       195
QMUL                     0       125
DUNE_UK_LANCASTER_CEPH   0       35
DUNE_CA_SFU              0       16
NIKHEF                   0       10
PRAGUE                   0       2
DUNE_US_BNL_SDCC         0       1

Stats of processed input files as CSV or JSON, and of uploaded output files as CSV or JSON (up to 10000 files included)

File reset events, by site

Site               Allocated  Outputting
UK_RAL-Tier1       11348      1379
ES_PIC             1803       113
UK_Durham          1459       152
UK_RAL-PPD         1074       53
UK_Manchester      1062       54
NL_SURFsara        781        165
FR_CCIN2P3         771        139
CERN               704        22
UK_QMUL            656        30
US_UChicago        551        49
CH_UNIBE-LHEP      480        0
US_Wisconsin       462        58
US_FNAL-FermiGrid  424        38
US_Swan            413        6
UK_Sheffield       372        18
IT_CNAF            361        28
US_SU-ITS          326        1
US_NotreDame       305        0
UK_Oxford          276        23
US_Colorado        192        0
ES_CIEMAT          181        15
US_UCSD            153        21
BR_CBPF            153        1
UK_Bristol         152        5
UK_Lancaster       94         25
CA_SFU             41         1
US_FNAL-T1         34         0
US_PuertoRico      25         0
UK_Brunel          8          0
UK_Edinburgh       6          2

Jobscript

#!/bin/bash

SECONDS=0
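# SECONDS is bash's built-in elapsed-time counter; resetting it here lets the
# (currently commented-out) RUNTIME report at the end show the job duration.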

## Set the polarity
export POLARITY="${POLARITY:-+}"
if [ "${POLARITY}" != "+" ] && [ "${POLARITY}" != "-" ]; then
  echo "ERROR MUST SUPPLY + OR - TO POLARITY"
  exit 1
fi

##Set the upstream momentum
export PMOMENTUM="${POLARITY:-+}80000"
echo "PMOMENTUM: ${PMOMENTUM}"

export BEAMLINE="${BEAMLINE:-H4}"
export INFILE=${BEAMLINE}.in
export CENTRALP="${CENTRALP:-1}" #Set the momentum going into protodune
export MOMENTUMVLE="${POLARITY}${CENTRALP}" #"3"
echo "MOMENTUMVLE: ${MOMENTUMVLE}"

##Number of POT to run
export PARTPERJOB=${NPART:-100}
export ADDPARAM="momentumVLE=$MOMENTUMVLE pMomentum=$PMOMENTUM"
export ADDFILES=""


if [ -z "${JUSTIN_PROCESSORS}" ]; then
  JUSTIN_PROCESSORS=1
fi

echo "Justin processors: ${JUSTIN_PROCESSORS}"

export TF_NUM_THREADS=${JUSTIN_PROCESSORS}   
export OPENBLAS_NUM_THREADS=${JUSTIN_PROCESSORS} 
export JULIA_NUM_THREADS=${JUSTIN_PROCESSORS} 
export MKL_NUM_THREADS=${JUSTIN_PROCESSORS} 
export NUMEXPR_NUM_THREADS=${JUSTIN_PROCESSORS} 
export OMP_NUM_THREADS=${JUSTIN_PROCESSORS}  
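# The *_NUM_THREADS settings above cap the thread pools of common numeric
# libraries (OpenMP, OpenBLAS, MKL, NumExpr, Julia, TensorFlow) at the number
# of processors justIN allocated, so the job does not oversubscribe its slot.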


##Get the MC file number from justin-get-file, used for justIN bookkeeping
DID_PFN_RSE=`$JUSTIN_PATH/justin-get-file`
pfn_exit=$?
if [ $pfn_exit -ne 0 ]; then
  echo "Error in justin-get-file. Exiting safely"
  exit 0
fi
echo "did_pfn_rse $DID_PFN_RSE"
pfn=`echo $DID_PFN_RSE | cut -f2 -d' '`
JOBID=$pfn
echo "JOBID: ${JOBID}"

echo $INPUT_DIR
ls $INPUT_DIR

echo $G4DATA_DIR
ls $G4DATA_DIR
#cp -rs $G4DATA_DIR ./Geant4Data/
ln -s $G4DATA_DIR/Geant4Data ./Geant4Data

echo $G4BL_DIR
ls $G4BL_DIR
cp -rs $G4BL_DIR/g4bl ./g4bl
export G4BL_DIR=$PWD/g4bl
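# cp -rs makes a local tree of symlinks to the read-only CVMFS g4bl install,
# so files such as g4bl/.data (written below) can be added alongside it.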

echo $PACK_DIR
ls $PACK_DIR
for i in $PACK_DIR/*; do
  ln -s $i .
done
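# The symlinked PACK_DIR contents provide the remaining run-time inputs,
# including the ${BEAMLINE}.in deck and the *.map files that are unlinked
# again in the cleanup below.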

#Unpack all the tars -- TODO: put on cvmfs as a single tar
#echo "Unpacking g4bl"
#tar -xzf $INPUT_DIR/g4bl.tar.gz --checkpoint=1000
#if [ $? -ne 0 ]
#then
#  echo "Exiting with error"
#  exit 1
#fi
#
#echo "Unpacking Geant4Data"
#tar -xzf $INPUT_DIR/Geant4Data.tar.gz --checkpoint=1000
#if [ $? -ne 0 ]
#then
#  echo "Exiting with error"
#  exit 1
#fi
CURDIR=$(pwd)
echo $CURDIR/Geant4Data > g4bl/.data
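# The .data file written above is expected to tell g4bl where to find its
# Geant4 data sets, pointing it at the Geant4Data tree symlinked from CVMFS.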


#echo "Unpacking Inputfiles Pack"
#tar -xzf $INPUT_DIR/pack.tar.gz --checkpoint=1000
#if [ $? -ne 0 ]
#then
#  echo "Exiting with error"
#  exit 1
#fi

#Run
echo "running"
#$CURDIR
./g4bl/bin/g4bl $INFILE jobID=$JOBID totNumEv=$PARTPERJOB $ADDPARAM 2>&1 | tee g4bloutput.txt
# $? after the pipeline is tee's exit status, not g4bl's; take g4bl's from PIPESTATUS
g4bl_res=${PIPESTATUS[0]}
if [ $g4bl_res -ne 0 ]
then
  echo "Failed running g4bl"
  exit $g4bl_res
fi
echo "ran"

#Clean up
unlink Geant4Data
rm -rf g4bl
for i in *.in *.map; do
  unlink ${i}
done



#Add timestamp to the output
now=$(date -u +"%Y%m%dT%H%M%SZ")
oldname=`ls ${BEAMLINE}*.root`
newname=`echo ${oldname} | sed -e "s/.root/_${now}_${pfn}.root/"`
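# Example (illustrative values): H4<...>.root -> H4<...>_20250613T120000Z_<pfn>.root,
# so the UTC timestamp plus the PFN keeps output file names unique across jobs.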
mv ${oldname} ${newname}
if [ $? -ne 0 ]
then
  echo "Failed renaming ${oldname} ${newname}"
  exit 1
fi


if [ $POLARITY != "+" ]; then
  polar_str="neg"
else
  polar_str="pos"
fi

subrun=`echo $JUSTIN_JOBSUB_ID  | cut -f1 -d@ | cut -f2 -d.`
run=`echo $JUSTIN_JOBSUB_ID  | cut -f1 -d@ | cut -f1 -d.`
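# JUSTIN_JOBSUB_ID has the HTCondor form "<cluster>.<process>@<schedd>", e.g.
# (made-up value) 12345678.0@sched01.example, giving run=<cluster> and subrun=<process>.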

python $INPUT_DIR/make_g4bl_metadata.py \
  --bl "${BEAMLINE}" --polarity $polar_str --momentum $CENTRALP \
  --run $run --subrun $subrun --name $newname \
  --namespace ${JUSTIN_SCOPE:-dummy}

if [ $? -ne 0 ]
then
  echo "Exiting with error"
  exit 1
else
  echo "$pfn" > justin-processed-pfns.txt
fi
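# Listing the PFN in justin-processed-pfns.txt is what tells justIN the input
# was fully processed; otherwise the file is reset and handed to another job.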

#errorsSaving=$((`cat g4bloutput.txt | grep "Error in <T" | wc -l`))
#if [ $errorsSaving -ne 0 ]
#then
#  echo "Exiting with error"
#  exit 1
#fi
#
#echo "RUNTIME: $SECONDS seconds elapsed."