

Jobsub ID 231351.130@justin-prod-sched02.dune.hep.ac.uk

Jobsub ID: 231351.130@justin-prod-sched02.dune.hep.ac.uk
Workflow ID: 7991
Stage ID: 1
User name: avizcaya@fnal.gov
HTCondor Group: group_dune.prod_mcsim

Requested:
  Processors: 1
  GPU: No
  RSS bytes: 4193255424 (3999 MiB)
  Wall seconds limit: 80000 (22 hours)

Submitted time: 2025-06-27 14:10:23
Site: UK_Bristol
Entry: CMSHTPC_T2_UK_SGrid_Bristol_lcgce02
Last heartbeat: 2025-06-27 14:13:34

From worker node:
  Hostname: hd67.dice.priv
  cpuinfo: AMD EPYC 7551P 32-Core Processor
  OS release: Scientific Linux release 7.9 (Nitrogen)
  Processors: 1
  RSS bytes: 4193255424 (3999 MiB)
  Wall seconds limit: 259200 (72 hours)
  GPU:
  Inner Apptainer?: True

Job state: jobscript_error
Allocator name: justin-allocator-pro.dune.hep.ac.uk
Started: 2025-06-27 14:12:18
Input files: monte-carlo-007991-000067

Jobscript:
  Exit code: 1
  Real time: 0m (0s)
  CPU time: 0m (0s = 0%)
  Max RSS bytes: 0 (0 MiB)
  Outputting started:
  Output files:

Finished: 2025-06-27 14:13:34
Saved logs: justin-logs:231351.130-justin-prod-sched02.dune.hep.ac.uk.logs.tgz

Jobscript log (last 10,000 characters)

Setting up larsoft UPS area... /cvmfs/larsoft.opensciencegrid.org
Setting up DUNE UPS area... /cvmfs/dune.opensciencegrid.org/products/dune/
Justin processors: 1
did_pfn_rse monte-carlo-007991-000067 000067 MONTECARLO
130 231351

usage: hadd [-a A] [-k K] [-T T] [-O O] [-v V] [-j J] [-dbg DBG] [-d D] [-n N]
            [-cachesize CACHESIZE]
            [-experimental-io-features EXPERIMENTAL_IO_FEATURES] [-f F]
            [-fk FK] [-ff FF] [-f0 F0] [-f6 F6]
            TARGET SOURCES

OPTIONS:
  -a                                   Append to the output
  -k                                   Skip corrupt or non-existent files, do not exit
  -T                                   Do not merge Trees
  -O                                   Re-optimize basket size when merging TTree
  -v                                   Explicitly set the verbosity level: 0 request no output, 99 is the default
  -j                                   Parallelize the execution in multiple processes
  -dbg                                 Parallelize the execution in multiple processes in debug mode (Does not delete partial files stored inside working directory)
  -d                                   Carry out the partial multiprocess execution in the specified directory
  -n                                   Open at most 'maxopenedfiles' at once (use 0 to request to use the system maximum)
  -cachesize                           Resize the prefetching cache use to speed up I/O operations(use 0 to disable)
  -experimental-io-features            Used with an argument provided, enables the corresponding experimental feature for output trees
  -f                                   Gives the ability to specify the compression level of the target file(by default 4) 
  -fk                                  Sets the target file to contain the baskets with the same compression
                                       as the input files (unless -O is specified). Compresses the meta data
                                       using the compression level specified in the first input or the
                                       compression setting after fk (for example 206 when using -fk206)
  -ff                                  The compression level use is the one specified in the first input
  -f0                                  Do not compress the target file
  -f6                                  Use compression level 6. (See TFile::SetCompressionSettings for the support range of value.)
  TARGET                               Target file
  SOURCES                              Source files
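
The usage text above was printed because hadd received no valid arguments: as the log shows below, the jobscript built the command as ['hadd', '']. For reference, a normal invocation names a target file followed by the source files to merge (file names here are hypothetical):

  hadd -f merged.root input1.root input2.root

The -f flag forces recreation of merged.root if it already exists.
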
Querying ehn1-beam-np04:avizcaya_g4bl_mom5-w7584s1p1 for 100 files
Query: files from ehn1-beam-np04:avizcaya_g4bl_mom5-w7584s1p1 where dune.output_status=confirmed ordered skip 6600 limit 100
Getting names and metadata
done
{'core.runs': [231351], 'core.runs_subruns': [23135100130]}
Getting paths from rucio
Got 0 paths from 0 files
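
The merge step slices the dataset with a MetaCat query and then asks Rucio for replica paths; here the window "skip 6600 limit 100" evidently ran past the end of the confirmed files, so zero files, and therefore zero paths, came back. A minimal sketch of the query step, assuming the standard metacat Python client (the server URL is a placeholder, and the exact query signature may differ between client versions):

  from metacat.webapi import MetaCatClient

  # Placeholder URL; a real jobscript would take this from its environment.
  client = MetaCatClient("https://metacat.example.org/app")

  mql = ("files from ehn1-beam-np04:avizcaya_g4bl_mom5-w7584s1p1 "
         "where dune.output_status=confirmed ordered skip 6600 limit 100")
  files = list(client.query(mql, with_metadata=True))
  if not files:
      # The condition this job hit: an empty slice should stop the merge
      # before hadd is ever invoked.
      raise SystemExit("MetaCat returned no files for this slice")
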
['hadd', '']
Traceback (most recent call last):
  File "/cvmfs/fifeuser3.opensciencegrid.org/sw/dune/7bf312a0a99f42cae01e4f8cfdd3c3bdaaedc832/merge_g4bl.py", line 433, in <module>
    do_merge(args)
  File "/cvmfs/fifeuser3.opensciencegrid.org/sw/dune/7bf312a0a99f42cae01e4f8cfdd3c3bdaaedc832/merge_g4bl.py", line 119, in do_merge
    raise Exception('Error in hadd')
Exception: Error in hadd
Exiting with error
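
The generic "Error in hadd" raised at line 119 of merge_g4bl.py hides the actual cause: the source list was empty. Below is a sketch of a guard that would fail earlier with a clearer message; the function name and messages are illustrative, not merge_g4bl.py's actual code:

  import subprocess
  import sys

  def run_hadd(target, sources):
      # Drop empty strings like the '' seen in the logged command list.
      sources = [s for s in sources if s]
      if not sources:
          sys.exit("run_hadd: no input files to merge (empty path list from Rucio)")
      # Run hadd and surface its exit status instead of a bare Exception.
      result = subprocess.run(["hadd", "-f", target] + sources)
      if result.returncode != 0:
          sys.exit(f"run_hadd: hadd exited with status {result.returncode}")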