Jobsub ID 231351.138@justin-prod-sched02.dune.hep.ac.uk
Jobsub ID          | 231351.138@justin-prod-sched02.dune.hep.ac.uk
Workflow ID        | 7991
Stage ID           | 1
User name          | avizcaya@fnal.gov
HTCondor Group     | group_dune.prod_mcsim

Requested
  Processors         | 1
  GPU                | No
  RSS bytes          | 4193255424 (3999 MiB)
  Wall seconds limit | 80000 (22 hours)

Submitted time     | 2025-06-27 14:10:23
Site               | ES_PIC
Entry              | DUNE_T1_ES_PIC_ce15-multicore
Last heartbeat     | 2025-06-27 14:13:33

From worker node
  Hostname           | tds434.pic.es
  cpuinfo            | AMD EPYC 7502 32-Core Processor
  OS release         | Scientific Linux release 7.9 (Nitrogen)
  Processors         | 1
  RSS bytes          | 4193255424 (3999 MiB)
  Wall seconds limit | 216000 (60 hours)
  GPU                |
  Inner Apptainer?   | True

Job state          | jobscript_error
Allocator name     | justin-allocator-pro.dune.hep.ac.uk
Started            | 2025-06-27 14:12:24
Input files        | monte-carlo-007991-000078

Jobscript
  Exit code          | 1
  Real time          | 0m (0s)
  CPU time           | 0m (0s = 0%)
  Max RSS bytes      | 0 (0 MiB)

Outputting started |
Output files       |
Finished           | 2025-06-27 14:13:33
Saved logs         | justin-logs:231351.138-justin-prod-sched02.dune.hep.ac.uk.logs.tgz
Jobscript log (last 10,000 characters)
Setting up larsoft UPS area... /cvmfs/larsoft.opensciencegrid.org
Setting up DUNE UPS area... /cvmfs/dune.opensciencegrid.org/products/dune/
Justin processors: 1
did_pfn_rse monte-carlo-007991-000078 000078 MONTECARLO 138 231351
usage: hadd [-a A] [-k K] [-T T] [-O O] [-v V] [-j J] [-dbg DBG] [-d D] [-n N]
            [-cachesize CACHESIZE]
            [-experimental-io-features EXPERIMENTAL_IO_FEATURES]
            [-f F] [-fk FK] [-ff FF] [-f0 F0] [-f6 F6]
            TARGET SOURCES
OPTIONS:
  -a                         Append to the output
  -k                         Skip corrupt or non-existent files, do not exit
  -T                         Do not merge Trees
  -O                         Re-optimize basket size when merging TTree
  -v                         Explicitly set the verbosity level: 0 request no output, 99 is the default
  -j                         Parallelize the execution in multiple processes
  -dbg                       Parallelize the execution in multiple processes in debug mode (Does not delete partial files stored inside working directory)
  -d                         Carry out the partial multiprocess execution in the specified directory
  -n                         Open at most 'maxopenedfiles' at once (use 0 to request to use the system maximum)
  -cachesize                 Resize the prefetching cache use to speed up I/O operations (use 0 to disable)
  -experimental-io-features  Used with an argument provided, enables the corresponding experimental feature for output trees
  -f                         Gives the ability to specify the compression level of the target file (by default 4)
  -fk                        Sets the target file to contain the baskets with the same compression as the input files (unless -O is specified). Compresses the meta data using the compression level specified in the first input or the compression setting after fk (for example 206 when using -fk206)
  -ff                        The compression level use is the one specified in the first input
  -f0                        Do not compress the target file
  -f6                        Use compression level 6. (See TFile::SetCompressionSettings for the support range of value.)
  TARGET                     Target file
  SOURCES                    Source files
Querying ehn1-beam-np04:avizcaya_g4bl_mom5-w7584s1p1 for 100 files
Query: files from ehn1-beam-np04:avizcaya_g4bl_mom5-w7584s1p1 where dune.output_status=confirmed ordered skip 7700 limit 100
Getting names and metadata done
{'core.runs': [231351], 'core.runs_subruns': [23135100138]}
Getting paths from rucio
Got 0 paths from 0 files
['hadd', '']
Traceback (most recent call last):
  File "/cvmfs/fifeuser3.opensciencegrid.org/sw/dune/7bf312a0a99f42cae01e4f8cfdd3c3bdaaedc832/merge_g4bl.py", line 433, in <module>
    do_merge(args)
  File "/cvmfs/fifeuser3.opensciencegrid.org/sw/dune/7bf312a0a99f42cae01e4f8cfdd3c3bdaaedc832/merge_g4bl.py", line 119, in do_merge
    raise Exception('Error in hadd')
Exception: Error in hadd
Exiting with error
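The failure path is visible in the log itself: the metadata query matched zero files ("Got 0 paths from 0 files"), so the merge step assembled an hadd command with no sources (['hadd', '']); hadd then printed its usage text and exited non-zero, and do_merge raised the generic "Error in hadd". Below is a minimal sketch of that step with a guard that fails on the empty path list before invoking hadd. This is an illustration only: the function and variable names are hypothetical and do not come from merge_g4bl.py.

    import subprocess
    import sys

    def merge_with_hadd(target, sources):
        """Merge ROOT files with hadd, failing early if there is nothing to merge.

        Hypothetical sketch; 'target' and 'sources' are not the actual
        names used in merge_g4bl.py.
        """
        # hadd invoked with no SOURCES prints its usage text and exits
        # non-zero, which is exactly what this jobscript log shows.
        if not sources:
            sys.exit("Got 0 paths from rucio; nothing to merge")

        cmd = ["hadd", "-f", target] + list(sources)
        result = subprocess.run(cmd)
        if result.returncode != 0:
            raise Exception("Error in hadd")  # message seen in the traceback

With a guard like this, the job would fail with a message pointing at the empty rucio lookup rather than at hadd's usage dump.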