21 July 2025: This instance at RAL is read-only. Please do not try submitting new workflows for now.
Jobsub ID 201139.109@justin-prod-sched02.dune.hep.ac.uk
Jobsub ID:            201139.109@justin-prod-sched02.dune.hep.ac.uk
Workflow ID:          6735
Stage ID:             1
User name:            calcuttj@fnal.gov
HTCondor Group:       group_dune.prod_mcsim

Requested:
  Processors:         1
  GPU:                No
  RSS bytes:          4193255424 (3999 MiB)
  Wall seconds limit: 80000 (22 hours)

Submitted time:       2025-05-08 19:26:11
Site:                 UK_Manchester
Entry:                UBoone_T2_UK_Manchester_ce01
Last heartbeat:       2025-05-08 19:28:40

From worker node:
  Hostname:           wn1910251.tier2.hep.manchester.ac.uk
  cpuinfo:            Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz
  OS release:         Scientific Linux release 7.9 (Nitrogen)
  Processors:         1
  RSS bytes:          4193255424 (3999 MiB)
  Wall seconds limit: 257400 (71 hours)
  GPU:
  Inner Apptainer?:   True

Job state:            jobscript_error
Allocator name:       justin-allocator-pro.dune.hep.ac.uk
Started:              2025-05-08 19:27:58
Input files:          monte-carlo-006735-000023

Jobscript:
  Exit code:          1
  Real time:          0m (0s)
  CPU time:           0m (0s = 0%)
  Max RSS bytes:      0 (0 MiB)

Outputting started:
Output files:
Finished:             2025-05-08 19:28:40
Saved logs:           justin-logs:201139.109-justin-prod-sched02.dune.hep.ac.uk.logs.tgz
Jobscript log (last 10,000 characters)
Setting up larsoft UPS area... /cvmfs/larsoft.opensciencegrid.org
Setting up DUNE UPS area... /cvmfs/dune.opensciencegrid.org/products/dune/
Justin processors: 1
did_pfn_rse monte-carlo-006735-000023 000023 MONTECARLO 109 201139

usage: hadd [-a A] [-k K] [-T T] [-O O] [-v V] [-j J] [-dbg DBG] [-d D] [-n N]
            [-cachesize CACHESIZE]
            [-experimental-io-features EXPERIMENTAL_IO_FEATURES]
            [-f F] [-fk FK] [-ff FF] [-f0 F0] [-f6 F6]
            TARGET SOURCES

OPTIONS:
  -a           Append to the output
  -k           Skip corrupt or non-existent files, do not exit
  -T           Do not merge Trees
  -O           Re-optimize basket size when merging TTree
  -v           Explicitly set the verbosity level: 0 request no output, 99 is the default
  -j           Parallelize the execution in multiple processes
  -dbg         Parallelize the execution in multiple processes in debug mode (Does not delete partial files stored inside working directory)
  -d           Carry out the partial multiprocess execution in the specified directory
  -n           Open at most 'maxopenedfiles' at once (use 0 to request to use the system maximum)
  -cachesize   Resize the prefetching cache use to speed up I/O operations (use 0 to disable)
  -experimental-io-features  Used with an argument provided, enables the corresponding experimental feature for output trees
  -f           Gives the ability to specify the compression level of the target file (by default 4)
  -fk          Sets the target file to contain the baskets with the same compression as the input files (unless -O is specified). Compresses the meta data using the compression level specified in the first input or the compression setting after fk (for example 206 when using -fk206)
  -ff          The compression level use is the one specified in the first input
  -f0          Do not compress the target file
  -f6          Use compression level 6. (See TFile::SetCompressionSettings for the support range of value.)
  TARGET       Target file
  SOURCES      Source files

Querying usertests:calcuttj_ehn1_np04_6305_merged-w6717s1p1 for 10 files
Query: files from usertests:calcuttj_ehn1_np04_6305_merged-w6717s1p1 where dune.output_status=confirmed ordered skip 220 limit 10
Getting names and metadata done
{'core.runs': [201139], 'core.runs_subruns': [20113900109]}
Getting paths from rucio
Got 0 paths from 0 files
['hadd', '']
Traceback (most recent call last):
  File "/cvmfs/fifeuser3.opensciencegrid.org/sw/dune/4e9b42dda8c1cbee7b07e2de7059f47384a3867b/merge_g4bl.py", line 259, in <module>
    do_merge(args)
  File "/cvmfs/fifeuser3.opensciencegrid.org/sw/dune/4e9b42dda8c1cbee7b07e2de7059f47384a3867b/merge_g4bl.py", line 111, in do_merge
    raise Exception('Error in hadd')
Exception: Error in hadd
Exiting with error
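The log shows the actual failure mode: the Rucio path lookup returned 0 paths from 0 files, so merge_g4bl.py invoked hadd with an empty SOURCES list (the `['hadd', '']` command line), and hadd responded by printing its usage text and exiting non-zero, which surfaced only as the generic "Error in hadd". A minimal sketch of guarding such a call, assuming a hypothetical `run_hadd` helper (the real logic in merge_g4bl.py is not shown in this log):

```python
import subprocess


def run_hadd(target, sources):
    """Merge ROOT files with hadd, failing early if there is nothing to merge.

    Invoking hadd with an empty SOURCES list makes it print its usage text
    and exit non-zero, which a caller sees only as a generic failure.
    Checking the input list first produces a clearer error message that
    points back at the upstream metadata/Rucio query.
    """
    if not sources:
        # This is the condition behind "Got 0 paths from 0 files" above.
        raise ValueError(
            "no input files to merge -- did the metadata/Rucio query "
            "return 0 paths?"
        )
    cmd = ["hadd", "-f", target] + list(sources)
    proc = subprocess.run(cmd)
    if proc.returncode != 0:
        raise RuntimeError(
            "Error in hadd (exit code %d)" % proc.returncode
        )
```

With this guard, a job whose dataset query comes back empty fails with a message naming the empty input list instead of a bare `Exception: Error in hadd`.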