Producing branches
Overview
Code in directory: branchesProducer
This code produces new branches from an input ROOT file and saves them into a different output ROOT file.
The two files can be merged using the tool to merge NTuples.
The branches to produce must be defined in a script containing a function called `process`. The `-p` flag should point to this script (without the `.py` extension). An example can be found here.
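Below is a minimal, hypothetical sketch of such a module. The exact interface expected by `produceBranches.py` (in particular the arguments passed to `process` and the form of its return value) is not documented on this page, so the signature, the module name `my_branches.py`, and the `HT` branch are illustrative assumptions only.

```python
# my_branches.py -- hypothetical sketch; the actual signature expected by
# produceBranches.py may differ.
import awkward as ak


def process(events):
    """Compute new branches for one chunk of events.

    Assumes `events` is a NanoAOD-like awkward array and that the caller
    writes the returned dictionary of arrays to the output ROOT file.
    """
    branches = {}
    # Illustrative derived quantity: scalar sum of jet pT per event (HT).
    branches["HT"] = ak.sum(events.Jet.pt, axis=1)
    return branches
```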
The `CutFlow` TTree will be initialized automatically if it does not exist in the input ROOT file. If the `-xsec` flag is provided, a `GenCrossSection` branch will be added to the `Metadata` TTree (which will be created if it does not already exist).
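As an example, after running with `-xsec` the resulting `Metadata` and `CutFlow` TTrees can be inspected with uproot. The tree and branch names below come from this page; `output.root` is a placeholder for the file given with `-o`.

```python
import uproot

# "output.root" is a placeholder for the file passed via -o.
with uproot.open("output.root") as f:
    # GenCrossSection is added to the Metadata TTree when -xsec is given.
    metadata = f["Metadata"].arrays()
    print(metadata["GenCrossSection"])
    # The CutFlow TTree is initialized automatically if absent from the input file.
    print(f["CutFlow"].keys())
```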
Usage
Usage as printed by `python produceBranches.py -h`:

```
usage: produceBranches.py [-h] -i INPUT_FILE_NAME -o OUTPUT_FILE_NAME [-c CHUNK_SIZE] [-m MAX_CHUNKS]
[-n N_WORKERS] [--skip_bad_files] [-e {iterative,futures,dask/condor,dask/slurm}]
[-voms VOMS] -p PROCESS_MODULE_NAME [-xsec GEN_CROSS_SECTION] [-r] [-debug]
optional arguments:
-h, --help show this help message and exit
-i INPUT_FILE_NAME, --input_file_name INPUT_FILE_NAME
Input ROOT file name
-o OUTPUT_FILE_NAME, --output_file_name OUTPUT_FILE_NAME
Output ROOT file name
-c CHUNK_SIZE, --chunk_size CHUNK_SIZE
Size of the data chunks (default=100000)
-m MAX_CHUNKS, --max_chunks MAX_CHUNKS
Maximum number of chunks to process, no flag means no maximum
-n N_WORKERS, --n_workers N_WORKERS
Number of worker nodes (default=4)
--skip_bad_files Skip bad files
-e {iterative,futures,dask/condor,dask/slurm}, --executor_name {iterative,futures,dask/condor,dask/slurm}
The type of executor to use (default=futures)
-voms VOMS, --voms VOMS
Path to voms proxy, accessible to worker nodes
-p PROCESS_MODULE_NAME, --process_module_name PROCESS_MODULE_NAME
Script with a process function defining the coffea process (without .py)
-xsec GEN_CROSS_SECTION, --gen_cross_section GEN_CROSS_SECTION
Sample cross section (in pb) before pre-selection
-r, --raw_events Use raw events instead of gen weights in cut flow table
```
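For reference, a typical invocation might look like the following. The file names and the process module name `my_branches` are placeholders; the flag values (cross section, executor, number of workers) are illustrative only.

```
python produceBranches.py -i input.root -o new_branches.root -p my_branches -xsec 0.8 -e futures -n 4
```

The resulting file, which contains only the new branches, can then be merged with the input file using the NTuple-merging tool mentioned above.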