# Warnings
## Running in parallel
When running coffea in parallel, the chunks can be merged in a different order, depending on the speed of the individual cores. For this reason, coffea should not be run in parallel when producing new branches: when sending jobs or running locally to make new branches, always use `-e iterative`.

Parallel running can safely be used for the conversion to PFNanoAOD and for the pre-selection, but keep in mind that the events might then be ordered differently than in the input file.
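A minimal sketch of why the executor choice matters, assuming coffea 0.7-style executors and an argparse-based run script (the flag handling and names in the actual analysis scripts may differ):

```python
import argparse
from coffea import processor
from coffea.nanoevents import NanoAODSchema

# Hypothetical mapping of the "-e" flag to a coffea executor.
parser = argparse.ArgumentParser()
parser.add_argument("-e", "--executor", choices=["iterative", "futures"],
                    default="iterative")
args = parser.parse_args()

if args.executor == "iterative":
    # Chunks are processed one after another, so the output keeps the input event order.
    executor = processor.iterative_executor
    executor_args = {"schema": NanoAODSchema}
else:
    # Chunks are processed in parallel and may be merged in a different order.
    executor = processor.futures_executor
    executor_args = {"schema": NanoAODSchema, "workers": 4}

# output = processor.run_uproot_job(fileset, "Events", MyProcessor(),
#                                   executor, executor_args)
```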
## Segfaults when running the ML FW
There is no word to describe it: this is black magic!
If you encounter segfaults, or errors with compiled libraries in general, flip the order of `import torch`, `import tensorflow` and `import ROOT` until it works! It might get tricky when also importing our own modules...
Examples of errors:

`RuntimeError: Unable to find target for this triple (no targets are registered)`

In this case, put `import ROOT` last.
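As an illustration of the workaround (the order below is only one that happened to work, not a guaranteed fix), this means importing `ROOT` after the ML libraries:

```python
# Import-order workaround: ROOT is imported last on purpose.
# Which order works can depend on the installed library versions.
import torch       # noqa: F401
import tensorflow  # noqa: F401
import ROOT        # noqa: F401
```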
Or:

    [libprotobuf FATAL google/protobuf/stubs/common.cc:83] This program was compiled against version 3.9.2 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.20.1). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "bazel-out/k8-opt/bin/tensorflow/core/framework/tensor_shape.pb.cc".)
    terminate called after throwing an instance of 'google::protobuf::FatalException'