# Setting up a fddaq-v5.3.2 software area
18-Jun-2025 - 🔺 Work in progress! 🔺 Steps 1-8 in the first section have been verified to work. The Reference Information and the remaining sections will be (re)verified soon.
Reference information:
- general development: software development workflow, DUNE DAQ Software Style Guide
- suggested Spack commands to learn about the characteristics of an existing software area are available here
- an introduction to the "assets" system, which we use to store files that are not code, is here
- testing: NP04 computer inventory
- other: Working Group task lists, List of DUNE-DAQ GitHub teams and repos
- Main Grafana dashboard, Suggestions for setting up a proxy to see Grafana displays outside CERN
- Tag Collector
- OKS System Description
- DBE Editor Documentation
- Generate Configuration Diagrams
Here are the suggested steps:
1. Create a new software area based on the v5.3.2 candidate release build (see step 1.v for the exact `dbt-create` command to use).
    1. The steps for this are based on the latest instructions for daq-buildtools.
    2. As always, you should verify that your computer has access to `/cvmfs/dunedaq.opensciencegrid.org`.
    3. If you are using one of the np04daq computers and need to interact with GitHub servers (e.g. clone packages), add the following lines to your `$HOME/.gitconfig` file. (Once you do this, there will be no need to activate the web proxy each time you want to run a git command that talks to the GitHub servers, and this means that you won't forget to disable it before running `drunc`...):

        ```
        [http]
            proxy = http://np04-web-proxy.cern.ch:3128
            sslVerify = false
        ```
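        Equivalently (a convenience sketch, not part of the original instructions), you can let `git` write the same settings for you:

        ```
        # write the same proxy settings into $HOME/.gitconfig via git itself
        git config --global http.proxy http://np04-web-proxy.cern.ch:3128
        git config --global http.sslVerify false
        ```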
    4. If you are using one of the np04daq computers and need to install python packages into your python virtual environment using `pip install`, add the following lines to your `$HOME/.config/pip/pip.conf` file. (Once you do this, there will be no need to activate the web proxy each time you want to install a python package, and this means that you won't forget to disable it before running `drunc`...):

        ```
        [global]
        proxy = http://np04-web-proxy.cern.ch:3128
        ```
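        Alternatively (a one-off sketch, not part of the original instructions), the proxy can be supplied for a single `pip` invocation without editing `pip.conf`; here `<package>` is a placeholder:

        ```
        # use the proxy for just this one install
        pip install --proxy http://np04-web-proxy.cern.ch:3128 <package>
        ```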
    5. Here are the steps for creating the new software area:

        ```
        cd <directory_above_where_you_want_the_new_software_area>
        source /cvmfs/dunedaq.opensciencegrid.org/setup_dunedaq.sh
        setup_dbt fddaq-v5.3.1
        dbt-create -b candidate fddaq-v5.3.2-rc3-a9 [work_dir_name]  # work_dir_name is optional
        cd <work_dir_name if you specified one, or fddaq-v5.3.2-rc3-a9 otherwise>
        # or
        dbt-setup-release -b candidate fddaq-v5.3.2-rc3-a9
        ```
    6. Please note that if you are following these instructions on a computer on which the DUNE-DAQ software has never been run before, there are several system packages that may need to be installed on that computer. These are mentioned in this script. To check whether a particular one is already installed, you can use a command like `yum list libzstd` and check whether the package is listed under `Installed Packages`. A small loop for checking several packages at once is sketched below.
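        A minimal sketch for checking several packages at once (the package names here are illustrative only; consult the script mentioned above for the authoritative list):

        ```
        # report whether each package is already installed (names are examples)
        for pkg in libzstd openssl-devel; do
            yum list installed "$pkg" >/dev/null 2>&1 \
                && echo "$pkg: installed" \
                || echo "$pkg: NOT installed"
        done
        ```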
2. Add repositories that have patch-branch changes to the `sourcecode` area. Known examples are listed in this section.

    - Clone the repositories (the following block has some extra directory checking; it can all be copy/pasted into your shell window):

      ```
      # change directory to the "sourcecode" subdir, if possible and needed
      if [[ -d "sourcecode" ]]; then
          cd sourcecode
      fi

      # double-check that we're in the correct subdir
      current_subdir=`echo ${PWD} | xargs basename`
      if [[ "$current_subdir" != "sourcecode" ]]; then
          echo ""
          echo "*** Current working directory is not \"sourcecode\", skipping repo clones"
      else
          # finally, do the repo clone(s)
          git clone https://github.com/DUNE-DAQ/appmodel.git -b patch/fddaq-v5.3.x
          git clone https://github.com/DUNE-DAQ/daqsystemtest.git -b patch/fddaq-v5.3.x
          cd ..
      fi
      ```
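    Once the work-area environment has been sourced (see step 3), you can confirm which branches were checked out with `dbt-info sourcecode`, described in the command reference later on this page:

    ```
    # list the branch names of the repos under sourcecode/
    dbt-info sourcecode
    ```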
3. Setup the work area, install the latest version of `drunc`, and build the software. NB: even if you haven't checked out any packages, the `dbt-build` is necessary to install the rte script passed to the applications started by `drunc`:

    ```
    source env.sh
    dbt-build -j 20
    dbt-workarea-env
    ```
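    A quick sanity check (a sketch, assuming `drunc` was installed into the work area's Python virtual environment):

    ```
    # confirm the drunc entry point resolves inside the work area
    command -v drunc
    # print the installed drunc package name and version
    pip show drunc | head -n 2
    ```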
4. Set the ConnectivityService port in your local copy of the example configurations to a random available port number. This helps avoid conflicts between users running example systems on the same computer at the same time.

    ```
    daqconf_set_connectivity_service_port local-1x1-config config/daqsystemtest/example-configs.data.xml
    ```
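    To double-check the result (a sketch; the exact XML attribute layout may differ), you can grep the edited file for the ConnectivityService entry:

    ```
    # show lines mentioning the connectivity service and its port
    grep -i -n "connectivity" config/daqsystemtest/example-configs.data.xml | head
    ```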
5. The `daqsystemtest` repository contains a sample configuration for a small test system. It can be exercised using the following steps:

    ```
    # from your Linux shell command line...
    drunc-unified-shell ssh-standalone config/daqsystemtest/example-configs.data.xml local-1x1-config ${USER}-local-test

    # from within the drunc shell...
    # Note that it is best to use a different run number each time that you "start".
    boot --no-override-logs
    conf
    start --run-number 101
    enable-triggers
    # wait for a few seconds
    disable-triggers
    drain-dataflow
    stop-trigger-sources
    stop
    scrap
    terminate
    exit

    # Or, you can run everything in one Linux shell command:
    drunc-unified-shell ssh-standalone config/daqsystemtest/example-configs.data.xml local-1x1-config ${USER}-local-test boot --no-override-logs wait 5 conf wait 3 start --run-number 101 enable-triggers wait 10 disable-triggers drain-dataflow stop-trigger-sources stop scrap terminate

    # after you exit drunc, you should wait for several seconds for controller
    # processes to exit before starting another session
    ```
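    Since it is best to use a different run number each time, one convenience (a sketch, not part of the official instructions) is to derive the run number from the clock when using the one-line form:

    ```
    # derive a quasi-unique run number from the current time
    RUN=$(( $(date +%s) % 100000 ))
    drunc-unified-shell ssh-standalone config/daqsystemtest/example-configs.data.xml \
        local-1x1-config ${USER}-local-test \
        boot --no-override-logs wait 5 conf wait 3 start --run-number ${RUN} \
        enable-triggers wait 10 disable-triggers drain-dataflow \
        stop-trigger-sources stop scrap terminate
    ```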
6. Unit tests from software packages that have been cloned into the software development area can be run with the following command:

    ```
    dbt-unittest-summary.sh
    ```

    If this command is run with only `daqsystemtest` in the software area, the results will be rather underwhelming because that package doesn't have any unit tests defined. Additional repositories can be added to the software area using commands like the following (a loop version is sketched after this block):

    ```
    cd $DBT_AREA_ROOT/sourcecode
    git clone https://github.com/DUNE-DAQ/dfmodules.git -b <version>
    cd ..
    dbt-build -j 20
    dbt-unittest-summary.sh
    ```
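    If you want several packages at once, a simple loop works (a sketch; the repository names here are just examples, and `<version>` is a placeholder as above):

    ```
    # clone a few packages at a chosen version before rebuilding
    cd $DBT_AREA_ROOT/sourcecode
    for repo in dfmodules iomanager; do
        git clone https://github.com/DUNE-DAQ/${repo}.git -b <version>
    done
    cd ..
    dbt-build -j 20
    dbt-unittest-summary.sh
    ```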
7. Integration tests can be run straight from the release with

    ```
    pytest -s ${DAQSYSTEMTEST_SHARE}/integtest/minimal_system_quick_test.py
    ```

    In case of errors, a more verbose log can be found in the `/tmp/pytest-of-${USER}` directory. `pytest` creates a new subdirectory for each of our integration/regression tests, so you will see subdirs with names like `pytest-1234`. The application log files will be in "run" directories underneath those subdirs, e.g. `runcurrent`. A quick short-hand is `cd /tmp/pytest-of-${USER}/pytest-current/runcurrent`.
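    For example, to look at the most recently written application logs from the latest test run (this just combines the short-hand path above with standard `ls` options):

    ```
    # jump to the newest test's run directory and list the freshest log files
    cd /tmp/pytest-of-${USER}/pytest-current/runcurrent
    ls -lt | head
    ```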
8. If developing `drunc` or `druncschema`, after these are cloned, run `pip install` in the corresponding `sourcecode` subdirectories, then run `dbt-workarea-env` in the root of the working directory.
9. When you return to working with the software area after logging out, the steps that you'll need to redo are the following:

    ```
    cd <work_dir>
    source ./env.sh
    dbt-build          # if needed
    dbt-workarea-env   # if needed
    ```
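    If you do this often, a small helper function in your shell profile can save typing (purely a convenience sketch; the function name is made up):

    ```
    # re-enter a DUNE-DAQ work area: cd into it and source its environment
    dunedaq_workarea() {
        cd "$1" && source ./env.sh
    }
    # usage: dunedaq_workarea <work_dir>
    ```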
The following commands can be used to learn about the characteristics of an existing software area:

- `dbt-info release` # prints out the release type and name, and the base release name (version)
- `dbt-info package <dunedaq_package_name>` # prints out the package version and commit hash used by the release
- `dbt-info sourcecode` # prints out the branch names of source repos under `sourcecode`, and marks those with local changes with "*"
- `spack find --loaded -N <external_package_name>`, e.g. `spack find --loaded -N boost` # prints out the version of the specified external package that is in use in the current software area
- `spack info fddaq` # prints out the packages that are included in the `fddaq` bundle for the current software area
- `spack info coredaq` # prints out the packages that are included in the `coredaq` (common) bundle for the current software area

Also see here.
The `HDF5LIBS_TestDumpRecord` utility can be used to print out information from the HDF5 raw data files. To invoke it, use

```
HDF5LIBS_TestDumpRecord <filename>
```

The `h5dump-shared` utility can also be used; with `-H` it prints the header (metadata) information from the file without the raw data:

```
h5dump-shared -H <filename>
```
This is another use of the `h5dump-shared` utility: dumping a single data block to a binary file. This case uses the following command-line arguments:

- the HDF5 path of the block we want to dump (`-d <path>`)
- the binary output format (`-b LE`, i.e. little-endian)
- the output binary file name (`-o <output_file>`)
- the HDF5 file to be dumped

An example is:

```
h5dump-shared -d /TriggerRecord00001.0000/RawData/Detector_Readout_0x00000064_WIBEth -b LE -o dataset1.bin test_raw_run001041_0000_df-01_dw_0_20241113T163255.hdf5
```
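To find valid dataset paths to pass to `-d`, you can list the file's structure first, as shown earlier (a sketch; the `grep` pattern is just illustrative):

```
# list dataset names that appear in the file's header information
h5dump-shared -H test_raw_run001041_0000_df-01_dw_0_20241113T163255.hdf5 | grep -i dataset | head
```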
Once you have the binary file, you can examine it with tools like Linux `od` (octal dump), for example:

```
od -x dataset1.bin
```
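Depending on the data format, it can be easier to read the dump as 4-byte words with decimal byte offsets (standard `od` options; the word size here is just a suggestion):

```
# hex words, 4 bytes each, with decimal byte offsets
od -A d -t x4 dataset1.bin | head
```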
There are several integration tests available in the `integtest` directory of the `daqsystemtest` package. To run them, we suggest adding the `daqsystemtest` package to your software area (if not already done), `cd $DBT_AREA_ROOT/sourcecode/daqsystemtest/integtest`, and `cat` the README file to view the suggestions listed within it. To run a test, type

```
pytest -s <test_name>
```

For example:

```
pytest -s minimal_system_quick_test.py
```
When running with `drunc`, metrics reports appear in the `info_*.json` files that are produced, one for each application (e.g. `info_df-01.json`). We can collate these, grouped by metric name, using

```
python -m opmonlib.info_file_collator info_*.json
```

(the default output file is `opmon_collated.json`).
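The collated file is ordinary JSON, so it can be pretty-printed for inspection with the Python standard library (a convenience sketch):

```
# pretty-print the collated metrics file
python -m json.tool opmon_collated.json | less
```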
It is also possible to monitor the system using a graphical interface.