OnLine nodes configuration - PADME-Experiment/padme-fw GitHub Wiki
Installing and configuring on-line nodes with CentOS 7
Initial installation
Language: English (US)
Network & Hostname: Switch ON Eth port connected to LNF public LAN (leave all other ports OFF for the moment)
Date & Time: Europe/Rome - Network time ON - Add ntp1(2,3).lnf.infn.it NTP servers
Keyboard: English (US) and Italian
Installation Destination: select "I will configure partitioning", then:
swap Depends on local memory (e.g. RAM=8GB -> 16GB). Not really needed if lots of RAM is available.
/boot 1024MiB ext4
/var 80GiB ext4
/ 100GiB ext4
/home (all the rest) ext4
For nodes in Control Room (8GB RAM, 512GB HD) use this partitioning:
/boot 1GiB ext4
/boot/efi 1GiB EFI system partition
/ 100GiB ext4
/var 20GiB ext4
/home 311GiB (all) ext4
On disk servers: leave big disk unconfigured here. Create one big xfs partition after OS is installed.
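Once the OS is installed, the chosen layout can be cross-checked with lsblk (a read-only command; device names depend on the hardware):

```shell
# Show disks, partitions, filesystem types and mount points
# to verify the partitioning chosen above
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
```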
Software Selection: "Development & Creative Workstation", then select:
- Additional development
- Compatibility libs
- Development tools
- Emacs
- File and Storage Server (only for disk servers)
- Graphics creation tools
- Hardware monitor utils
- Legacy X Window System compatibility
- NFS client
- PHP support
- Perl for the web
- Platform development
- Python
- Technical writing
Root password: (ask Emanuele)
User: daq (usual password, administrator) <- for DAQ nodes
User: padme (usual password, administrator) <- for Control Room and Off-line nodes
Reboot the node.
On final configuration screen (appears after reboot):
- accept license
- if node has additional network interfaces on any PADME private VLAN, enable them now:
- Select interface and switch it ON
- Go to advanced settings (small wheel on the bottom-right) and:
- Identity -> Enable "Connect automatically"
- IPv4 -> Enable "Use this connection only for resources on its network" (bottom)
N.B. Network interface configuration can also be done later from:
Applications -> System Tools -> Settings -> Network
or with the text-mode tool nmtui
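The same settings can also be applied from the command line with nmcli; a sketch, where the connection name "eno2" is only an example (list the actual names with `nmcli con show`):

```shell
# Sketch: configure a private-VLAN interface with nmcli
# ("eno2" is a placeholder connection name, adjust to your node)
nmcli con mod eno2 connection.autoconnect yes  # "Connect automatically"
nmcli con mod eno2 ipv4.never-default yes      # "Use this connection only for resources on its network"
nmcli con up eno2
```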
Update software packages to latest version (takes several minutes):
Applications -> System Tools -> Software Updates -> Install Updates
N.B. Huawei machines have problems with kernel 3.10.0-862(.x.x): deselect these kernel packages during the update procedure.
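One way to keep yum from pulling the problematic kernels back in at the next update is an exclude line in /etc/yum.conf; a sketch, to be adapted to the exact versions to skip:

```shell
# Sketch: prevent yum from installing the problematic 3.10.0-862 kernels
# (the glob also catches kernel-devel, kernel-headers, etc.;
#  append the line to the [main] section of /etc/yum.conf)
echo "exclude=kernel*-3.10.0-862*" >> /etc/yum.conf
```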
Proxy configuration
If the node is not on the public network (i.e. it is only on private LNF VLANs), configure the proxy service:
Applications -> System Tools -> Settings -> Network -> Network Proxy
Method: Automatic
Config URL: http://www.lnf.infn.it/proxy.pac
yum must also use the proxy service: edit /etc/yum.conf and add the line
proxy=http://squid.lnf.infn.it:3128
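From a root shell the edit can be done in one step:

```shell
# Append the proxy setting to yum's configuration
echo "proxy=http://squid.lnf.infn.it:3128" >> /etc/yum.conf
```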
Repository configuration
Several needed packages can be found in the EPEL repository. To add it do:
[root@localhost ~]# yum install http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
If you need a specific version of the repository package (why?), install it explicitly instead, e.g.
[root@localhost ~]# yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm
CVMFS configuration
Install and configure CVMFS (only for nodes with public access)
[root@localhost ~]# yum install https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm
[root@localhost ~]# yum install cvmfs cvmfs-config-default
[root@localhost ~]# cvmfs_config setup
Create /etc/cvmfs/default.local with the content shown below. N.B. the CNAF proxy should be re-inserted now that it is back on.
[root@localhost ~]# cat /etc/cvmfs/default.local
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,grid.cern.ch
CVMFS_HTTP_PROXY="http://atlassquid1.lnf.infn.it:3128;http://atlassquid.lnf.infn.it:3128;http://squid-cvmfs01.na.infn.it:3128;DIRECT"
CVMFS_QUOTA_LIMIT=30000
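As a convenience, the file can be created in one step with a here-document (same content as shown above):

```shell
# Create /etc/cvmfs/default.local with the PADME CVMFS settings
cat > /etc/cvmfs/default.local << 'EOF'
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,grid.cern.ch
CVMFS_HTTP_PROXY="http://atlassquid1.lnf.infn.it:3128;http://atlassquid.lnf.infn.it:3128;http://squid-cvmfs01.na.infn.it:3128;DIRECT"
CVMFS_QUOTA_LIMIT=30000
EOF
```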
Verify that everything works fine:
[root@localhost ~]# cvmfs_config probe
Probing /cvmfs/atlas.cern.ch... OK
Probing /cvmfs/atlas-condb.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
If this is a disk server, you can now create the big partition with fdisk (format it as xfs by preference) and mount it on /data.
Create mountpoint:
mkdir /data
Add mount point to fstab: edit /etc/fstab and add line
/dev/sdb1 /data xfs defaults 0 0
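The steps above can be sketched end-to-end with parted; this assumes the big disk is /dev/sdb as in the fstab line, and is destructive, so verify the device name with lsblk first:

```shell
# Sketch: create a single xfs partition spanning /dev/sdb and mount it on /data
# WARNING: destructive - check the device name with lsblk before running
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary xfs 0% 100%
mkfs.xfs /dev/sdb1
mkdir -p /data
mount /data   # picks up the /etc/fstab entry added above
```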
If this is an L0 DAQ node, install all CAEN packages (mind the order).
[root@localhost lib]# pwd
/home/daq/Downloads/CAENVMELib-2.50/lib
[root@localhost lib]# sh install_x64
[root@localhost lib]# ls -l /lib/libCAENVME*
lrwxrwxrwx. 1 root root 27 Apr 6 12:06 /lib/libCAENVME.so -> /usr/lib/libCAENVME.so.2.50
-rwxr-xr-x. 1 root root 51512 Apr 6 12:06 /lib/libCAENVME.so.2.50
[root@localhost lib]# pwd
/home/daq/Downloads/CAENComm-1.2/lib
[root@localhost lib]# sh install_x64
[root@localhost lib]# ls -l /lib/libCAENComm*
lrwxrwxrwx. 1 root root 29 Apr 6 12:08 /lib/libCAENComm.so -> /usr/lib/libCAENComm.so.1.2.0
-rwxr-xr-x. 1 root root 20480 Apr 6 12:08 /lib/libCAENComm.so.1.2.0
[root@localhost A3818Drv-1.6.0]# pwd
/home/daq/Downloads/A3818Drv-1.6.0
[root@localhost A3818Drv-1.6.0]# make
[root@localhost A3818Drv-1.6.0]# make install
[root@localhost CAENUpgrader-1.6.3]# pwd
/home/daq/Downloads/CAENUpgrader-1.6.3
[root@localhost CAENUpgrader-1.6.3]# ./configure
[root@localhost CAENUpgrader-1.6.3]# make
[root@localhost CAENUpgrader-1.6.3]# make install
[root@localhost CAENDigitizer_2.9.1]# pwd
/home/daq/Downloads/CAENDigitizer_2.9.1
[root@localhost CAENDigitizer_2.9.1]# sh install_64
[root@localhost CAENDigitizer_2.9.1]# ls -l /lib/libCAENDigitizer*
lrwxrwxrwx. 1 root root 34 Apr 6 12:15 /lib/libCAENDigitizer.so -> /usr/lib/libCAENDigitizer.so.2.9.1
-rwxr-xr-x. 1 root root 238640 Apr 6 12:15 /lib/libCAENDigitizer.so.2.9.1
Reboot to load the A3818 driver and check that everything is working:
[root@localhost ~]# reboot
[root@localhost ~]# lsmod | grep a3818
a3818 21582 0
[root@localhost ~]# dmesg
...........
[ 31.629000] a3818: loading out-of-tree module taints kernel.
[ 31.629043] a3818: module verification failed: signature and/or required key missing - tainting kernel
[ 31.629747] CAEN A3818 PCI Express CONET2 controller driver v1.6.0s
[ 31.629750] Copyright 2013, CAEN SpA
[ 31.629971] pci 0000:04:00.0: irq 71 for MSI/MSI-X
[ 31.629990] pci 0000:04:00.0: can't disable ASPM; OS doesn't have ASPM control
[ 31.630024] Found A3818 - Common BAR at iomem ffffc9000cef0000 irq 0
[ 31.630027] Found A3818 with 4 link(s)
[ 31.630043] found A3818 Link 0 BAR at iomem ffffc9000cf90000 irq 0
[ 31.630050] found A3818 Link 1 BAR at iomem ffffc9000cf92000 irq 0
[ 31.630057] found A3818 Link 2 BAR at iomem ffffc9000cf94000 irq 0
[ 31.630065] found A3818 Link 3 BAR at iomem ffffc9000cf96000 irq 0
[ 31.644234] CAEN A3818 Loaded.
[ 31.647905] CAEN PCIe: 1 device(s) found.
...........
PIPE size
To improve DAQ resiliency, the maximum buffer size of FIFO pipes must be increased from the default 64kiB to a much larger value, e.g. 128MiB.
On all L0 and L1 nodes this is done as root with the commands
[root@localhost ~]# sysctl fs.pipe-max-size=134217728
[root@localhost ~]# sysctl fs.pipe-user-pages-soft=0
Configuration can then be checked with
[root@localhost ~]# cat /proc/sys/fs/pipe-max-size
134217728
[root@localhost ~]# cat /proc/sys/fs/pipe-user-pages-soft
0
Both commands must then be added to the /etc/rc.d/rc.local file, which must then be activated by setting its execution bit:
[root@localhost ~]# chmod +x /etc/rc.d/rc.local
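Putting the steps together (134217728 = 128*1024*1024 bytes, i.e. 128MiB; run as root):

```shell
# Apply the pipe settings now...
sysctl fs.pipe-max-size=134217728    # 128*1024*1024 bytes = 128MiB
sysctl fs.pipe-user-pages-soft=0
# ...and make them persistent across reboots via rc.local
cat >> /etc/rc.d/rc.local << 'EOF'
sysctl fs.pipe-max-size=134217728
sysctl fs.pipe-user-pages-soft=0
EOF
chmod +x /etc/rc.d/rc.local
```

On CentOS 7 the same settings could alternatively go in a drop-in file under /etc/sysctl.d/, which is applied automatically at boot.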
Additional packages
If this is a machine for general DAQ tasks (including L0) then install the following packages:
# MySQL DB access
[root@localhost ~]# yum install mariadb
# Python graphics modules for RunControl
[root@localhost ~]# yum install python-pillow python-pillow-tk
# Python daemon module for RunControl (needs EPEL repository)
[root@localhost ~]# yum install python-daemon
If you need to run DCS modules then you need:
# CAEN HV Wrapper library
[root@localhost CAENHVWrapper-5.82]# pwd
/root/CAEN/CAENHVWrapper-5.82
[root@localhost CAENHVWrapper-5.82]# ./install.sh
# memcached
[root@localhost ~]# yum install libmemcached libmemcached-devel
# YAML
[root@localhost ~]# yum install libyaml libyaml-devel yaml-cpp yaml-cpp-devel
Installing PADME framework
[daq@localhost ~]$ git clone https://github.com/PADME-Experiment/padme-fw Padme
[daq@localhost ~]$ cd Padme/
[daq@localhost Padme]$ git checkout develop
Branch develop set up to track remote branch develop from origin.
Switched to a new branch 'develop'
Enabling DB access
[daq@localhost ~]$ cat .padme-dbaccess.sh
export PADME_DB_HOST=l0padme1
export PADME_DB_PORT=3306
export PADME_DB_USER=padme
export PADME_DB_PASSWD=(ask Emanuele)
export PADME_DB_NAME=PadmeDAQ
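After sourcing the file, connectivity to the database can be checked with the mysql client (the mariadb package installed above); a quick sketch:

```shell
# Source the access file and test the DB connection by listing its tables
. ~/.padme-dbaccess.sh
mysql -h $PADME_DB_HOST -P $PADME_DB_PORT -u $PADME_DB_USER \
      -p$PADME_DB_PASSWD $PADME_DB_NAME -e "SHOW TABLES;"
```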
PADME configuration at login
To enable the PADME environment configuration at login, edit .bashrc and add the following lines:
# Configure PADME environment with standard gcc and root from cvmfs
if [ -f /home/daq/Padme/Configure/padme_init_cvmfs_centos7_root5.sh ]; then
. /home/daq/Padme/Configure/padme_init_cvmfs_centos7_root5.sh
fi
Logout and login again to check if configuration works:
[daq@localhost ~]$ echo $PADME
/home/daq/Padme
[daq@localhost ~]$ echo $PADMEROOT
/home/daq/Padme/PadmeRoot
[daq@localhost ~]$ echo $ROOTSYS
/cvmfs/sft.cern.ch/lcg/app/releases/ROOT/5.34.36/x86_64-centos7-gcc49-opt/root
[daq@localhost ~]$ echo $PADME_DB_HOST
l0padme1
Compilation of PADME packages
After downloading the framework, compile all needed PADME packages:
[daq@localhost ~]$ cd Padme/PadmeRoot/
[daq@localhost PadmeRoot]$ make
[daq@localhost PadmeRoot]$ mv PersistencyDict_rdict.pcm lib/
[daq@localhost ~]$ cd Padme/PadmeDAQ/
[daq@localhost PadmeDAQ]$ make
[daq@localhost ~]$ cd Padme/Level1/
[daq@localhost Level1]$ make
Creating DAQ directory
[daq@localhost ~]$ mkdir DAQ
[daq@localhost ~]$ cd DAQ/
[daq@localhost DAQ]$ mkdir backup cfg log run setup test
[daq@localhost DAQ]$ mkdir data rawdata
[daq@localhost DAQ]$ ln -s /home/daq/Padme/RunControl/code/RunControl.py RunControl
The setup directory
[daq@localhost DAQ]$ cd setup/
[daq@localhost setup]$ cd TestADC201711/
[daq@localhost TestADC201711]$ ls -l
total 8
lrwxrwxrwx. 1 daq daq 9 Nov 13 08:57 board_00.cfg -> board.cfg
lrwxrwxrwx. 1 daq daq 9 Nov 13 08:57 board_01.cfg -> board.cfg
...........
lrwxrwxrwx. 1 daq daq 9 Nov 22 16:43 board_31.cfg -> board.cfg
-rw-r--r--. 1 daq daq 605 Nov 13 08:56 board.cfg
-rw-r--r--. 1 daq daq 245 Nov 22 16:43 run.cfg
This shows an example of the setup files:
[daq@localhost TestADC201711]$ cat run.cfg
# GUI configuration for new ADC tests of November 2017
boardid_list 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 24
[daq@localhost TestADC201711]$ cat board.cfg
# Board configuration for initial test
startdaq_mode 0
# Use TRG IN/TRG OUT mode
trigger_mode 0
group_enable_mask 0xf
channel_enable_mask 0xffffffff
# Set sampling frequency to 1GHz (0:5GHz, 1:2.5GHz, 2:1GHz)
drs4_sampfreq 2
# +/- 0.5V
#offset_global 0x8f00
# 0 to 1V
#offset_global 0xc260
# -1V to 0V
offset_global 0x6000
# Change file every hour (should not be necessary)
file_max_events 100000
file_max_duration 3600
# Wait 1s between acquisitions (trigger rate is very low)
daq_loop_delay 1000000
zero_suppression 0
drs4corr_enable 1
#1GHz
post_trigger_size 75
##5GHz
#post_trigger_size 45
[daq@localhost TestADC201711]$ cd ..
[daq@localhost setup]$ echo TestADC201711 > last_used_setup
Network ports for PadmeMerger
Nodes running PadmeMerger (usually the disk servers) need a change to their firewall rules to open the ports on which they will receive data from the L0 and Trigger nodes.
Ports used are in the range 31400-31499 (offsets 0-29: ADC boards, 99: trigger).
As root, execute the following commands:
firewall-cmd --add-port=31400-31499/tcp --permanent
firewall-cmd --add-port=31400-31499/tcp
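The command is given twice because --permanent only updates the saved configuration, while the plain form changes the running firewall. The result can then be checked with:

```shell
# List the ports currently open in the running firewall;
# the range 31400-31499/tcp should appear
firewall-cmd --list-ports
```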
NFS configuration
Disk servers export the big /data partition via NFS. The nfs-utils package is needed (it should already be available in our configuration).
On disk servers enable and start all services needed by NFS:
systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap
systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap
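A quick loop can then confirm that all four services are up:

```shell
# Verify that the NFS-related services are running
# (each should report "active")
for s in rpcbind nfs-server nfs-lock nfs-idmap; do
    echo -n "$s: "; systemctl is-active $s
done
```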
Edit file /etc/exports
and add this line (192.168.60.* is the padmedaq VLAN 410):
/data 192.168.60.*(rw,sync,no_root_squash,no_all_squash)
Then restart the NFS service:
systemctl restart nfs-server
Finally open firewall ports used by NFS:
firewall-cmd --permanent --zone=public --add-service=nfs
firewall-cmd --permanent --zone=public --add-service=mountd
firewall-cmd --permanent --zone=public --add-service=rpc-bind
firewall-cmd --reload
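From a client node the export can be verified before mounting; this assumes the disk server address 192.168.60.2 used in the fstab examples on this page:

```shell
# List the NFS exports offered by the disk server;
# /data should appear with the padmedaq VLAN restriction
showmount -e 192.168.60.2
```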
All client nodes (e.g. the monitor node l0padme3) can now mount the exported partitions.
On all clients first create the needed mount points (only those actually mounted on that client):
mkdir -p /home/daq/DAQ
mkdir -p /mnt/l0padme1/dcs
mkdir -p /mnt/l0padme2
mkdir -p /mnt/l1padme1/data
mkdir -p /mnt/l1padme2/data
Then edit file /etc/fstab and add these lines (comments on their own lines, since fstab does not support trailing comments):
# Main DAQ directory: needed everywhere
192.168.60.2:/data/daq/DAQ /home/daq/DAQ nfs defaults 0 0
# DCS directory: needed by l0padme3 for backup purposes
192.168.60.2:/data/dcs /mnt/l0padme1/dcs nfs defaults 0 0
# l1padme1(3) data directory: PADME DAQ data
192.168.60.5:/data /mnt/l1padme1 nfs defaults 0 0
# l1padme2(4) data directory: PADME DAQ data
192.168.60.6:/data /mnt/l1padme2 nfs defaults 0 0
# l0padme2 data directory: Chamber DAQ data
192.168.60.8:/data /mnt/l0padme2 nfs defaults 0 0
Finally mount new filesystems with
mount -a
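The mounted NFS filesystems can then be listed with:

```shell
# Show only the NFS mounts and their free space
df -h -t nfs -t nfs4
```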
To test disk access speed on a mounted partition, go to a mounted directory on the client node,
e.g. /mnt/l1padme1/data/rawdata
and issue the command (it creates a 1GiB file)
> dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 10.3556 s, 104 MB/s
TeamViewer
TeamViewer is needed on terminals in Control Room to get access to some Target PCs in BTF.
As root:
wget https://download.teamviewer.com/download/linux/teamviewer.x86_64.rpm (or use a browser)
yum install teamviewer.x86_64.rpm
As user:
teamviewer
Menu Extras -> Options -> Proxy settings -> Configure
Use manual proxy - Proxy IP: 193.206.84.225 Port: 3128
Apply - Ok
Then wait for green light in bottom left and "Ready to connect"